The Burroughs large systems architecture of the 1960s and 1970s (B6500, B6700, etc.) did it. Objects were called “arrays” and there was hardware support for allocating and deallocating them in Algol, the native language. These systems were initially aimed at businesses, I believe (Ford, for example, was a big customer, for such things as tracking which parts were flowing where), but later they managed to support FORTRAN with its unsafe flat memory model.
These were large machines (think of a room 20 m square), and with explicit hardware support for Algol operations, including the array handling and display registers for nested functions, they were complex, power hungry, and had a lot to go wrong. Eventually, with the technology of the day, they became uncompetitive against simpler architectures. By then, too, people wanted to program in languages like C++ that were not supported.
The Photo (v2) app gives you a choice of using Apple’s converters or “Serif” converters. But, when last I looked, lens corrections were not available with the Apple converters.
Towards the end TFA claims that Apple’s bundled command line utilities, including zsh and vim, put their dotfiles in ~/.config. They don’t. They put them in the traditional BSD place, the user’s home directory ~/. Looking at mine now, I see .bash_login, .emacs (wow! that’s old), .lldb, .lldbinit, .vimrc, .swiftpm, .z{profile,env,rc} and a few others. I see no ~/.config directory.
My personal practice when writing command line utilities for macOS is to use the macOS API to write settings (previously known as “preferences”) into ~/Library/Preferences, so they can interoperate with any GUI versions of them I might like to write, and for the utilities themselves to have command line options to control those settings. As a sibling comment suggests, you do need to avoid namespace collisions. You can use a reverse-DNS name, or some companies are big enough to just use their company name. The settings appear as .plist files but are actually maintained by a daemon – which nicely avoids race conditions when multiple processes update settings.
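To make the shape of this concrete: the daemon-managed preference files are ordinary property lists keyed by a reverse-DNS domain. Here is a minimal sketch using Python's stdlib plistlib (the domain com.example.mytool and its keys are invented; on macOS you would go through the defaults/UserDefaults/CFPreferences APIs rather than touching the file, since the cfprefsd daemon owns it):

```python
import io
import plistlib

# Hypothetical settings for a CLI tool, keyed under a reverse-DNS domain.
# On macOS this would live at ~/Library/Preferences/com.example.mytool.plist,
# but it should be read and written via the preferences API, not directly,
# because the cfprefsd daemon owns the file.
settings = {"ColorOutput": True, "HistorySize": 500}

# Serialise in the binary plist format the system uses on disk.
buf = io.BytesIO()
plistlib.dump(settings, buf, fmt=plistlib.FMT_BINARY)

# Round-trip: the daemon-managed .plist decodes back to the same dict.
restored = plistlib.loads(buf.getvalue())
print(restored["HistorySize"])  # → 500
```

The reverse-DNS domain is what keeps two vendors' tools from clobbering each other's settings, since every key lives under its own domain rather than in one shared file.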
If I were writing a portable utility which had a dotfile with a documented format the user was expected to edit, I would make it a dotfile under the home directory.
~/Library/Application Support is really more for per-user static data, like presets and templates, things that are known in a GUI by a string in some dialogue but not necessarily thought of by the user as a file.
> My personal practice when writing command line utilities for macOS is to use the macOS API to write settings (previously known as “preferences”) into ~/Library/Preferences
This would mean that essentially all edits to the configuration must be performed by the CLI tool itself? Because macOS preferences aren't really intended to be edited directly by the user. That feels like a totally different category of configuration than what we're discussing here. Though it certainly provides some nice functionality.
Yes, ideally the tool should edit its own preferences. A quick and dirty tool might leave the user to run the macOS "defaults" command line utility themselves. It’s certainly no worse than looking up the dotfile format and firing up vi.
In what world is this user-hostile behavior ideal? There is no way you can write a CLI tool that is more ergonomic for editing a plain-text config format than a good code editor!
> It’s certainly no worse
It's much worse: you don't get persistent undo across previous edits, any indication of what's changed versus committed, syntax highlighting, regex search, etc., etc.
> It’s certainly no worse than looking up the dotfile format and firing up vi
It's a lot worse, IMO. We're all fluent in our text editor of choice, and we all know the rough format conventions of dotfiles.
Whereas I'd certainly have to look up the man page of the defaults tool, and IIRC it only supports editing one key at a time, which sucks for a dense config file.
If your tool provides a TUI/GUI for editing preferences, that'd be a different story.
Yeah nah. If your configuration has a schema (e.g. JSON Schema), AI tools can do a magnificent job turning free-form requests in natural language into valid configuration. Even if your configuration does not have a schema, people can refer to snippets of configuration written by other people on the internet.
Don't force your users to go through some custom sequence of CLI commands that they first have to spend 5 minutes learning. There should be no learning curve for a one-time setup.
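For what it's worth, even a very small schema gives both people and tools something concrete to validate against. A sketch of what that might look like (all property names here are invented):

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "properties": {
    "theme": { "enum": ["light", "dark", "auto"] },
    "historySize": { "type": "integer", "minimum": 0 }
  },
  "additionalProperties": false
}
```

With `additionalProperties` set to false, a typo'd key is rejected outright instead of being silently ignored, which is exactly the kind of mistake a one-time setup tends to produce.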
Vim, Emacs, and Swift Package Manager do in fact support XDG, and so does Git. But Vim doesn't create config files by itself, and Emacs is no longer distributed by Apple.
The article suggests a “simple, well-labeled rotary control ... would accomplish the same function” as a power button and “prevent the user from accidentally activating the control in a way that is no longer hidden”. But a rotary control itself has a serious problem, in that it can mislead the user as to the state, on or off. If the power has failed and the machine does not restart when it comes back, the rotary control will remain in the ON state when the machine is off. From memory, Donald Norman called this kind of thing “false affordance” and gave the example of a door that needed to be pulled having a push-plate on it.
So my iMac, among many other devices like the light I wear on my head camping, has a button which you long-press to turn on. It is a very common pattern which most people will have come across, and it’s reasonable to expect people to learn it. The buttons are even labelled with an ISO standard symbol which you are expected to know.
As a longtime user of ancient Macs, I find the modern iMac's longpress-powerbutton frustrating. It usually takes me multiple tries to figure out if I pressed it long enough, or too long, or not enough. When I have pressed it just right, the iMac still doesn't respond immediately in a visible or audible way: there's just enough delay for me to start fiddling with the power button again.
Compare to the momentary buttons on ye olde Macs that existed on both the case and the keyboard, which gave immediate feedback. The one on the case also had a longpress action, but only as an override.
"Nope, didn't want this on right now. Bumped it by accident. Long-press!" is easier for me to get behind.
> If the power has failed and the machine does not restart when it comes back, the rotary control will remain in the ON state when the machine is off.
A better example may be a solenoid button, used on industrial machinery which should remain off after a power failure, which stays held in when pushed, but pops out when the power is cut. They are not common outside of such machinery, because they're extremely expensive. In the first half of the 20th century, they also saw some use in elevators: https://news.ycombinator.com/item?id=37385826
I have never looked at a fan that isn't running and been confused by the switch being set to “on”. The affordance is that it immediately tells me that the switch is on, so the problem is somewhere else. Compared to the typical phone's “hold for 3 seconds to turn on, hold for 10 seconds to enter some debug mode”, this is a breath of fresh air when anything unusual is going on with the device.
I live in a country where the socket on the wall the fan is plugged into also has a switch, which could be on or off. So to make the fan go around, both switches must be on; the user needs to know about and have a mental model of serial circuits.
If it’s just a button, the user only has to know two things: turn on the switch at the wall socket when plugging in, which becomes habit from childhood; and press and hold the button on the fan to make it go, which I suspect most children in 2025 can manage. These two things don’t interact and can be known and learned separately.
As you said, the knob’s position tells you about the switch. But it’s the fan the user is interested in, not the switch.
(BTW, if the fan has a motion sensor you can’t tell it’s off by the fact the blades aren’t turning. There’s probably a telltale LED.)
There’s a little-used alternative to through-running as described in TFA, which is to turn trains around in an underground loop in the city centre and send them out again, usually on the same line they came from. This is done in Sydney (1920s) and Melbourne (1970s).
In the case of Melbourne, the design was chosen because of the extremely lop-sided development of the city, whose centre (“the city”) lies at the head of a bay, with most of the population and population growth to the south and east of the city in the 1970s. This is true to a lesser extent in Sydney, bearing in mind that its city loop was designed before the Harbour Bridge was built.
Melbourne’s current through-running project, the Metro Tunnel, appears in the first table, but it doesn’t belong in the second table of cities which could greatly benefit from future through-running projects. All 16 lines can access the city loop (which has 4 parallel tunnels). Due to capacity constraints, most of the time one of those 16 lines terminates at a main city station, and two others through run with each other. The Metro Tunnel will relieve the city loop, similar to the Munich trunk amplification project described in TFA; once it opens all lines will either through-run or go around a loop.
Locals are used to the City Loop but some visitors find it hard to navigate. Each line runs either clockwise or anticlockwise. For historical reasons two of the four loops still reverse direction in the middle of the day.
It’s true that a further through-running tunnel (Metro Tunnel 2) and conversion of two of the four City Loop tunnels to through-running (City Loop Reconfiguration) appeared on the long-range plans from 2012. But these are not required yet; the city is building three other major rail projects first.
It could also be invoked on VAX/VMS under the name “MUNG”, which was said to stand for “Modify Until No Good”. I think that style of invocation did something slightly different than using the TECO command, like creating a new file.
We had TECO on our DEC VAXes running VMS in the early-to-mid 1980s. It had a “VT52” mode (as you say, a macro), and at least one of the terminals on my desk supported those escape sequences. Wikipedia says the VT52 terminal was made from 1975 to 1978, so those macros were probably fairly early. By this stage, TECO distribution was fragmented, with various incompatible versions around, so some probably lacked that macro or other full-screen macros.
Although I had a terminal which could run TECO full screen, I found that too slow and just used it in line mode. You could conveniently reprint surrounding lines by adding a few characters to the end of a command (I still have HT <ESC> <ESC> burned into my brain). The VT52 macro had you typing commands into line 24, like an emacs minibuffer.
I never used it for all my editing, but it excelled at certain things.
The version of TECO we had was the one which shipped with VMS. At some point later on, I think, DEC stopped shipping it, and we migrated to a TECO-inspired full-screen editor developed by another university. Once that arrived, we hard-core TECO users, all 4 of us, were won over within a week.
A lot of the "elite" compsci students at my college in the early-ish 80s were still using TECO on our DECSYSTEM 2060 but some of the cool-kids were trying that new Emacs thing ;)
Me, being just a lowly compsci minor, I preferred the full-screen editor called FOXE. It was very simple to use and did the job fine for writing and editing programs of the length typical of homework assignments. I don't recall enough of the particulars to comment on search, replace, etc.
Unfortunately, there is like zero internet info on it beyond: "Little (if any) information is available for this visual editor available for TOPS-20 in the early 1980's. It was similar in appearance to the then new EMACS but had a far simpler command structure."
The tunnel is signalled only for CBTC. The outer ends of the line have only the old fixed-block signalling. The section from Westall in the south east to (eventually) Sunshine in the north has both. There is a dedicated model of train, the HCMT, which is fitted with the CBTC and is the only type that can run through the tunnel during normal operation. When an HCMT passes Westall heading towards the tunnel, the lamps on the fixed block signalling ahead of it go out. When a diesel regional train has HCMTs ahead of it on the next fixed block, it will see the lamps lit for Stop.
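The rule as described can be sketched as a toy decision function (purely illustrative; this is not how the actual interlocking is implemented):

```python
# Toy model of the mixed-signalling rule described above: fixed-block
# lamps go dark ahead of a CBTC-equipped HCMT, but show Stop to a
# following non-equipped (e.g. diesel regional) train.
def lamp_for(train_type: str, hcmt_in_block_ahead: bool) -> str:
    if train_type == "HCMT":
        # HCMTs run under CBTC; the lineside lamps ahead of them
        # are extinguished.
        return "dark"
    # A non-equipped train sees the fixed-block aspect; an HCMT
    # occupying the next block means Stop.
    return "stop" if hcmt_in_block_ahead else "proceed"

print(lamp_for("HCMT", False))   # → dark
print(lamp_for("diesel", True))  # → stop
```

The point of the dual fitment between Westall and Sunshine is exactly this asymmetry: equipped and non-equipped trains share track, each seeing only the signalling system it understands.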
Trains have been running this way in normal passenger service for months now, though I think it’s only in the last few weeks they’ve been testing it with HCMTs entering the tunnel at Hawksburn, with diesels behind them, instead of going to South Yarra. (Currently, the trains with actual passengers take the South Yarra route at that junction.)
The HCMTs run only on this line, from Sunbury to East Pakenham/Cranbourne. They have their own new depot at East Pakenham so they don’t have to go elsewhere for stabling and maintenance. And they probably can’t; just for them to go through the City Loop, as is temporarily required, that tunnel had to be resignalled (the old signal locations weren’t visible from the cab).
They did run an old Comeng train through the Metro Tunnel, without any signalling, to test clearance for a track maintenance train.
I worked on this system during its development in the 1980s.
There were actually two PDP-11s; the one running the platform displays ran locally written software.
The “weird control sequences” sent between the PDP-11 and the platform displays were HDLC, a synchronous protocol whose SDLC variant was then common in IBM SNA networks. This was actually a decent technical solution, because they only had to run one coax cable down each train line and the PIDS could sit there watching for their poll slot. The hardware for HDLC would have been commodity, whereas fibre optics or carrier-sense networking over long runs were not.
The other PDP-11 that ran the signals (the “train describer”) could plot the position of trains on glass TTY terminals using escape sequences (VT100, same as in xterm today) so our PDP-11 pretended to be one of those and screen-scraped. So I was told, when I asked the guy who wrote it.
All of that was done before I got there. I was called in with 6 weeks to go before commissioning to fix the bit in the middle that recalculated train positions and arrival times.
> These were large machines (think of a room 20m square) and with explicit hardware support for Algol operations including the array stuff and display registers for nested functions, were complex and power hungry and with a lot to go wrong. Eventually, with the technology of the day, they became uncompetitive against simpler architectures.
With today’s technology, it might be possible.