A file is arguably N resources, where N is the number of granular elements of your file. Probably pages, but it could be blocks or bytes too, depending on where its canonical copy lives in storage.
Point taken. In any case, single-writer helps with concurrency and consistency. For V7 Unix, I think one solution could be to split the file into blocks, each with a single writer. Admittedly this makes reads of the whole file more costly, so one has to be careful about the block size.
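V7 had no file locking to speak of, but to make the per-block single-writer idea concrete, here is a minimal sketch in modern POSIX terms, using one advisory fcntl(2) byte-range lock per block (the 4 KB block size and the file name are arbitrary):

    /* One advisory write lock per fixed-size block, so each block has at
     * most one writer at a time. fcntl() byte-range locks are a POSIX
     * facility, not something V7 offered; the block size is arbitrary. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define BLKSZ 4096

    /* Lock block `blkno` for writing, overwrite it, then unlock. */
    static int write_block(int fd, long blkno, const char *buf)
    {
        struct flock fl = {
            .l_type = F_WRLCK,
            .l_whence = SEEK_SET,
            .l_start = blkno * BLKSZ,
            .l_len = BLKSZ,
        };
        if (fcntl(fd, F_SETLKW, &fl) < 0)   /* wait until we own this range */
            return -1;
        ssize_t n = pwrite(fd, buf, BLKSZ, blkno * BLKSZ);
        fl.l_type = F_UNLCK;
        fcntl(fd, F_SETLK, &fl);            /* release the range */
        return n == BLKSZ ? 0 : -1;
    }

    int main(void)
    {
        int fd = open("data.blk", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }
        char buf[BLKSZ];
        memset(buf, 'x', BLKSZ);
        if (write_block(fd, 3, buf) < 0)    /* rewrite block 3 only */
            perror("write_block");
        close(fd);
        return 0;
    }

A reader that wants a consistent view of the whole file then has to take (or at least respect) the locks block by block, which is where the block-size trade-off shows up.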
I imagine a more serious bottleneck to heavy random-access concurrency on files large enough to be worth splitting would be disk performance, since a busy PDP-11 or VAX-11/7xx isn't likely to have much physical RAM free for buffer caching.
And for smaller files able to be serviced entirely from cache, I don't imagine lock contention as a serious issue on a single-processor system like the ones that ran V7/32V.
Average seek time / rotational latency (years estimated from manual copyright dates):
RL02 (1978): 55 ms / 12.5 ms
RK07 (1978): 36.5 ms / 12.5 ms
RA81 (1982): 28 ms / 8.3 ms
RA92 (1989): 16 ms / 8.3 ms
Note that the RL02 (and V7) and RA92 mentioned in the article are separated by about a decade.
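Back of the envelope, taking those figures as the cost of a single random access and ignoring transfer time, the drives above top out somewhere around 15 to 40 random I/Os per second:

    /* Rough estimate of random I/Os per second from the figures quoted
     * above (average seek + average rotational latency), ignoring
     * transfer time and controller overhead. */
    #include <stdio.h>

    int main(void)
    {
        struct { const char *drive; double seek_ms, rot_ms; } d[] = {
            { "RL02 (1978)", 55.0, 12.5 },
            { "RK07 (1978)", 36.5, 12.5 },
            { "RA81 (1982)", 28.0,  8.3 },
            { "RA92 (1989)", 16.0,  8.3 },
        };
        for (int i = 0; i < 4; i++)
            printf("%s: ~%.0f random I/Os per second\n",
                   d[i].drive, 1000.0 / (d[i].seek_ms + d[i].rot_ms));
        return 0;
    }

That works out to roughly 15, 20, 28 and 41 random I/Os per second respectively, which is why lock contention on a uniprocessor was the least of anyone's worries.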
But those N resources are not independent, which is why it makes sense to think of them as a single resource. For example, if you were to remove the first byte from the file, or prepend a byte, all the other bytes' addresses would change. And arguably, we already have an interface to logically group different filesystem objects together yet retain the ability to address them individually: it's called a directory.
I'm sure you could define a more granular format to address all elements within a file individually (for example, a lot of files in a typical Unix /etc directory have rows and fields), but people would call that interface an object store rather than a filesystem.
They are independent; it's just their assignment to addresses that is not. That's why databases still gain from having multiple writers, SQLite being the exception.
But those interfaces to manipulate said parts [to allocate bytes/pages before other bytes/pages, to allocate bytes/pages in the middle of other bytes/pages, to prune bytes/pages before other bytes/pages] are non-portable between *NIX implementations, usually limited to certain filesystems and to implementation-specific minimum granularities.
So just as the OS pretends filesystems live on a tape, the filesystem pretends that files live on a tape.
And the tape splicer is optional, behind non-standard interfaces (logical volume management, fallocate, etc.).
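For concreteness, Linux's fallocate(2) is one such non-standard splicer. A sketch assuming a filesystem that supports it (e.g. ext4 or XFS) and a 4096-byte filesystem block size, since the offset and length must be block-aligned:

    /* Linux-only: "splice the tape" by collapsing a range out of a file.
     * Not portable; offset and length must be multiples of the filesystem
     * block size, and only some filesystems (e.g. ext4, XFS) support it. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/falloc.h>   /* FALLOC_FL_* flags, for older glibc */
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 1;
        }
        int fd = open(argv[1], O_RDWR);
        if (fd < 0) { perror("open"); return 1; }
        /* Remove the first 4096 bytes; every later byte's offset shifts
         * down, which is exactly the address-shifting problem above. */
        if (fallocate(fd, FALLOC_FL_COLLAPSE_RANGE, 0, 4096) < 0)
            perror("fallocate(FALLOC_FL_COLLAPSE_RANGE)");
        close(fd);
        return 0;
    }

FALLOC_FL_INSERT_RANGE is the matching splice-in operation, with the same alignment and filesystem restrictions.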
Then don't do that part: allocate at the end and only rewrite existing blocks rather than deleting them. It's like how the fact that you can't easily sbrk(2) new memory at the beginning of the heap doesn't mean you can't have multiple threads, even though the heap is the very resource they're contending for.
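In code, the discipline looks something like this: each writer owns a disjoint block and rewrites it in place with pwrite(2), and new space only ever appears at the end. (File name, block size and thread count are made up for illustration; compile with -pthread.)

    /* Writers rewrite fixed, disjoint blocks in place; nothing is ever
     * deleted or inserted in the middle, so no writer moves another
     * writer's bytes. */
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define BLKSZ 4096

    static int fd;

    static void *rewrite_block(void *arg)
    {
        long blkno = (long)arg;
        char buf[BLKSZ];
        memset(buf, 'A' + (int)blkno, BLKSZ);
        /* pwrite() takes an explicit offset, so the writers never share a
         * file offset and never touch each other's blocks. */
        if (pwrite(fd, buf, BLKSZ, (off_t)blkno * BLKSZ) != BLKSZ)
            perror("pwrite");
        return NULL;
    }

    int main(void)
    {
        fd = open("data.blk", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }
        /* "Allocate at the end": grow the file once, up front. */
        if (ftruncate(fd, 4 * BLKSZ) < 0) { perror("ftruncate"); return 1; }
        pthread_t t[4];
        for (long i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, rewrite_block, (void *)i);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        close(fd);
        return 0;
    }

Growing the file is the one operation that still wants a single owner, just as sbrk(2) wants to be called from one place.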
Well, V7 UNIX is rather arbitrary as a reference point for comparison. There was what we now call V1 UNIX, circa 1971/72, so that's certainly 50+ years old. Compared to what's shown in the article, some of the data structure fields and constants would be different and/or missing in those still-earlier versions. DIRSIZ would be 8, for example.
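For reference, the V7 directory entry (sys/dir.h) was, give or take whitespace:

    #define DIRSIZ  14
    struct  direct
    {
        ino_t   d_ino;
        char    d_name[DIRSIZ];
    };

with ino_t a 16-bit value, so 16 bytes per entry; the earliest editions used 8-byte names instead.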