Thanks! On my cellphone not even enough of the UI was working for me to discover those URLs. I suspect a certain amount of error recovery is in order for wgetting all 2238 images. 2000 pixels seems to be the maximum resolution available, which works out to under 100 dpi. A few of the images seem to have been uploaded to https://commons.wikimedia.org/wiki/Category:Codex_Atlanticus.
I'm done downloading now (with a sleep of 1 second between pages), and I have 1064125470 bytes of JPEG files (just over a gigabyte), which is a very reasonable size to torrent. I'll see if I can put together a torrent and upload it to the Archive and Commons...
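One way to do that, as a rough sketch: the mktorrent and internetarchive (ia) command-line tools would do it, but the item identifier, tracker URL, and directory name below are placeholders I made up, and I haven't tested this.

# Hypothetical sketch: build a torrent of the download directory and push
# the JPEGs to an archive.org item. All names here are placeholders.
mktorrent -a udp://tracker.opentrackr.org:1337/announce \
    -o codex-atlanticus-2000px.torrent codex-atlanticus/
ia upload codex-atlanticus-2000px codex-atlanticus/*.jpg \
    --metadata="mediatype:image" --metadata="title:Codex Atlanticus scans"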
In this case presumably the main difference is not PowerShell vs. bash but iwr vs. wget? Because I think this is roughly equally bad (untested):
for page in {1..1119}; do
iwr "https://codex-atlanticus.ambrosiana.it/assets/2000/000R-$page.jpg" -OutFile "000R-$page.jpg"
iwr "https://codex-atlanticus.ambrosiana.it/assets/2000/000V-$page.jpg" -OutFile "000V-$page.jpg"
done
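For comparison, here's a wget sketch of the same loop (also untested, and reusing the same URL pattern, which I haven't verified): --tries and --waitretry give the error recovery I mentioned, -c resumes interrupted files, and the sleep is the 1-second delay between pages.

# Untested sketch of the wget equivalent, with retries, resume, and a
# 1-second politeness delay between pages.
for page in {1..1119}; do
    wget -c --tries=5 --waitretry=5 "https://codex-atlanticus.ambrosiana.it/assets/2000/000R-$page.jpg"
    wget -c --tries=5 --waitretry=5 "https://codex-atlanticus.ambrosiana.it/assets/2000/000V-$page.jpg"
    sleep 1
done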
Also, until recently bash didn't have the {42..53} syntax; you had to use `seq`. Unix Power Tools covered an alternative to `seq` called `jot`, because `seq` wasn't standard: https://docstore.mik.ua/orelly/unix/upt/ch45_11.htm. That section was by ORA author and sysadmin Linda Mui (https://www.oreilly.com/pub/au/268), but I don't know whether she wrote `jot` or just popularized it.
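For reference, the three spellings of the page-number loop:

# Three ways to generate the page numbers 1..1119 for the loop above:
for page in {1..1119}; do :; done        # bash 3.0 and later
for page in $(seq 1 1119); do :; done    # GNU coreutils seq
for page in $(jot 1119); do :; done      # BSD jot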
I usually do the thing that rarely works well for you, but it works decently for me. You get one page per image, and the images aren't recompressed or altered at all.
Maybe what rarely works well for NoMoreNicksLeft is having a gigabyte of JPEGs in a single HTML chapter inside the epub? In that case you could do something like divide the files into 373 "chapters" of 6 pages each?
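A hypothetical sketch of that split, assuming the image filenames are listed in reading order in pages.txt (the chapter filenames are made up, and this only generates the chapter HTML, not the rest of the epub):

# Write 373 small chapter files of 6 <img> tags each instead of one huge one.
split -l 6 pages.txt chunk.
n=0
for chunk in chunk.*; do
    n=$((n+1))
    {
        printf '<html><body>\n'
        while read -r img; do printf '<img src="%s"/>\n' "$img"; done < "$chunk"
        printf '</body></html>\n'
    } > "chapter-$(printf '%03d' "$n").xhtml"
done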
One of the fragmentary editions I linked on the Archive uses the .cbr comic book archive format; perhaps that is a better format than .epub for high-resolution scans of every page?
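The zip-based sibling, .cbz, is even easier to produce, since it's just a zip of the page images and readers show them sorted by filename (this assumes the JPEG filenames already sort in page order):

# Pack the page images into a comic-book archive; a .cbz is just a zip.
zip codex-atlanticus.cbz *.jpg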