I have been using an iPad Pro as my primary computing device for more than a year now. It works pretty wonderfully for my use case. I just finished writing my dissertation on it last week, actually. I have an old MacBook that I use for specific tasks once or twice a week, but otherwise my iPad, pencil, and the keyboard/mouse/5K monitor on my desk work really, really well — better than a laptop by far for my workflow: Zotero/Markdown/MSWord/Logseq/Obsidian.
I’m very happy to not use a general purpose computer anymore, to be honest.
I think the fact that you have to use a MacBook once or twice a week for specific tasks says everything.
The iPad Pro has been literally as fast as a MBA and supported KB/M input alongside touch and pen for years but you still run into weird edge cases where it's either too janky and time consuming to do on the iPad (e.g. file management) or some really basic app is just missing.
You know that the RAM in these machines has more differences than similarities with the "RAM" in a standard PC? Apple's SoC RAM is more or less part of the CPU/GPU package and is super fast. And for obvious reasons it cannot be added to.
Anyway, I manage a few M1 and M3 machines with 256GB/8GB configs and they all run just as fast as the 16GB and 32GB machines EXCEPT for workloads that need more than 8GB for a single process (virtualization) or workloads that need lots of video memory (Lightroom can KILL an 8GB machine that isn't doing anything else...)
The "8GB is stupid" discussion isn't "wrong" in the general case, but it is wrong for maybe 80% of users.
> EXCEPT for workloads that need more than 8GB for a process
Isn't that exactly the upthread contention? Apple's magic compressed-swap management is still swap management: it replaces O(1) fast(-ish) DRAM accesses with page-decompression operations that cost thousands of cycles or more. It may be faster than storage, but it's still extremely slow relative to a DRAM fetch. And once your working set gets beyond your available RAM, you start thrashing just like VAXen did on 4BSD.
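To put very rough numbers on that (purely illustrative orders of magnitude, not measurements of Apple's actual implementation), here's a tiny Perl back-of-envelope:

```perl
#!/usr/bin/perl
# Back-of-envelope comparison: DRAM fetch vs. compressed-swap fault vs. SSD swap fault.
# All latencies below are illustrative orders of magnitude, not measured values.
use strict;
use warnings;

my $dram_fetch_ns = 100;      # rough cost of a cache-missing DRAM access
my $decompress_ns = 5_000;    # rough cost of decompressing a 4 KB swapped page
my $ssd_fault_ns  = 100_000;  # rough cost of faulting the page in from SSD swap

printf "compressed swap vs. DRAM: ~%dx slower\n", $decompress_ns / $dram_fetch_ns;
printf "SSD swap vs. DRAM:        ~%dx slower\n", $ssd_fault_ns  / $dram_fetch_ns;
printf "compressed swap vs. SSD:  ~%dx faster\n", $ssd_fault_ns  / $decompress_ns;
```

The ratios are the point: decompressing beats faulting in from SSD by a wide margin, but it is still well over an order of magnitude slower than a plain DRAM hit, so a working set that doesn't fit in RAM stays painful.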
Exactly!
Load a 4GB file and welcome the beach ball spinner any time you need to context switch to another app.
I don't know how they don't realize that because it's not really hard to get there.
But when I was enamored with Apple stuff in my formative years, I would gladly ignore that or brush it off, so I can see where they're coming from, I guess.
It's not as different as the marketing would like you to think. In fact, for the low-end models even the bandwidth/speed isn't as big of a deal as they make it out to be, especially considering that bandwidth has to be shared for the GPU needs.
And if you go up in specs the bandwidth of Apple silicon has to be compared to the bandwidth of a combo with dedicated GPU. The bandwidth of dedicated GPUs is very high and usually higher than what Apple Silicon gives you if you consider the RAM bandwidth for the CPU.
It's a bit more complicated but that's marketing for you. When it comes to speed Apple RAM isn't faster than what can be found in high-end laptops (or desktops for that matter).
A year later I was doing bonkers (for the time) Photoshop work on very large compressed TIFF files, and my G4 laptop running at 400MHz was more than 2x as fast as the PIIIs on my bench.
Was it faster all around? I don't know how to tell. Was Apple as honest as I am in this commentary about how it mattered what you were doing? No. Was it a CPU that could do some things very fast and other things not so fast? I know it was.
Funny you mention that machine; I still have one of those lying around.
It was a very cool machine indeed with a very capable graphics card but that's about it.
It did some things better/faster than a Pentium III PC, but only if you went for a bottom-of-the-barrel unit and crippled the software support (no MMX, just like another reply mentioned).
On top of that, Intel increased frequency faster than Apple could handle. And after the release of the Pentium 4, the G4s became noncompetitive so fast that one would question what could save Apple (later, down the road, it turned out to be Intel).
They tried to salvage it with the G5s, but those came with so many issues that even their dual-processor, water-cooled units were just not keeping up. I briefly owned one of those after repairing it "for free" using 3 of them, supposedly dead; the only thing worth a damn in that was the GPU. Extremely good hardware in many ways, but also very weak for so many things that it had to be used only for very specific tasks; otherwise a cheap Intel PC was much better.
Which is precisely why, right after that, they went with Intel, after years of subpar laptop performance because they were stuck on the G4 (and not even at high frequencies).
Now I know from your other comments that you are a very strong believer, and I'll admit that there were many reasons to use a Mac (software related), but please stop pretending they were performance competitive, because that's just bonkers. If they were, the Intel switch would never have happened in the first place...
It's just amazing that this kind of nonsense persists. There were no significant benchmarks, "scientific" or otherwise, at the time or since showing that kind of behavior. The G4 was a dud. Apple rushed out some apples/oranges comparisons at launch (the one you link appears to be the bit where they compared a SIMD-optimized tool on PPC to generic compiled C on x86, though I'm too lazy to try to dig out the specifics from stale links), and the reality distortion field did the rest.
Considering you commented on another one of my comments about the Apple "special sauce magic" RAM, I can see how you could think that.
I'll check myself, thanks, but you should check your allegiance to a trillion-dollar corp and its bullshit marketing; that's really not useful to anyone but them.
... I think that the more correct assertion would be that Apple is a sector leader in privacy. If only because their competitors make no bones about violating the privacy of their customers, as it is the basis of their business model. So it's not that Apple is A+ so much as the other students are getting Ds and Fs.
The first major piece of code that I ever wrote was a publishing workflow management system for a major newspaper. It routed page images to presses and generated pdfs of each page in each edition of the newspaper and made pdfs of each day's newspaper editions and published them to a static website where they were archived.
I was only allowed to use Perl 4 to write this software, and I wasn't allowed to use a database, even though the data structure for a day's publishing batch had tens of thousands of keys and values and required RDBMS-style queries. It also features a configurable PostScript parser that extracts all kinds of data from completed newspaper pages that informs the publishing system. When I wrote it I was told that it would run for a few months only, while we figured out how to get a $5M commercial product to handle the work.
The whole thing was written in Perl 4 style OO Perl and came in at about 16k lines of code in the end -- most of the code was for the PostScript processor and tons of cruft that I had to write to make a relational DBMS in memory because I wasn't allowed to use MySQL. It took me four months to write it. I launched it in January of 2002 and it runs to this day. I know this because I got a call about it last month where my replacement's replacement's replacement asked me a few questions about what OO Perl is because he wanted to make a few changes. Good luck! It still runs and is responsible for about 80% of what it was originally built to do. It is used by hundreds of people daily, who by all reports absolutely hate it. There are people working at the newspaper today who use it regularly who were not born when I wrote it. I am twice as old as I was when I wrote it.
They have apparently tried to replace it several times over the last 22 years and have failed to do so... this is likely due to the blockheadedness of my old boss (who is still there) as much as the radically insane obscurity of my code, which is exactly how you'd expect 16k lines of 22 year old OO Perl 4 would be.
I created a database publishing platform at exactly that time, and was given full control. I created it in Java, and added a JavaScript engine for scripting templates. The initial product was also done in a few months. It also runs to this day. The WAR file could be run by any Java server.
The code, while old, was in a reasonable state, given some of the migrations.
I never thought about the code being older than the people working on it
In Norwegian we have the (made up, but widespread) word 'Permasorisk' (a mangling of permanent and provisorisk) to describe a solution which is by no means meant to be permanent, but solves the original problem well enough that there no longer is a need for the permanent solution...
It was never a temporary 'fix'. The product eventually spun off into a new company and was recently acquired by a PE firm.
But yeah, creating software - or any kind of process actually - results in maintenance. If something is used a lot -> too many people depend on it. If the number of consumers/users isn't documented -> you don't know if you can turn it off, but for this problem you can inspect a year's worth of log files :)
I had a job once trying to help the government un-fuck a janky codebase that they were in the process of paying billions of dollars for. I remember opening a file that had some issues (IIRC it could only be compiled with a specific discontinued compiler) and seeing a comment that the file was created on my literal birthday. Was quite the surprise.
Great story! I'm proudly responsible for a small Perl script that pre-processes SWIFT messages before sending them to a back-office banking Java app. It's barely 10 years old, but I'm not a dev, and I suspect it will probably run forever.
The idea is to split the messages into chunks and sort the rows by ISIN instead of having one big file with no sorting pattern, and it cuts the app's processing time from a few hours to 10 minutes. Never heard of a bug in this one, but it's only about 300 lines of code.
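Not the actual script, of course, but the idea is simple enough to show in a minimal Perl sketch; the ';'-separated row layout, the position of the ISIN field, and the chunk size are all assumptions for illustration:

```perl
#!/usr/bin/perl
# Minimal sketch of the idea: take one big unsorted file of rows, sort them
# by ISIN, and write fixed-size chunk files. Field layout and chunk size are
# assumptions, not the real SWIFT pre-processor.
use strict;
use warnings;

my $chunk_size = 10_000;    # rows per output chunk (assumed)
my ($infile) = @ARGV or die "usage: $0 bigfile.txt\n";

open my $in, '<', $infile or die "cannot open $infile: $!";
my @rows;
while (my $line = <$in>) {
    chomp $line;
    # Assume the ISIN is the first ';'-separated field of each row.
    my ($isin) = split /;/, $line;
    push @rows, [ $isin, $line ];
}
close $in;

# Sort everything by ISIN, then write out numbered chunk files.
my @sorted = sort { $a->[0] cmp $b->[0] } @rows;
my $chunk  = 0;
while (@sorted) {
    my @slice = splice @sorted, 0, $chunk_size;
    open my $out, '>', sprintf("chunk_%04d.txt", $chunk++) or die $!;
    print {$out} $_->[1], "\n" for @slice;
    close $out;
}
```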
For Perl 4? Not that likely; IIRC CPAN was started in '95, after Perl 5 came out, which introduced most of the packaging semantics. I think Perl 4 basically had an "include" equivalent.
Don't remember anyone doing OO back then with it, either. Would be interested to know how that worked...
It's in moments like this that I miss the Hacker News magazine[0], because I had relatively high certainty that when stories like this popped up, the poster in question would get interviewed and their story would appear in the next edition.
Not sure who else recalls it? Really appreciated the monthly format as well; I think summarising the last month and going in-depth on specific stories ended up being a good touchstone for what happened in the community.
If anyone is interested in picking up the torch, I for one would be happy to resubscribe to it :)...
> asked me a few questions about what OO Perl is because he wanted to make a few changes
Well, that's pretty impressive. Most of my garbage Perl code from 20 years ago would probably look like line noise to most modern developers if it weren't for the shebang.
I never wrote Perl in any serious fashion, but I've always tried to put as much information in the source as I can so I can figure out how I did it. I enjoy reading the submissions to code golf challenges, though. A few of the more esoteric language users have let slip that they write a compiler to go from readable code to the target unreadable code.
Whenever I see terse C or load-bearing Perl, it does look like line noise.
> The whole thing was written in perl 4 style OO Perl
What does that mean? Perl's basic OO support (using blessed hash references) was introduced with Perl 5 in 1994. I have no idea how anyone would even attempt writing OO code in Perl 4.
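For anyone who hasn't seen it, Perl 5's blessed-hash-reference OO looks roughly like this (a minimal sketch with a made-up Page class, not anything from the poster's actual system):

```perl
#!/usr/bin/perl
# Minimal sketch of Perl 5 OO: a constructor blesses a hash reference into a
# package, and methods are just package subs that receive the object first.
use strict;
use warnings;

package Page;

sub new {
    my ($class, %args) = @_;
    my $self = { edition => $args{edition}, number => $args{number} };
    return bless $self, $class;    # the hash ref now "is a" Page
}

sub describe {
    my ($self) = @_;
    return "page $self->{number} of edition $self->{edition}";
}

package main;

my $page = Page->new(edition => 'morning', number => 12);
print $page->describe(), "\n";     # prints: page 12 of edition morning
```

The "object" is just a hash ref that remembers which package it belongs to, and a method call is a plain sub that gets the object as its first argument.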
I architected/managed/maintained a Perl web application starting in 2001 for a dot-com. I still work with it today, and the application/company recently sold to a new company for $24M USD.
> It is used by hundreds of people daily, who by all reports absolutely hate it.
This made me chuckle because as a software development intern in 1988 (being paid like $8 an hour) I wrote a giant monstrosity of a time tracking program for a local CAD company in an Empress database using its forms builder interface. It was in use for far longer than anyone ever intended and whenever I ran into someone from that company years later they would tell me how bad it was. Sorry guys!
It is basically the same with the books. The IA's book hosting system has DRM, yes, but it is Adobe's ADE, which has been well and truly cracked. If you have a bit of patience and tech knowledge you can pull a full PDF of any book on IA in about 20 seconds' time. If they did a better job restricting access, I figure they wouldn't be in this much trouble.
The DMCA makes no distinction between 'weak' and 'strong' DRM. Bypassing any form of digital protection violates the act, and media companies have fought to defend their use of materially insufficient DRM such as DVD-CSS.
This is not simply a legal issue. As an author of books myself I have very little problem with libraries and online lending and I feel there is mutual benefit. I do not feel the same way about the way that IA is hosting commercial titles, which hews very close to ZLibrary, which I do not think is beneficial for all of us in the long term as it disincentivizes the production of books even if it makes things available to more people in the near term. We might disagree about that but my belief is pretty set.
Regarding legality, the media-company battle re: DVD-CSS is different in quite a number of ways, such that it doesn't simply apply here. That said, I'm not making a legal argument -- I am advocating for a system that makes sense to authors (whom we rely on to write the things that we value) and the greater good.
I suppose you could lobby Adobe to build stronger DRM. People will just break it again. There is no apparent technical solution to perfectly prevent users who can read the content from copying the content. Legal solutions are what you get.
The technical solution in this case is to not allow the ADE files to be downloaded -- they are not even available in the IA GUI -- there is a well-known backdoor that to my eye isn't needed to support any of the available features on the site.
Just to rephrase for clarity: the way that pretty much every college student that I know downloads many, many books from the archive is via an undocumented route on the website that isn't needed for the site's published borrowing functionality to work.
You will download an ADE stub file that is trivially deDRM'd.
IA's "solution" to a previous issue with publishers was to remove this functionality from their site. They pulled the links in the UI but not the functionality... I don't think they operate in good faith.
But... far be it from me to expect any dialogue here to be anything but creator-hostile. To my mind that is very much short-term thinking. We need to make sure creators are paid for their work because we want them to keep creating, and for others to also create. Just because things are legal doesn't make them right and/or for the greater good.
Right-clicking on that page and selecting "inspect" (or running the web session through a transparent proxy and inspecting the traffic in transit) seems to allow me to pull out individual pages of the book through the API calls used by the web reader. If they DRM'd the individual images it wouldn't be much more challenging. Every person will draw the line in a different place, and that is why the legal system exists.
Again, if you're allowing someone to view something, there is always a way to copy it. DRM and the legal frameworks around it (which we both seem to agree that IA has applied here) are the best available deterrent to copying.
The only way to be sure something will never be copied against your will is to never distribute it to anyone.
The logical conclusion here is that generation of reproducible content for profit, or even for sustenance, is innately impossible. This is neither true nor desirable. Enjoy pedantry on the internet!
Plenty of evidence to the contrary. Somehow I and everyone involved in all of this: https://en.wikipedia.org/wiki/List_of_major_Creative_Commons... manage to put food on the table. If your business plan requires enforcing impossibilities on the entirety of humanity, maybe consider reworking it before giving up?
Ebook DRM is easily removable, regardless of whether it comes from IA, Overdrive or Amazon itself. If you're gonna be internally consistent you have to admit that you DON'T support digital lending and, by extension, don't support libraries having a place in the future.
Wow. Just so, so many things that I didn't say and do not believe. Amazon fixed their DRM issues to a great extent and the gap is narrowing, Apple's Book DRM is currently unbroken, Overdrive's DRM bypass is only available via legacy versions of the app last I checked, etc.
You are the one asserting that 1. all DRM is inherently bypassable, which is simply not at all true, and 2. anyone pushing for improvement is against lending, which is not true. This is what YOU are saying, not me. I am simply saying that if the IA wants to keep going (and I would like them to) they might close an unused HTTP route. That is LITERALLY all I am asserting. Good lord, why even bother?
Specifically of issue in this case -- there is a wide-open hole in the IA's book hosting system that allows anyone to easily download an Adobe Digital Edition of any book in the archive -- ADE's DRM has been broken for a long time. It takes 2 minutes to go from IA to a full OCR'd PDF. The result is that there are a LOT of decrypted PDFs in the wild that carry the Internet Archive logo on the very first page.
I am for DRM because it could enable something like the IA to host and lend books without violating authors' privilege. The IA's implementation is foolish and, I think, carries the intent to allow exploitation.
In this case, the IA has applied DRM, just as media companies do, and anyone breaking that DRM is the person liable for infringement, not the Internet Archive. Even breaking weak sauce DRM violates the terms of the DMCA, and media companies have fought to protect their use of such insufficient schemes. I see no reason that legal wrangling shouldn't also apply to the Internet Archive as well as it does members of the MPAA.
At least the Adobe Digital Edition is a nominal attempt at controlled lending. It's not a great attempt, but it's something.
The thing that really concerns me is the fact that people are treating IA as a general file-sharing site. You can find complete sets of ROMs for basically any game system you can think of, TV shows and movies that can be downloaded completely uninhibited, and completely DRM-free versions of books.
To some extent data harboring laws will protect them, but that's not a silver bullet; if they're not making an active effort to deal with copyright stuff I really think it's going to bite them, and that's a shame. The IA is an extremely valuable resource when used right.
I worked at Huffpost through all three of these phases in a technical capacity adjacent to comments and moderation, as director of technical operations and eventually as head of engineering. This study has significant questions to answer about its methods and assumptions, which are summed up here:
"Second, we know that HuffPo used both manual and algorithmic moderation in all three phases, but we do not know how the policies changed under the different identificatory regimes."
Given what I know about Huffpo's moderation systems, and the authors' statement that they don't have an understanding of them, I'd say that nothing reported in this study should be considered valid, for a few reasons.
One is that Huffpost used many different moderation systems, following many changing standards, throughout their days as a big news site (#3 in the US at one point) and the biggest news-based community, which they were for several years.
They started with no moderation, then human moderation with evolving standards and practices that were overseen by a brilliant community team. Then they bought Julia in 2010 (ish?), which was a very early machine-learning moderation system that was trained on millions of human-made moderation decisions before launch, and whose training and internals were constantly updated and improved for years. Julia was dropped for Facebook comments later on, at which point Facebook did most of the basic moderation but was still assisted by human moderators.
My first critique of this analysis is that the authors have no data on, or understanding of, the moderation actions that resulted in suppression of comments or users. How can an analysis make any claims without this data? For all they know, the comment flow was more hostile during the periods they observe as more civil, and there were just far more comments suppressed. Just one of dozens of internal details of the operation that would invalidate their conclusions: Huffpost quiet-deleted comments for a significant period of time, meaning that you could post, and you would see your posts in context when you were logged in, but no one else would see them. They also silent-banned users. This and other implementation details create a great deal of complexity and secondary effects.
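A hypothetical sketch (emphatically not HuffPost's actual code) of why that confounds a retrospective study: a quiet-deleted comment stays visible to its own author, so the author keeps posting, while the public thread, and therefore any later scrape of it, never shows the comment at all:

```perl
#!/usr/bin/perl
# Hypothetical visibility logic for quiet-deletion and silent-banning.
# The point: what an outside researcher can observe is only the public view,
# which already has the hostile material filtered out.
use strict;
use warnings;

sub visible_to {
    my ($comment, $viewer_id) = @_;
    # Quiet-deleted comments and comments from silent-banned users are shown
    # only to their own author; everyone else (including a scraper) sees nothing.
    if ($comment->{quiet_deleted} or $comment->{silent_banned_author}) {
        return defined $viewer_id && $viewer_id == $comment->{author_id};
    }
    return 1;    # normal comment: visible to everybody
}

my @thread = (
    { id => 1, author_id => 7, quiet_deleted => 0, silent_banned_author => 0, text => 'civil reply'   },
    { id => 2, author_id => 9, quiet_deleted => 1, silent_banned_author => 0, text => 'hostile reply' },
);

# What a logged-out reader or an external scrape of the thread would see:
my @public = grep { visible_to($_, undef) } @thread;
printf "%d of %d comments publicly visible\n", scalar @public, scalar @thread;

# What the hostile commenter (user 9) sees: their own comment, still there.
my @author_view = grep { visible_to($_, 9) } @thread;
printf "%d of %d comments visible to user 9\n", scalar @author_view, scalar @thread;
```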
I can attest that moderation was very, very active and that lots and lots of comments were moderated down and out of the comment threads... again indicating significantly less civility than any retrospective analysis would be able to discern without all the data.
I also find it interesting that this study chose Huffpost for the analysis. At the site's heights of success and profit the comment threads were the reason for their SEO dominance and were considered to be the most important secret sauce. Huffpost moderation was the best in the business by a long measure. With the methodology presented it would make sense to me to say that Huffpost would appear to be the most civil of the big sites of the time. So it is interesting that this study focuses singly on Huffpost and reports that their theories indicate this differential.
While the authors do cover some of this in their section on Limitations, they don't cover nearly enough to justify their results... instead this reads as another cherry-picking study where the authors had a theory and found a dataset that confirmed it, while being unaware of fundamental reasons why that dataset was an outlier, making it impossible for them to build the needed controls into their methods.
My local news website sort of went through a similar set of transitions and I don't know what the moderation activities behind the scenes were.
At first they had their own accounts to sign up for on the main website, there were definitely some unsavory characters and trolling but I'd say by and large it was just normal commenting. They announced that due to abuse or moderation issues (I can't recall which) they were switching to facebook commenting, which ostensibly has a real names policy.
A month later comments were removed from the website altogether. By then the only users left were some of the nastiest posters ever, who didn't seem concerned about their real names being up there next to the consistently awful things they had to say, possibly because they were mentally ill. I know I had no desire to interact with them, and using my real name on a site full of crazy people sounded like something only a crazy person would do.
Moderation is expensive and difficult and failing to do it well will kill your community dead. The evidence of this is pretty obvious today... only niche communities have active and positive engagement so far as I can see.
Cost: the demand was made to let the human moderators and the community team go after one of several acquisitions. The SEO advantage of having these threads had evaporated by that time, so there wasn't a good argument against going with the free service provided by Facebook.