"I, a notorious villain, was invited for what I was half sure was my long-due comeuppance." -- Best opening line of a technical blog post I've read all year.
The narrator's interjections were a great touch. It's rare to see a post that is this technically deep but also so fun to read. The journey through optimizing the aliasing query felt like a detective story. We, the readers, were right there with you, groaning at the 50GB memory usage and cheering when you got it down to 5GB.
Sweeteners are processed food. Timeline shows more processed food hitting the market, period. Obesity rises. Coincidence? Doubt it.
It's not just the sweetener itself. It's the whole shift. More processed crap in everything, sweeteners included. Cheaper, easier, engineered to be addictive. That's the real change that lines up with the weight gain.
Focusing just on sweeteners is missing the point. They're just one piece of the bigger processed food takeover. That's the simpler, more likely explanation.
> Broadly speaking, we eat a lot more than we used to: The average American consumed 2,481 calories a day in 2010, about 23% more than in 1970. That’s more than most adults need to maintain their current weight
That totally might be the case. I'm not sure, but I think I've seen good reasons to think it doesn't exactly line up, though: increased wealth in different countries didn't match the obesity timeline all that closely.
"Processed food" is a term without meaning. Amost all food is processed. Yogert is processed. Bread is processed. Steak is processed. Even raw fruit is arguably processed as it is picked before being ripe to eat and then subject to an optimized ripening process (google the science behind banana shipping). All foods are either cooked or mechanically/chemically processed prior to consumption. We are aguably unable to survive on unprocessed food. Short of biting into a whole head of lettuce, or into the side of a live animal, one cannot avoid processed foods. Washing/cooking has saved us from all manner of paracites. The people who eat raw/unprocceed foods are the ones who wind up with worms in thier brains. What matters for health is the degree of processing that does not add nutrition or safety, with every pundit picking thier own arbitrary point somewhere between a healthy chopped salad and a microwaved hot pocket. Imho, just avoid anything with added sugar or salt.
"Ultraprocessed" is arguably the more important term. While also formally defined, a rule of thumb is that if the average person can't make it in their kitchen, it's ultraprocessed. These are chemicals chemicals that are used to emulsify or stabilize ingredients, preservatives, and chemicals used to improve mouthfeel and texture: like lecithin, polysorbate, sodium benzoate, maltodextrin, partially hydrogenated oils, sodium phosphates, etc. — there are tons of them. Some of them have been implicated in causing gut inflammation.
Yes, really. Nova's definition is exactly what I was describing earlier. Here is their own definition, from the wiki you linked:
> There is no simple definition of UPF, but they are generally understood to be an industrial creation derived from natural food or synthesized from other organic compounds. The resulting products are designed to be highly profitable, convenient, and hyperpalatable, often through food additives such as preservatives, colourings, and flavourings.
People are worried AI is making us dumber. You hear it all the time. GPS wrecked our sense of direction. Spellcheck killed spelling. Now it’s AI’s turn to supposedly rot our brains.
It’s the same old story. New tool comes along, people freak out about what we’re “losing.” But they’re missing the point. It’s never about losing skills, it’s about shifting them. And usually, the shift is upwards.
Take GPS. Yeah, okay, maybe you can’t navigate with a paper map anymore. So what? Navigation isn’t about memorizing street names. It’s about getting from A to B. GPS makes that way easier, for way more people. Suddenly, everyone can explore, find their way around unfamiliar places without stress. Is that “dumber”? No, it’s just… better navigation. We optimized for the outcome, not the parlor trick of knowing all the streets by heart.
Same with the printing press. Before that, memory was king. Stories, knowledge – all in your head. Then books came along, and the hand-wringing started. “We’ll stop memorizing! Our minds will get soft!” Except, that’s not what happened. Books didn’t make us dumber. They democratized knowledge. Freed up our brains from rote memorization to actually think, analyze, create. We shifted from being walking libraries to… well, to being able to use libraries. Again, better.
Now it’s AI and coding. The worry is, AI code assistants will make us worse programmers. Maybe we won’t memorize syntax as well. Maybe we’ll lean on AI to fill in the boilerplate. Fine. So what if we do?
Programming isn’t about remembering every function name in some library. It’s about solving problems with code. And AI? Right now, it’s a tool to solve problems faster, more efficiently. To use it well in its current form, you need to be better at the important parts of programming:
- Problem Definition: You have to be crystal clear about what you want to build. Vague prompts, vague code. AI kind of forces you to think precisely.
- System Design: AI can write code snippets. As of right now, designing a whole system? That’s still on you. And that’s the hard part, the valuable part.
- Testing and Debugging: AI isn’t magic. At least, not yet. You still need to test, validate, and fix its output. Critical thinking, still essential.
So, yeah, maybe some brain scans will show changes. Brains are plastic. Use a muscle less, it changes. Use a new one more, it grows. Expected. But if someone’s scoring lower on some old-school coding test because they rely on AI, ask yourself: are they actually worse at building software? Or are they just working smarter? Faster? More effectively with the tools available today?
This isn’t about “dumbing down.” It’s about cognitive specialization. We’re offloading the stuff machines are good at – rote tasks, memorization, syntax drudgery – so we can focus on what humans are actually good at: abstraction, creativity, problem-solving at a higher level.
Don’t get caught up in nostalgia for obsolete skills. Focus on the outcome. Are we building better things? Are we solving harder problems? Are we moving faster in this current technological landscape? If the answer is yes, then maybe “dumber” isn’t the right word. Maybe it’s just... evolved. And who knows what’s next?
After watching [Oxford Researchers Discover How to Use AI to Learn Like a Genius](https://youtu.be/TPLPpz6dD3A?si=FJJ-S6wz0PPrJuSn) a few days ago, I've been using ChatGPT in "reverse mode" a lot. I give it an excerpt of a text I'm reading and ask it to ask me questions about it at different levels of detail.
I have to say it feels like a superpower! The answers you have to supply yourself really stick in your memory, as do the links that spontaneously form to bodies of knowledge you already have when answering the deeper-level questions.
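If you'd rather script it than paste into the chat window, here's a minimal sketch of the same idea against the API, assuming the official openai Python package; the model name, prompt wording, and file name are just illustrative, not anything from the video:

```python
# Minimal sketch of the "reverse mode" quizzing idea via the OpenAI Python SDK.
# The model name, prompt wording, and "chapter3.txt" are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

excerpt = open("chapter3.txt").read()  # hypothetical excerpt you're currently studying

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Quiz me on the following excerpt. Ask one question at a time, "
                "starting with basic recall and moving to deeper conceptual questions. "
                "Wait for my answer before asking the next one."
            ),
        },
        {"role": "user", "content": excerpt},
    ],
)

# Print the first question; answering it from memory is the whole point.
print(response.choices[0].message.content)
```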
I'm thinking that LLMs might actually address some of Plato's complaints against reading and writing:
> You know, Phaedrus, that is the strange thing about writing, which makes it truly correspond to painting. The painter’s products stand before us as though they were alive. But if you question them, they maintain a most majestic silence. It is the same with written words. They seem to talk to you as though they were intelligent, but if you ask them anything about what they say from a desire to be instructed they go on telling just the same thing forever.
What do you do with that knowledge? I mean, it's a good skill if you're actually going to apply it to something. Knowing you're facing north is great if there's some use you get out of that. Finding your way around is way more work than just knowing "north is that way".
I remember the first time I traveled internationally. Incredible stress. I had a bunch of printouts with all the details. Still, I depended completely on the friend I was visiting. I literally didn't dare leave his house without him, because I wasn't confident I could make it back without him having to rescue me, and I didn't want to give him that trouble. Sure I had a bunch of info, but one wrong decision and my static stack of papers might not be enough to get me out of it.
Technology made an absolutely amazing difference. With GPS I could wander around a city aimlessly and still find my way back to my hotel. I could figure out where the center was. The early incarnations were rough, but amazing for the amount of stress relief they provided.
And modern tech? Just sci-fi magic. I can see the usual sights, find various obscure ones, and locate any business I might need. With Uber I can get a ride in random countries wherever needed, without speaking the local language. Google now tells me about bus and tram routes: what to take, where the station is, what stations I'm going to pass through, and when I'm going to be there. There's a magic real-time translator for both text and voice.
> Sure I had a bunch of info, but one wrong decision and my static stack of papers might not be enough to get me out of it.
If you knew how to read a map it would have been.
I can't imagine being so dependent on a phone which can be accidentally dropped, misplaced, or simply run out of battery charge that I would be lost without it.
Paper has limited information. If you failed to acquire a map with the relevant useful information you have a problem. Not every map contains enough information for every possible need. If you planned on going by car then improvised and took a bus you might not even have bus stops marked on it.
> I can't imagine being so dependent on a phone which can be accidentally dropped, misplaced, or simply run out of battery charge that I would be lost without it.
Same way you deal with anything else: what if you have a car problem? So you plan ahead. Get the car checked before a trip, fill the tank, figure out what to do if it does break.
Phones are easier. I've got a stack of old ones that are still functional, easy to bring an extra one. I have an external battery. You can charge in many cafes and similar, just find a Starbucks or something. You can go to a shop and buy a battery or the cheapest phone they have if it comes to that.
AI can program, but it cannot engineer. And even then, you eventually reach a point in the project where it is too complex for AI to even do snippets, especially if you are pioneering something new that has never been done before.
A sprinter is unlikely to win a marathon, and that is what using AI to program is like. By the time you have to take over, you have a huge learning curve ahead of you as you can lean on the AI less and less.
If you're doing something boring/boiler-plate, yeah, AI is helpful I guess.
Most people with "engineer" titles spend relatively little of their time on actual quantitative engineering or "higher level" thinking. A lot of their work involves manual information processing: Organizing and arranging things, fitting things together, troubleshooting. This could be justified for a couple of reasons: Maybe a lot of the stuff that was "engineering" is now handled by the CAD software. That's great. But also, the efficiency of those tools has raised the complexity of systems to the point where the interaction between parts consumes most of the engineers' attention.
Managers also spend most of their time on the same things, but handling different kinds of information.
But CAD hasn't changed the immutable laws of engineering, such as Brooks's Law. When I hear about the wonders of AI transforming engineers into higher level thinkers, my snarky response is: "Does this mean that projects will finish on time?"
If your engineers (software or otherwise) aren’t spending a lot of time engineering, then you’ve got a hiring problem. Most jobs I’ve worked as a software engineer are 90% engineering (soft and hard skills) and only about 10% programming. With AI, it becomes about 60% engineering, 20% babysitting an AI, and 30% programming because the AI got it wrong.
Now, we can’t even hand this stuff off to juniors and teach them things they’ll hopefully remember. Instead, I have to explain to an AI, for the 60th time, that it has hallucination problems.
I feel like that's what the OP said. People can focus on the engineering part and not memorizing syntax or function names.
Too often I see people thinking in very binary terms, and we see it here again. AI does everything or nothing. I just keep thinking it'll be in between and people who are good at leveraging every tool at their disposal will reap the largest benefits.
You don't need AI if that's all you're using it for. In fact, IDEs have been doing a fine job at that for years.
It feels like, right now, much of the time AI is a solution looking for a problem to solve.
I find it more useful to treat AI like an easier-to-search Stack Overflow. You can ask it to go find you an answer, and then elaborate when it's not the right one.
This is dead on. I'm not even a big AI fan, but this is a key idea about technology in general. I don't want to have to bring to mind the laws of physics every time I drive to work. The whole point is that a group of engineers encoded them into the machine for me, and now I enhance my capabilities without needing to know how. It's what the classic Alfred North Whitehead quote is talking about. I understand the impulse towards mastery and ever-expanding knowledge—who doesn't love the idea that they should be able to "plan an invasion, change a diaper, butcher a hog"—but the truth is there is a finite set of skills we are capable of mastering in a lifetime. This is why even literal geniuses often fail when they step outside their field of expertise. It's a valid concern that as a society some skill will be lost, or concentrated in the hands of too few, but losing skills and knowledge (or, as I would simply call it, "being permitted to forget") is in general fine. Now, if AI literally killed people's ability to think, that would be one thing. But what I suspect is that, like the parent is saying, it allows you to turn off your brain for certain tasks, like every technology. Then the question is what more complex tasks we can do on top of the automatic and thoughtless ones.
EDIT: I see some good replies to parent about stability/reliability, alienation etc. There are definitely tradeoffs to the power you get from technology, and it's worth acknowledging those. But that's exactly the framework we should be thinking in. What are the tradeoffs involved? Often these kinds of stories are one-sided arguments that imply losing skills is straightforwardly bad, when in truth it's more complicated than that.
> so we can focus on what humans are actually good at
You know what humans are good at? Deluding ourselves. Because that's what you're doing: using vague, feel-good words, based on vague analogies, with no proof, to keep the inconvenient truth at bay. Not being able to navigate with a paper map is a big thing: people get lost inside buildings without a map. Next time the power fails, half of Gen Z will be lost. Not being able to write with pen and paper is a big thing. Not being able to add a few numbers is not as big a thing, but it certainly can be a problem. And what for? So you don't have to feel bad about using AI tools?
You know what comes next? Everything based on audio and video. Are you then going to argue: reading is an obsolete skill?
> It’s the same old story. New tool comes along, people freak out about what we’re “losing.” But they’re missing the point. It’s never about losing skills, it’s about shifting them. And usually, the shift is upwards.
Except for that widespread feeling of hopelessness, alienation, powerlessness, lack of motivation, and lack of ambition. Almost as if not learning any human skills, and relying solely on technology for everything might have some second-order effects.
I used to think the same way you do: people are resistant to change, but eventually it's better for everybody.
I do believe that GPS made people worse drivers. It made people lose their sense of direction and distance. It has removed all critical thinking on the road. There are plenty of stories of people driving over stairs because the GPS told them to.
In terms of a driver's ability to navigate, I don't think you can do more now with GPS than you could before with a map. It has surely made it easier, but at a significant cost.
Now, of course, there are plenty of benefits, such as reducing the time it takes to get somewhere unknown (e.g. for ambulances), planes not flying over hostile territory (mostly), the ability to tell someone where you are when there are no landmarks around, etc.
But the reality is that overall, a GPS mistake is usually rather localized, and the cost of the mistake is rather low.
Books are interesting: instead of memorizing details we now memorize where to find information, little bits that help us get to the solution of the problem we're solving. But books themselves haven't replaced memory; otherwise no one would read them ahead of time anymore.
When we search for something on the internet, we are taught to apply critical thinking. What are the sources? [0]. But GPS? Just go with it.
And AI is more like GPS than it is like books. We are being taught to take it at face value, and to abandon critical thinking for the sake of speed. Worse yet, because of the enormous financial investments of the companies involved, there is an incentive to lie about how usable it is.
I'm not even talking about context windows. I'm talking about the endless minutiae of languages, frameworks, and changes tied to specific versions that you only learn by doing. Just the same way a resident does not become a doctor until they finish residency: they have to have done the work, and applied critical thinking.
Software Engineering does not have such legal requirements, but we all learn on the job. AI, and the companies pushing for it basically tell potential clients that this is no longer needed. Would you want a gallbladder surgery done by someone who just read a Wikipedia page about it?
Now, a seasoned developer who writes a crystal clear prompt will probably pick up on bugs, and tell the AI that they want edge cases A, B and C considered. But how did they learn that those exist? Right. By hitting the issues.
Something that happens a lot in software engineering, due to the massive amount of things out there and the lack of fixed specs/docs/etc., is that your approach changes while you're developing a solution to a problem. But the need for those changes only becomes apparent when you're writing and testing code.
You literally cannot front-load that into your prompt. Yet, reading the news here, we see that our future is writing prompts for a much lower wage. This is orthogonal to why I went into Software Engineering. Prompts rob me of the ability to express something in an extremely well-defined language. Clarity of rules. A syntax where you can express something without ambiguity [1].
You don't know what you don't know, meaning you can't prompt for what you don't know. Hence why they brought a whole bunch of people back out of retirement to build new MANPADS.
[0] Interestingly when I was growing up a book quote was ok, but Wikipedia was not, even though it came from a book. That now definitely has changed.
[1] A wife sends her programmer husband to the grocery store for a loaf of bread...
On his way out she says "and if they have eggs, get a dozen". The programmer husband returns home with 12 loaves of bread....
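For what it's worth, footnote [1]'s joke is exactly the kind of ambiguity a programming language won't let you leave unresolved. A throwaway sketch of the husband's literal parse, with `store_has` and `buy` as purely hypothetical helpers:

```python
# The husband's literal reading of "get a loaf of bread; if they have eggs, get a dozen".
# `store_has` and `buy` are hypothetical helpers, here only to make the parse explicit.
def shop(store_has, buy):
    quantity = 1                  # "a loaf of bread"
    if store_has("eggs"):
        quantity = 12             # "get a dozen" of the only item bound so far: bread
    buy("bread", quantity)        # ...and home he goes with 12 loaves
```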
I'm fine with, and approve of, the use of LLMs in academia, as long as they provide genuine value and something new to the field. These tools should be embraced when they can augment human intellect.
However, I draw a firm line at using them to generate complete academic works or nonsensical content, as that undermines the integrity of research and renders it devoid of originality. LLMs should serve as invaluable assistants to free up scholars for higher-order analysis, not as replacements for human ingenuity.
You may as well say that you only want dynamite to be used for nonviolent purposes, like demolishing condemned buildings. It doesn't work that way, which is exactly what AI ethicists and researchers have been warning about: AI should be regulated, because once it's in somebody's hands, you don't control how they use it, and the repercussions could be much wider than people imagine.
IMO academic papers are too long. The decorum dictates that a core of real, novel content must be surrounded in tons of fluff. Most people don't ever read that fluff (some do, and it's not totally useless, but not to most people).
It doesn't actually bother me if some of that fluff is ChatGPT-generated, provided the author actually read it and accepts the autogenerated content.
I'm having flashbacks from writing my Master's thesis. The experimental part was done in two weeks, then I spent a week describing the results, then a few months going from 15 pages to 80 pages and at least 30 citations, with the latter being surprisingly difficult because of the uniqueness of my research topic.
Buckle up, because we're not far away from autonomous research agents. It'll start with computational analysis, plot generation and creation of data discovery documents, but soon models will learn how to request simple physical experiments (sequencing some DNA, mass spec a sample, etc) and plan research based around the outcome.
The problem here is not necessarily the use of chatgpt at all. It's that academic idiom requires content-free linguistic noise for acceptance, and people are quite naturally turning to mechanistic routes to automate out the drudgery.
Perhaps you are making a joke that flew over my head, but in case you are new here, that's not the way this place works. You are not going to get banned for repeating the jokes that got someone fired, especially just after they won a court case saying they were actually funny. Your comment may well get flagged, because flagging is done by users, some of whom will flag just about anything they disagree with. But bans are done by Dan, and he's a good moderator who doesn't ban people for silly reasons like this. Have some faith.
A lot of people don’t realize that caffeine is not the only substance that affects our body when we drink coffee.
There is also paraxanthine, which is a metabolite of caffeine that has a similar half-life and similar effects.
Paraxanthine can increase lipolysis, which means it breaks down fat and releases fatty acids into the bloodstream. It can also enhance alertness, mood, and cognitive performance.
So, even when the caffeine levels in your blood start to drop, the paraxanthine levels are still high and keep you stimulated. That’s why the effects of coffee can last much longer than you think.
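As a rough, back-of-the-envelope illustration of why the effect can outlast the caffeine itself, here's a toy sequential first-order (Bateman) model; the dose, half-lives, and conversion fraction below are illustrative assumptions, not measured pharmacokinetic values:

```python
# Toy model: parent (caffeine) -> metabolite (paraxanthine), sequential first-order kinetics.
# Dose, half-lives, and the conversion fraction are illustrative assumptions only.
import math

def concentrations(t_hours, dose=100.0, t_half_parent=5.0, t_half_metab=4.0, fraction=0.8):
    k1 = math.log(2) / t_half_parent      # caffeine elimination rate constant
    k2 = math.log(2) / t_half_metab       # paraxanthine elimination rate constant
    caffeine = dose * math.exp(-k1 * t_hours)
    # Bateman equation for a metabolite formed from a decaying parent (assumes k1 != k2):
    paraxanthine = fraction * dose * k1 / (k2 - k1) * (
        math.exp(-k1 * t_hours) - math.exp(-k2 * t_hours)
    )
    return caffeine, paraxanthine

for t in (0, 2, 4, 6, 8, 10):
    c, p = concentrations(t)
    print(f"t={t:2d}h  caffeine={c:5.1f}  paraxanthine={p:5.1f}")
```

Even with made-up numbers, the metabolite curve peaks hours after the parent compound has already started to fall, which is the shape of the argument above.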
I think that the research is flawed and based on faulty assumptions. The origin of human lip kissing is much older and more widespread than the researchers claim. It is a natural expression of affection and intimacy that evolved independently in many cultures and regions. The herpes simplex virus 1 is not exclusively transmitted by kissing, but also by other forms of contact and exposure. The correlation between kissing and herpes is not causal, but coincidental.
Fantastic work, both on the code and the prose.