The higher the risk of, e.g., a loan, the more interest it has to pay out to be worthwhile. The exact amount* is, as I understand it, governed by the Black–Scholes model.
* probably with some spherical-cows-in-a-vacuum assumptions given how the misuse of this model was a factor in the global financial crisis.
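For reference, the classic Black–Scholes formula (strictly, it prices a European call option rather than loan interest directly) can be sketched in a few lines. This is a minimal illustration with my own function names; the constant-volatility, log-normal assumptions baked into it are exactly the spherical cows mentioned above:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes price of a European call.
    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: volatility (both annualized)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

Note how risk (sigma) feeds directly into the price: higher volatility means a more valuable option, mirroring the risk/reward trade-off described above.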
Isn’t this easy for a potential attacker to mitigate, i.e. by dropping everything after the plus from the address? It’s a known trick for Gmail, so I wouldn’t be surprised if an attacker knew how to get to the “real” address by cleaning it up.
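The cleanup described here is indeed trivial to script. A sketch of what such normalization might look like (the function name and the hard-coded Gmail dot rule are my own illustration, not from any particular tool):

```python
def normalize_address(email: str) -> str:
    """Collapse a plus-addressed email to its base form.
    Gmail also ignores dots in the local part, so strip
    those for Gmail domains too."""
    local, _, domain = email.lower().partition("@")
    local = local.split("+", 1)[0]          # drop the +tag suffix
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")      # Gmail ignores dots
    return f"{local}@{domain}"
```

Running this over a dump would collapse `jane.doe+shop@gmail.com` and `janedoe@gmail.com` to the same address, which is why plus-tagging alone offers little protection.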
Yes, I’ve noticed some attackers even exclude all custom domains from their dumps to avoid alerting individuals before they sell them. That’s why it’s better to have a fully unique email, preferably a masked one (not a custom domain) as some email service providers offer: you get the isolation, but you also blend in without being noticed by attackers.
I disagree with this: there have been overthrows that did not require weapons in the field (e.g. Egypt, Tunisia), while widespread weapons were more likely to cause civil wars (Libya, Syria). In those cases, however, the role of the army was key in forcing the rulers out (and, in Egypt, in replacing them), which might be unlikely in the case of Iran.
I use the house analogy a lot these days. A colleague vibe-coded an app and it does what it is supposed to, but the code really is an unmaintainable hodgepodge of files. I compare this to a house that looks functional on the surface but has the toilet in the middle of the living room, an unsafe electrical system, water leaks, etc. I’m afraid only the facade of the house will need to be beautiful, and people will only later realize they accepted shaky foundations in exchange for glittery paint.
To extend your analogy: AI is effectively mass-producing 'Subprime Housing'.
It has amazing curb appeal (glittering paint), but as a banker, I'd rate this as a 'Toxic Asset' with zero collateral value.
The scary part is that the 'interest rate' on this technical debt is variable. Eventually, it becomes cheaper to declare bankruptcy (rewrite from scratch) than to pay off the renovation costs.
My experience with it is the code just wouldn't have existed in the first place otherwise. Nobody was going to pay thousands of dollars for it and it just needs to work and be accurate. It's not the backend code you give root access to on the company server, it's automating the boring aspects of the job with a basic frontend.
I've been able to save people money and time. If someone comes in later with a more elegant solution for the same $60 of effort I spent, great! Otherwise I'll continue saving people money and time with my imperfect code.
In banking terms, you are treating AI code as "OPEX" (Operating Expense) rather than "CAPEX" (Capital Expenditure).
As long as we treat these $60 quick-fixes as "depreciating assets" (use it and throw it away), it’s great ROI.
My warning was specifically about the danger of mistaking these quick-fixes for "Long-term Capital Assets."
As long as you know it's a disposable tool, not a foundation, we are on the same page.
I remember a very nice quote from an Amazon exec: “there is no compression algorithm for experience”. The LLM may well do the wrong things, and you still won’t know what you don’t know. But then, iterating with LLMs is a different kind of experience; in the future people will likely do that more than grind through failures like the missing semicolons Simon describes below. It’s a different paradigm, really.
Of course there is - if you write good tests, they compress your validation work, and stand in for your experience. Write tests with AI, but validate their quality and coverage yourself.
I think the whole discussion about coding agent reliability is missing the elephant in the room: it is not vibe coding but vibe testing. That is when you run the code a few times and say LGTM, the best recipe to shoot yourself in the foot no matter whether the code was hand-written or made with AI. Just put the screws on the agent and let it handle a heavy test harness.
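One concrete way to put those screws on is randomized property checks rather than a few manual runs. A minimal sketch using only the standard library; `dedupe_preserve_order` stands in for some hypothetical agent-written function under test:

```python
import random

def dedupe_preserve_order(items):
    # hypothetical agent-written function under test
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def test_harness(trials: int = 1000) -> bool:
    """Randomized check: the output has no duplicates, contains
    exactly the input's elements, and keeps first-occurrence order."""
    for _ in range(trials):
        data = [random.randint(0, 9) for _ in range(random.randint(0, 20))]
        result = dedupe_preserve_order(data)
        assert len(result) == len(set(result))            # no duplicates
        assert set(result) == set(data)                   # same elements
        assert sorted(result, key=data.index) == result   # order preserved
    return True
```

A thousand randomized inputs asserting invariants catches far more than eyeballing three runs, whoever (or whatever) wrote the code.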
This is a very good point; however, the risk of writing bad or non-extensive tests is still there if you don’t know what good looks like! The grind will still need to be there, but it will be a different way of gaining experience.
There will still be a wide spectrum of people that actually understand the stack - and don’t - and no matter how much easier or harder the tools get, those people aren’t going anywhere.
Compression algorithms for experience are of great interest to ML practitioners and they have some practices that seem to work well. Curriculum learning, feedback from verifiable rewards. Solve problems that escalate in difficulty, are near the boundary of your capability, and ideally have a strong positive or negative feedback on actions sooner rather than later.
While I don’t agree with you, I keep a healthily skeptical outlook and am trying to understand this too: what is the hard data? I saw a study a while ago about drops in productivity when devs of OSS repos were AI-assisted, but the sample size was far too low and the repos were quite large. Are you referring to other studies or data supporting this? Thanks!
I, individually, am certainly much more productive in my side projects when using AI assistance (mostly Claude and ChatGPT). I attribute this to two main factors:
First, and most important, I have actually started a number of projects that have only lived in my head historically. Instead of getting weighed down in “ugh I don’t want to write a PDF parser to ingest that data” or whatever, my attitude has become “well, why not see if an AI assistant can do this?” Getting that sort of initial momentum for a project is huge.
Secondly, AI assistants have helped me stretch outside of my comfort zone. I don’t know SwiftUI, but it’s easy enough to ask an AI assistant to put things together and see what happens.
Both these cases refer almost necessarily to domains I’m not an expert in. And I think that’s a bigger factor in side projects than in day jobs, since in your day job, it’s more expected that you are working in an area of expertise.
Perhaps an exception is when your day job is at a startup, where everyone ends up getting stretched into domains they aren’t experts in.
Anyways, my story is, of course, just another anecdote. But I do think the step function of “would never have started without AI assistance” is a really important part of the equation.
1. Learning curve: Just like any skill there is a learning curve on how to get high quality output from an LLM.
2. The change in capabilities since recent papers were authored. I started intensively using the agentic coding tools in May. I had dabbled with them before that, but the Claude 3.7 release really changed the value proposition. Since May with the various Claude 4, 4.1 and 4.5 (and GPT-5) the utility of the agentic tools has exploded. You basically have to discard any utility measure before that inflection point, it just isn't super informative.
Is it also possible that one side effect of this is that people who drive recreationally sometimes become exceptionally good at it? See how many great F1/rally drivers Finland has produced. Clearly not good when it happens while drunk, though.
Yes, I think it's definitely a factor. Recreational driving is a favorite pastime in the countryside, and due to the forest industry there are lots of dirt roads which are perfect for rally driving, plus many purpose-built race tracks around the country as well. So the barrier to entry is probably lower than in most places. It's also not too uncommon for kids whose parents own / have access to some land to have an old, unregistered car to practice with away from public roads.
There is even a popular racing class called "jokamiehenluokka", where drivers are obliged to sell their cars for 2000 euros if somebody makes an offer. That rule is designed to keep the barrier to entry low, as drivers don't have an incentive to invest too much into their car. Apparently you can take the exam to join at the age of 15, which is 3 years before the normal minimum age for a driving license.
I recommend the game "My Summer Car" for those interested in all this culture.
“trust arrives on foot and leaves on horseback”
Seems it’s applicable to this case too. Sad to see decades of work being torn apart in a few months.