>I kinda feel like this is a self-placating statement that is not going to stay true for that long. We are so early in the process of developing AI good enough to do any of these things. Yes, right now you need senior level design skills and programming knowledge, but that doesn't mean that will stay true.
So you really think that in a few years some guy with no coding experience will ask the AI "Make me a GTA 6 clone that happens in Europe" and the AI will actually make it, the code will just work, and the performance will be excellent?
The LLMs can't do that. They are attracted to solutions they've seen in their training, which means they sometimes overcomplicate things, miss clever solutions, or fail to apply theory, and sometimes they are just stupid and hallucinate variable names and functions: say 50% of the time it would use speed and 50% of the time it would use velocity, and the code will fail because of undefined references.
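To illustrate the failure mode, here's a contrived TypeScript sketch (made-up code, not output from any particular model):

```typescript
// The model declares the value under one name...
let speed = 0;

function accelerate(delta: number): void {
  // ...then drifts to the statistically more common name from its
  // training data, referencing a variable that was never declared.
  velocity += delta; // error: Cannot find name 'velocity'
}
```

TypeScript at least catches this at compile time; in plain JavaScript the same slip only surfaces as a ReferenceError at runtime.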
I am not afraid of LLMs taking my job; I am afraid of bullshit marketing that convinces the CEO/management that if they buy me Claude, then I must work 10x faster.
And, somewhat beside the point, generative AI is getting better at a lot of those things as well. Maybe I want to believe this will happen because it's probably the only way to get a sequel to Sleeping Dogs.
Art, design, testing, producer stuff, composer, sound designer, writer. Then product and creative manager, art and technical director, and then all the rest. And at the end the game can be no fun at all.
Also, GTA still has thousands of bugs; I can't imagine how many an AI would make, or whether you could fix them all to actually complete a massive project like that.
> So you really think that in a few years some guy with no coding experience will ask the AI "Make me a GTA 6 clone that happens in Europe" and the AI will actually make it, the code will just work, and the performance will be excellent?
There is definitely a path from here to the future where the most senior engineer in your org/dept/team decides he can make some big project without some subset of more-junior employees because he has Claude. The managers or PMs won’t be coding without engineers, but it’s definitely possible for engineers to code with fewer teammates, especially if the very experienced ones are the ones planning and guiding the effort.
> The LLMs can't do that. They are attracted to solutions they've seen in their training, which means…
None of the things you've said follow from this match my experience using LLMs to write real, usable, viable code. It might not be the most performant or perfect code, but it’s certainly usable, and most software isn’t written at Google or whatever and doesn’t need to support hundreds of millions of customers at scale. If it took a day instead of a month, then “the business” might decide that’s a worthy tradeoff.
>None of the things you've said follow from this match my experience using LLMs to write real, usable, viable code. It might not be the most performant or perfect code, but it’s certainly usable, and most software isn’t written at Google or whatever and doesn’t need to support hundreds of millions of customers at scale. If it took a day instead of a month, then “the business” might decide that’s a worthy tradeoff.
It depends on your project. I've seen a lot of stupidity in the AI: in a Lua project where arrays were 1-indexed, it would 0-index them; somehow the C-like behaviour was too strong a force and dragged the model in that direction.
For example, when I test an image generator I ask it to create a photo of the front of a book store and to include no brands, labels, or text (because they always include English text, and most of the time there are spelling errors). But the AIs can't make a shop without the branding/text above the door; they are just so overtrained on this concept that explicit commands can't fix it.
It's the same with LLMs: they are attracted to the average, most popular shit they've seen in the training data. So without instructions from you, or maybe from the provider's hidden prompts, it will output outdated JavaScript using "var", it will output unoptimized algorithms, and even if you used a specific variable name it will be strongly pushed to rename it to whatever is most popular in the training data.
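A contrived sketch of that pull toward the statistical average (hypothetical model output, written here in TypeScript):

```typescript
type Item = { price: number };
const items: Item[] = [{ price: 2 }, { price: 3 }];

// The dated but overrepresented style the model drifts back to:
var total = 0;
for (var i = 0; i < items.length; i++) {
  total += items[i].price;
}

// The modern, block-scoped style you actually asked for; both compute 5,
// but you have to keep re-prompting to get this one consistently.
const modernTotal = items.reduce((sum, item) => sum + item.price, 0);
```

Both snippets are valid, which is exactly the problem: nothing forces the model off the statistically dominant style.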
Yes, I can make the LLMs write some good code, but only if I babysit them: tell them exactly what files to read as inspiration, what features to use, and what to do.
For sure I can't just paste in the text of a ticket and let it run free.
I also use it to review my code for bugs. It can find up to 50% of the bugs, and it hallucinates others that are not possible (like it would suggest that if $x is null then something would crash and I should check for that, but the type system already ensures $x can't be null). So it really needs more training to do simple stuff... and to be original, and not just regurgitate the most popular things it was trained on, it would need to be something not based on the LLM architecture.
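Translating that PHP-ish example into TypeScript (a made-up review suggestion, assuming strict null checks are enabled):

```typescript
type User = { name: string };

// The signature already guarantees 'user' is never null here; under
// strictNullChecks, greet(null) would not even compile.
function greet(user: User): string {
  // A hallucinated review comment might still insist:
  //   "Consider checking whether 'user' is null before accessing 'name'."
  // The type system has already ruled that case out, so the warning is noise.
  return `Hello, ${user.name}`;
}
```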
What if it took 1 day, and 1 month later prod is on fire and no one knows why? Speed does not equate to quality. And in my experience, after 1.0, discussions about the features take more time than coding them. Especially with paying customers.
> So you really think that in a few years some guy with no coding experience will ask the AI "Make me a GTA 6 clone that happens in Europe" and the AI will actually make it, the code will just work, and the performance will be excellent?
I don't know the answer any more than anyone else does, and obviously I'm skeptical that it'll happen.
But then if I think back to 2018 and imagine what I would have thought if I saw even GPT-OSS-20b back then, it would have been close to magic and absolutely not something I would have expected. I felt the same about GPT-2 when it first launched, when LLMs started to show a small bit of promise. GPT-3 was insane even when it launched.
So I guess I wouldn't base "what could happen in the future" on what I personally believe is possible, because LLMs themselves fell into that camp just a few years ago. So why not larger coding tasks too, even if I see them as unlikely today?