How is this faster than just reading the documentation? Given that LLMs hallucinate, you have to double-check everything they say against the docs anyway
I learn fastest from the examples, from application of the skill/knowledge - with explanations.
AIs let me get on with Python MUCH faster than I was managing by myself, and I understood more of the arcane secrets of jq in 6 months than I had in a few years before.
And an AI's mistakes are a brilliant opportunity to debug, to analyse, and to go back to it saying "I beg your pardon, wth is this" :) pointing at the elementary mistakes you now see because you understand the flow better.
Recently I had a fantastic back and forth with Claude about one of my precious tools written in Python. I was trying to understand the specifics of a particular function's behaviour, discussing typing, arguing about trade-offs and portability. The thing I really like is that I always get pushback, or things to consider, if I come up with something stupid.
It's a tailored team exercise and I'm enjoying it.
Windows API docs for the older Win32 stuff are extremely barebones. WinRT is better, but can still be confusing.
I think AI is really great for getting started with systems programming, as you can tailor the responses to your level, ask it to solve specific build issues and so on. You can also ask more obscure questions and it will at least point you in the right direction.
Apple docs are also not the best for learning, so as a documentation browser that auto-generates examples, I think AI is great.
Human teachers make mistakes too. If you aren't consuming information with a skeptical eye you're not learning as effectively as you could be no matter what the source is.
The trick to learning with LLMs is to treat them as one of multiple sources of information, and work with those sources to build your own robust mental model of how things work.
If you exclusively rely on official documentation you'll miss out on things that the documentation doesn't cover.
If I have to treat LLMs as a fallible source of information, why wouldn't I just go right to the source though? Having an extra step in between me and the actual truth seems pointless
If the WinAPI docs are solid you can do things like copy and paste pages of them into Claude and ask a question, rather than manually scanning through them looking for the answer yourself.
Apple's developer documentation is mostly awful - try finding out how to use the sips or sandbox-exec CLI tools for example. LLMs have unlocked those for me.
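As a concrete example of what got unlocked (a sketch, not gospel: the only flags assumed here are `-Z` and `--out`, which you can confirm against `sips --help` on any Mac), here's how you'd build a `sips` resize invocation from Python:

```python
# Sketch: building a macOS `sips` command line to resize an image.
# -Z caps the longer dimension at max_px while preserving aspect ratio;
# --out writes the result to a new file instead of modifying in place.
def sips_resize_cmd(src: str, dest: str, max_px: int) -> list[str]:
    return ["sips", "-Z", str(max_px), src, "--out", dest]

# To actually run it on a Mac:
# subprocess.run(sips_resize_cmd("photo.jpg", "small.jpg", 800), check=True)
```

Getting even that much out of the man page took me longer than asking an LLM and then verifying the answer.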
If you're good at programming you can usually tell exactly why it worked or didn't work. That's how we've all worked before coding agents came along too - you don't blindly assume the snippet you pasted off StackOverflow will work, you try it and poke at it and use it to build a firm mental model of whether it's the right thing or not.
Sure. A big part of how I'd know that the function I'm calling does what I think it does is by reading the documentation associated with it.
Does it have any threading preconditions? Any weird quirks? Any strange UB? That's stuff you can't find out just by testing. You can ask the LLM, but then you have to read the docs anyway to check its answer
Except you have no idea if what the LLM is telling you is true
I do a lot of astrophysics. Universally, LLMs are wrong about nearly every astrophysics question I've asked them, even the basic ones, in every model I've ever tested. It's terrifying that people take these at face value
For research at a PhD level, they have absolutely no idea what's going on. They just make up plausible-sounding rubbish
Astrophysicist David Kipping had a podcast episode a month ago reporting that LLMs are working shockingly well for him, as well as for the faculty at the IAS.[1]
It's curious how different people come to very different conclusions about the usefulness of LLMs.
The answer it gave was totally wrong. It's not a hard question. I asked it the question again today, and some of it was right (!). That's such a low bar for basic questions
Why does it matter? We have tables of contents, indexes, and references for books and other content. That's a lot of navigational aid. They also help by giving you a general overview of the domain.
Bam, that's the single source of truth right there. Microsoft's docs are pretty great
If I use an LLM, I have to ask it for the documentation about "GetQueuedCompletionStatus". Then I have to double-check its output, because LLMs hallucinate
Double-checking its output involves googling "GetQueuedCompletionStatus", finding this page:
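And that page is worth the trip, because GetQueuedCompletionStatus has exactly the kind of quirk you'd never guess from an LLM summary alone: per Microsoft's docs, a FALSE return means two different things depending on whether the OVERLAPPED pointer came back NULL. Here's that decision sketched in Python for illustration (the real call is C, of course):

```python
# Interpreting a GetQueuedCompletionStatus result, per Microsoft's docs:
# - returns TRUE: a successful completion packet was dequeued
# - returns FALSE with NULL lpOverlapped: nothing was dequeued
#   (timeout, or a bad call such as an invalid port handle)
# - returns FALSE with non-NULL lpOverlapped: a packet for a *failed*
#   I/O operation was dequeued; check GetLastError for the reason
def classify_gqcs(returned_true: bool, overlapped_is_null: bool) -> str:
    if returned_true:
        return "completed"
    if overlapped_is_null:
        return "timeout_or_error"
    return "failed_io"
```

Miss that distinction and your I/O completion loop silently drops failed operations, which is the sort of thing the docs state plainly and an unchecked LLM answer may not.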
I have not done win32 programming in 12 years. Maybe you've done it more recently. I'll use an LLM and you look up things manually. We can see who can build a win32 admin UI that shows a realtime view of every open file by process, with sorting, filtering and search on both the files and process/command names.
I estimate this will take me 5 minutes
Would you like to race?
This mentality is fundamentally why I think AI is not that useful: it underscores everything that's wrong with software engineering, and what makes a very poor-quality senior developer
I'll write an application without AI that has to be maintained for 5 years with an ever-evolving feature set, and you can write your own with AI, and we'll see which codebase is easiest to maintain, the most productive to add new features to, and has the fewest bugs and the best performance
Sure, let's do it. I am pretty confident mine will be more maintainable, because I am an extremely good software engineer, AI is a powerful tool, and I use AI very effectively
I would literally claim that with AI I can work faster and produce higher quality output than any other software engineer who is not using AI. Soon that will be true for all software engineers using AI.