Fwiw, Copilot is not a particularly powerful LLM. It's at most a glorified, smarter autocomplete. I personally use LLMs for coding a lot, but Copilot is not really what I have in mind when I say that.
Rather, I'd be using something like the Zed editor with its AI Assistant integration and Claude 3.5 Sonnet as the model: I first give it context in the chat window (relevant files, pages, database schema, documents it should reference and know about), possibly discuss the problem with it briefly, and only then, with all of that as context in the prompt, ask it to author or edit a piece of code (via the inline assist feature, which "sees" the current chat).
But it's generally most useful for "I know exactly what I want to write or change, but it'll take me 30 minutes to do so, while with the LLM I can do the same in 5 minutes". They're also quite good at "tell me edge cases I might not have considered in this code" - even if 80% of the suggestions are irrelevant, it'll often come up with something you hadn't thought of.
There are definitely problems they're worse than useless at, though.
Where more complex reasoning is warranted, the OpenAI o1 series of models can be quite decent, but it's hit or miss, and with prompt sizes like the above you're looking at $1-2 per query.
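For a rough sense of where that cost comes from, here's a back-of-the-envelope sketch. The per-token prices are assumptions based on o1's list pricing when I last checked (roughly $15 per 1M input tokens, $60 per 1M output tokens) and the token counts are made-up but typical for this kind of big-context query:

    # Rough cost estimate for a single o1 query with a large pasted-in context.
    # Prices are assumptions - check current rates before relying on them.
    INPUT_PRICE_PER_TOKEN = 15 / 1_000_000    # ~$15 per 1M input tokens
    OUTPUT_PRICE_PER_TOKEN = 60 / 1_000_000   # ~$60 per 1M output tokens
                                              # (o1 bills its hidden reasoning tokens as output too)

    def query_cost(input_tokens: int, output_tokens: int) -> float:
        """Dollar cost of one request at the assumed per-token prices."""
        return input_tokens * INPUT_PRICE_PER_TOKEN + output_tokens * OUTPUT_PRICE_PER_TOKEN

    # e.g. ~60k tokens of files/schema/discussion in, ~5k tokens of reasoning + code out:
    print(f"${query_cost(60_000, 5_000):.2f}")  # -> $1.20

With a few hundred kB of files and chat history in the context, the input side alone lands you in that $1-2 range, which is why I only reach for o1 when the cheaper models actually fail.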