I've been hesitant lately to dedicate much time to learning how to perfect prompts. Every new model version, not to mention different LLMs (Google's here [1]), responds differently. With the rapid advancement we're seeing, in two years or five we might not even need such complex prompting as systems get smarter.
I know I'm the dummy here because people are doing useful stuff with these techniques, but I don't think I'll ever shake the feeling that this can't possibly be the way forward, that it's a short-lived local maximum.
That's a good point. I remember in elementary school, our librarians taught us how to find information online using some really technical search engine. Maybe it was called JSTOR or something similar? They hated Google and said you couldn't filter properly with it the way you could with theirs. But years later I never heard of that other one again, and I used Google regularly.
[1]: https://ai.google.dev/docs/prompt_intro