Yeah, this is the main problem. Writing code just isn't the bottleneck. The hard part is discovering the business case. And if you don't know what that is, you can't prompt your way out of it.
We've been having a go-around with corporate leadership at my company about "AI is going to solve our problems". Dude, you don't even know what our problems are. How are you going to prompt an AI to analyze a 300-page PDF on budget policy when you can't even tell me how you'd read a 300-page PDF with your own eyes to analyze the budget policy?
I'm tempted to give them what they want: just a chatterbox they can ask, "analyze this budget policy for me", just so I can see the looks on their faces when it spits out five poorly written paragraphs full of niceties that talk their way around ever doing any analysis.
I don't know, maybe I'm too much of a perfectionist. Maybe I'm the problem because I value getting the right answer rather than just spitting out reams of text nobody is ever going to read anyway. Maybe it's better to send the client a bill and hope they are using their own AIs to evaluate the work rather than reading it themselves? Who would ever think we were intentionally engaging in Fraud, Waste, and Abuse if it was the AI that did it?
> I'm tempted to give them what they want: just a chatterbox they can ask, "analyze this budget policy for me", just so I can see the looks on their faces when it spits out five poorly written paragraphs full of niceties that talk their way around ever doing any analysis.
Ah, but they'll love it.
> I don't know, maybe I'm too much of a perfectionist. Maybe I'm the problem because I value getting the right answer rather than just spitting out reams of text nobody is ever going to read anyway. Maybe it's better to send the client a bill and hope they are using their own AIs to evaluate the work rather than reading it themselves? Who would ever think we were intentionally engaging in Fraud, Waste, and Abuse if it was the AI that did it?
We're already doing all the same stuff; it's just that today it's not AI doing it, it's people. One overworked, stressed person somewhere produces a poorly designed, buggy library, and then millions of other overworked, stressed people spend most of their working hours figuring out how to cobble dozens of such poorly designed, buggy pieces of code together into something that kinda sorta works.
This is why top management is so bullish on AI: it's a perfect fit for a model they have already established.
I've got my own gripes about leadership, but I'm finding that even when it's a goal I've set for myself, watching an AI fail at it leads to a refinement of what I thought I wanted: I'm not much better than they are.
That, or it's a discovery of why what I wanted is impossible, and it's back to the drawing board.
It's nice to not be throwing away code that I'd otherwise have been a perfectionist about (and still thrown away).