Hacker News

I have a personal writing app that uses the OpenAI models, and this post is bang on. One lesson I learned, related to "Lesson 1: When it comes to prompts, less is more":

I was trying to build an intelligent search feature for my notes and asking ChatGPT to return structured JSON data. For example, I wanted to ask "give me all my notes that mention Haskell in the last 2 years that are marked as draft", and let ChatGPT figure out what to return. This only worked some of the time. Instead, I put my data in a SQLite database, sent ChatGPT the schema, and asked it to write a query to return what I wanted. That has worked much better.
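A minimal sketch of that flow, assuming a hypothetical `notes` schema (the real one isn't shown); the query string here stands in for the SQL the model might send back:

```python
import sqlite3

# Hypothetical schema for the notes app (an assumption for illustration).
SCHEMA = """
CREATE TABLE notes (
    id INTEGER PRIMARY KEY,
    title TEXT,
    body TEXT,
    status TEXT,          -- e.g. 'draft' or 'published'
    created_at TEXT       -- ISO 8601 date
);
"""

# In the real app you'd send SCHEMA plus the user's question to the model
# and execute whatever query it returns. This is the kind of SQL it might
# produce for "notes mentioning Haskell in the last 2 years, drafts only":
query = """
SELECT id, title FROM notes
WHERE body LIKE '%Haskell%'
  AND status = 'draft'
  AND created_at >= date('now', '-2 years');
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
conn.execute(
    "INSERT INTO notes (title, body, status, created_at) "
    "VALUES (?, ?, ?, date('now'))",
    ("Monads", "Notes on Haskell monads", "draft"),
)
rows = conn.execute(query).fetchall()
```

The nice part of this approach is that the model only has to get a small, well-specified artifact (one SQL query) right, and the database does the actual filtering deterministically.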



This seems like something that would be better suited by a database and good search filters rather than an LLM...


I set up a search engine to feed a RAG setup a while back. At the end of the day, I took out the LLM and just used the search engine. That was where the value turned out to be.


Something something about everything looking like a nail when you’re holding a hammer


Have you tried response_format=json_object?

I had better luck with function-calling to get a structured response, but it is more limiting than just getting a JSON body.
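For reference, JSON mode is just a flag on the request; a sketch assuming the openai Python client's chat-completions call (request shown unsent, with a sample of the kind of content the model would return):

```python
import json

# Assumed request shape for client.chat.completions.create(**request);
# the model name is an assumption (any JSON-mode-capable model works).
request = dict(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},  # force syntactically valid JSON
    messages=[
        # JSON mode requires the word "JSON" to appear in the prompt.
        {"role": "system",
         "content": "Reply in JSON with keys 'keyword' and 'draft_only'."},
        {"role": "user",
         "content": "notes mentioning Haskell in the last 2 years, drafts only"},
    ],
)

# With json_object set, the returned content is guaranteed to parse:
sample_content = '{"keyword": "Haskell", "draft_only": true}'
parsed = json.loads(sample_content)
```

Note that JSON mode only guarantees valid JSON, not any particular keys or schema, so you still validate the parsed object.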


I haven't tried response_format; I'll give that a shot. I've had issues with function calling: sometimes it works, sometimes it just returns random Python code.
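FWIW, a sketch of the function-calling setup that's been most reliable for me, assuming the chat-completions `tools` parameter (the function name and fields here are made up). The model replies with a tool call whose arguments are a JSON string, which you parse instead of trusting free-form text:

```python
import json

# Hypothetical tool definition for structured note search.
tools = [{
    "type": "function",
    "function": {
        "name": "search_notes",  # made-up name for illustration
        "description": "Search the user's notes.",
        "parameters": {
            "type": "object",
            "properties": {
                "keyword": {"type": "string"},
                "draft_only": {"type": "boolean"},
            },
            "required": ["keyword"],
        },
    },
}]

# Sample of the arguments string a tool call would carry back:
sample_arguments = '{"keyword": "Haskell", "draft_only": true}'
args = json.loads(sample_arguments)
```

Passing `tool_choice` to force that specific function (rather than leaving it to "auto") should cut down on the model replying with prose or stray code instead of a call.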


Using the openai Python library?




