
Most normal people want the LLM to remember their interests and favourite things, so they don't have to manually re-explain when asking for advice.

They also don't know what "context" is or that the LLM has a limited number of tokens it can understand at any given time. They just believe it knows everything at once.



Do you have example prompts where this would be useful? Why would you want an LLM to know your favorite type of cheese? Now that I say that, I guess if you use it for recipes then it's useful if it remembers things like dietary restrictions. And even then a project seems like the better option.

I can't think of much else though so I'm still curious what you or others use it for.


ChatGPT knows what's in my bar and what types of base liquors I love and/or can't drink. It knows what fruit, syrups and mixes are in my fridge. It knows that my friend is allergic to mint. It knows that when I ask for recommendations, I tend to want a choice between spirit forward, tiki, martini and herbaceous.

ChatGPT knows the broad strokes of the 3-4 main hardware projects I have on the go, and depending on the questions I'm asking, it will often structure its responses in a way that differentiates based on which one I'm thinking about.

It knows what resistor and capacitor values I have on my pick and place machine, and when I ask for divider ratios it will do its best to calculate based on those values to the degree that it will chain 1-2 resistors together to achieve those ratios.
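The ratio-matching the commenter describes can be sketched in a few lines. This is a hypothetical illustration, not what ChatGPT actually runs: given a stock list of resistor values (the `STOCK` list below is made up), it searches single resistors and series pairs for the R2 leg that best hits a target divider ratio Vout/Vin = R2 / (R1 + R2).

```python
# Sketch: choose resistors from a (hypothetical) stock list to approximate
# a target divider ratio, allowing the R2 leg to be one resistor or two
# chained in series, as described above.
from itertools import combinations_with_replacement

STOCK = [1000, 2200, 4700, 10000, 22000, 47000]  # example values, in ohms

def best_divider(target_ratio):
    # Candidate R2 values: any single resistor, or any two in series.
    r2_options = set(STOCK) | {a + b for a, b in combinations_with_replacement(STOCK, 2)}
    best = None
    for r1 in STOCK:
        for r2 in r2_options:
            ratio = r2 / (r1 + r2)
            err = abs(ratio - target_ratio)
            if best is None or err < best[0]:
                best = (err, r1, r2, ratio)
    return best

err, r1, r2, ratio = best_divider(0.3)
```

With the example stock above, a target ratio of 0.3 is matched to within a fraction of a percent by chaining two resistors on the R2 leg.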

It knows what kind of solder I use, and has warned me about components with sensitive reflow temperature concerns.

It's an extraordinarily useful feature for engineering and drinking, two things that are commonly found in the same Venn diagram.


Thank you! That helped me understand. Hobbies that you do regularly, and that an LLM is continuously helpful for, benefit from memory.

Personally, I would still be wary of the black box aspect - not knowing what it does remember and what it doesn't - so I would probably still use projects to make it more deterministic. But that's probably being overcautious and unnecessary in most common cases.


> It knows what resistor and capacitor values I have on my pick and place machine, and when I ask for divider ratios it will do its best to calculate based on those values to the degree that it will chain 1-2 resistors together to achieve those ratios.

Also relevant: it knows that you know what a resistor and capacitor is, and is able to tune responses to your level of knowledge. (It's not great at this, in my experience, since domain knowledge is still so jagged, but I think it's better than nothing.)


Is it just me who's getting freaked out by this?

I know it's a boiling frog situation, but seeing it spelt out feels so icky compared to how Google ads feel.

I really want my personality data deleted from big tech...

Sigh


Hackers: I won't give my data to Google! I'll use a niche OS on a niche rooted phone, and it's OK that my bank's app won't work.

Also hackers: that comment.


I will say graciously that seeing this question asked here is absolutely stunning to me

If I ask a question about vehicles, it knows what cars I have and what I like in cars.

If I ask a question about vacation spots, it knows my party's composition and preferences.

Things like that


I asked chatgpt a car related question in a fresh chat, and it answered it specifically with my car in mind.

Turns out a few months before, I had told it in a prompt what car I was driving.

I turned memory off that day.


Can projects overlap? If not, there's general context information that's often useful.

My job, my kids and time preferences around those things, my preferred tech setup and way of working and types of tech I’m better at. Things I already have (home assistant, little nuc, etc). I can throw a random question and not have to add this kind of information or manage it.


I get that those are the things that go into memory. What I don't get is what kind of prompt your job and kids are useful information for. Especially on the regular.


Let’s see, recently:

Home automation fixing

Proposed integrations with some services locally

Science experiments explained at a few levels, finding good background info and where to read up about some safety information

Maths help for specific areas my kids are looking at and proposed games for that

Evaluation of coding options for my kids

How to link up some ideas on coding, electronics and using the home automation side as some fun outputs

LED strip info and work, again integrating with smart homes and what’s good around the kids

Framework evaluations for automation at work and home

Crystal identification

Looking up local council info

Relevant music suggestions for kids to play on the piano

Here some things cross over. I'm happy writing code, I typically want easy open source options, I have languages and tech I prefer, I'm moving things to Matter, I have Home Assistant, my son is excellent at maths given his age but I'm working more on comprehension of problems, and a lot more. All those are things that, with a bit of background info, change the types of answers I get and make it more useful.


I had the same question a few days ago here: https://news.ycombinator.com/item?id=47162828

I didn't receive an answer besides "that's what people like", but I still can't think of (m)any situations where anyone would prefer it.


The reply about knowledge about their job and family made me think.

The only thing I can now think of is using it as a personal therapist. Or asking how to approach their kids. And they're a bit embarrassed about it, because it's still outside the Overton window - especially on HN - which is why they aren't sharing it.

If someone has different usecases, please do prove me wrong! Maybe I just lack imagination.


Such an incredible amount of personal, intimate knowledge to share with a company. Sure, Google can figure out where I live and who I visit because I have an Android phone, but they'll never know the contents of those relationships.

I have a line in the sand with the AI vendors. It's a work relationship. If I wouldn't share it with a colleague I didn't know super well, I'm not telling it to an AI vendor.


I recently asked about baby-led weaning. If my baby were 2 months old, it would have been smart to mention "not yet!" but it knows she's 8 months old and was able to give contextual advice.

I ask gpt a lot of questions about plants and gardening - I’m happy that it remembers where I live and understands the implications. I could remind it in every question, but this is convenient.


I was redoing our agency's website and thinking about new sections. Claude already knows who I am and what we do, so it was able to offer extremely relevant suggestions based on this without any further prompting.

In my personal experience the memory in Claude works much better than in ChatGPT where it indeed feels forced and leads to "remember the user loves cheese" moments.


I use it for my work. So I want it to remember everything about my business, website, the domain, which country we operate in, and on and on. It's a ton of context which I don't want to repeat each time.


That's what projects are for. All the major chatbot companies have some equivalent and all support a standard instruction where you can include anything you need automatically.


I broke my ankle and have multiple chats related to medicine, physical therapy, pain management, lawyer questions, how to handle messaging to boss and HR


Sure.

ChatGPT "knows" (has context that includes) some of the things I'm good at, and some of the things I'm not good at. I have my own tolerances for communication and it has context about that, too.

I use the bot for mostly techy things. So, for instance, I'm alright with using tools, and building electronics, and punting around on a Linux box so I don't need my hand held for that. But I'm terrible at writing code, so baby steps and detailed explanation there helps me a lot. I strongly prefer pragmatism and verifiable facts. I despise sycophant speech, the empty positivity of corpo-speak, assumptions, false praise, superfluous verbosity, and apologies and/or the implication of feelings from bots.

Through a combination of some deliberate training (custom instructions, memory), and just using it (shared context), it mostly does what I want in the way that I want it done -- the first time.

I don't have to steer in the right direction with every new session. There was a time when that was necessary, but it is no longer that way. Adjustments happen increasingly automatically these days.

That saves me time and frustration, and enhances the utility of the bot.

Meanwhile: Others have their own skills and preferences that may be very different in comparison to my own. That's OK. We each get to have our own experience.



