
Interesting. I’ve been playing with something similar at the coding-agent-harness message-sequence level (memory, I guess). I’m looking at human-driven UX for compaction and for resolving/pruning dead ends.


Human-driven compaction is interesting — you sidestep the "what's worth keeping" problem by putting a person in the loop. The tradeoff I've hit is that agents running autonomously need it to happen automatically or coherence degrades fast between sessions.

For pruning we landed on a last-touched timestamp + recall frequency counter per memory. Things not accessed in N sessions that were weakly formed to begin with get soft-deleted. Human review before hard delete is probably better UX if your setup allows it.
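A minimal sketch of that pruning heuristic, assuming "a session" maps to roughly a day and that "weakly formed" means rarely recalled (names and thresholds here are illustrative, not our actual implementation):

```python
import time

SESSION_SECONDS = 24 * 3600   # treat "a session" as roughly a day (assumption)
STALE_SESSIONS = 5            # N sessions without access
WEAK_RECALLS = 2              # "weakly formed" = recalled fewer times than this

class Memory:
    def __init__(self, text, created_at=None):
        self.text = text
        self.last_touched = created_at if created_at is not None else time.time()
        self.recalls = 0
        self.soft_deleted = False

    def touch(self):
        """Called on every recall: bump the frequency counter and timestamp."""
        self.last_touched = time.time()
        self.recalls += 1

def prune(memories, now=None):
    """Soft-delete entries that are both stale and weakly formed; return survivors."""
    now = now if now is not None else time.time()
    for m in memories:
        stale = (now - m.last_touched) > STALE_SESSIONS * SESSION_SECONDS
        weak = m.recalls < WEAK_RECALLS
        if stale and weak:
            m.soft_deleted = True   # human review happens before any hard delete
    return [m for m in memories if not m.soft_deleted]
```

Soft-delete rather than hard-delete is what makes the human-review step possible: the entry is hidden from recall but still recoverable.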

Curious what "dead ends" look like in yours; conversational chains that didn't resolve, or factual ones?


> The tradeoff I've hit is that agents running autonomously need it to happen automatically or coherence degrades fast between sessions.

Yeah, that makes total sense. I wonder (and am sure the labs are doing so) whether the HitL output would be good for fine-tuning the models used to do it autonomously.

I’m sticking with humans for the moment because I’m not sure where the boundaries lie: what actually makes it better and what makes it worse. It’s non-obvious so far.

Pruning “loops” has been pretty effective, though: cases where a model gets stuck over N turns checking the same thing over and over and doesn’t break out of it until way later. That has been good because it gives strong context-size benefits, and it’s also the most automatable part, I think.
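One way to sketch that loop pruning, under the assumption that a "loop" is the same tool call with the same arguments repeated over consecutive turns (the normalization and window size here are made up for illustration, not the commenter's actual method):

```python
def normalize(turn):
    """Collapse a turn to a coarse signature (tool name + arguments, as an assumption)."""
    return (turn["tool"], str(turn.get("args", "")))

def find_loop_spans(turns, min_repeats=3):
    """Return (start, end) index spans where one signature repeats consecutively."""
    spans, i = [], 0
    while i < len(turns):
        j = i
        sig = normalize(turns[i])
        while j + 1 < len(turns) and normalize(turns[j + 1]) == sig:
            j += 1
        if j - i + 1 >= min_repeats:
            spans.append((i, j))
        i = j + 1
    return spans

def prune_loops(turns, min_repeats=3):
    """Keep only the last turn of each detected loop (the one the agent acted on)."""
    drop = set()
    for start, end in find_loop_spans(turns, min_repeats):
        drop.update(range(start, end))   # keep index `end`
    return [t for k, t in enumerate(turns) if k not in drop]
```

Keeping the final turn of each loop preserves the eventual answer while reclaiming the context the repeated checks burned.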

Pruning factually incorrect turns is something I’m trying, and pruning “correct” but “not correct based on my style” as well. Building a dataset of it all is fun :)


> I’m sticking with humans for the moment

Haha, totally get this.

The HitL fine-tuning angle is exactly right. The labeled dataset you're building (good/bad/stylistically-wrong memory events) is probably worth more than the compaction itself. Coherence preferences are surprisingly personal — what reads as "not correct based on my style" is hard to spec without examples.

The loop-pruning maps really cleanly to the contradiction detection in our setup. A model circling the same state N times often means it stored an inconclusive result with the same confidence as a resolved one; the two look identical at recall time. Tagging memory entries with a status (open, resolved, or contradicted) before they go in cuts a lot of that.
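The status-tagging idea might look roughly like this (a sketch with an illustrative API, not engram-mcp's actual interface):

```python
from enum import Enum

class Status(Enum):
    OPEN = "open"
    RESOLVED = "resolved"
    CONTRADICTED = "contradicted"

class MemoryStore:
    def __init__(self):
        self._entries = []

    def write(self, text, status=Status.OPEN):
        """Every entry carries a status at write time, not just at cleanup."""
        self._entries.append({"text": text, "status": status})

    def resolve(self, text, outcome):
        """Promote an open entry once the question is actually settled."""
        for e in self._entries:
            if e["text"] == text and e["status"] is Status.OPEN:
                e["status"] = Status.RESOLVED
                e["text"] = f"{text} -> {outcome}"

    def recall(self, *, settled_only=False):
        """Recall can distinguish inconclusive checks from settled facts."""
        if settled_only:
            return [e for e in self._entries if e["status"] is Status.RESOLVED]
        return list(self._entries)
```

The point is that an open question and a resolved fact stop being interchangeable at recall, which is what breaks the re-checking loop.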

On the autonomy question: we ended up treating certainty as continuous rather than binary. Low-certainty memories stay soft; high-certainty ones get promoted. Automatic compaction only operates on the low end, and higher-certainty entries are off-limits without explicit override. That lets you keep the autonomy without the coherence risk. The failure mode shifts from "deleted something important" to "kept something stale too long," which feels more recoverable.
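A minimal sketch of that promote/compact split, with made-up thresholds (the actual values and field names are assumptions):

```python
PROMOTE_AT = 0.8     # certainty at or above this gets promoted (assumption)
COMPACT_BELOW = 0.4  # automatic compaction may only drop entries below this

def promote(entries):
    """Mark high-certainty entries so compaction can't touch them."""
    for e in entries:
        if e["certainty"] >= PROMOTE_AT:
            e["promoted"] = True
    return entries

def auto_compact(entries, override=False):
    """Drop only low-certainty entries; never touch promoted ones unless overridden."""
    kept = []
    for e in entries:
        protected = e.get("promoted") and not override
        if protected or e["certainty"] >= COMPACT_BELOW:
            kept.append(e)
    return kept
```

Because the automatic path can only ever drop entries below the low threshold, the worst case is clutter rather than lost high-confidence memories, which is the "more recoverable" failure mode described above.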

Would be curious what your pruning signal looks like at the turn level — are you scoring relevance per-turn retroactively, or flagging at write time?


Semi-retroactively: my agent has a /compact command, and it's then that I pop the interface. It also opens automatically if the context is full, and I've gone back days later and fed some recorded sessions into it to test things out. Still getting the hang of it, but I won't be surprised to see much bigger teams/companies doing something similar (I assume they already are, really).
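The trigger logic described above could be sketched like this (the token limit and names are illustrative assumptions, not the commenter's setup):

```python
CONTEXT_LIMIT = 128_000  # illustrative context budget in tokens

def should_open_review(command, context_tokens, limit=CONTEXT_LIMIT):
    """Agent- or capacity-initiated, but always routed through the human review UI.

    Returns the trigger reason, or None if no review is needed yet.
    """
    if command == "/compact":
        return "manual"          # explicit user/agent request
    if context_tokens >= limit:
        return "auto"            # context is full, force the review open
    return None
```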


The /compact trigger is a clean pattern — agent-initiated but human-confirmed. Makes the interface feel more like a review than an interruption.

The retroactive feeding of recorded sessions is underrated. That's basically supervised compaction: you're labeling what mattered in hindsight, which is almost always a cleaner signal than in-flight decisions.

I suspect the labs are doing something like this at scale but the hard part is that "what mattered" is user-specific. A generic compaction model trained on aggregate data probably smooths over the individual coherence preferences that make it actually useful.

We ended up open-sourcing the memory layer as an MCP server (engram-mcp) if you're interested in how we handled the certainty/recall side.

Interested in what your session recordings look like structurally: are they raw transcripts, or do you extract structure before feeding them in?



