Hacker News | new | past | comments | ask | show | jobs | submit | warthog's comments

what would be the difference of this vs using an API like Apollo?

Pretty similar, except maybe you'll get lots more nulls, judging by the other comments! Cheaper, but nulls. Will need to work on the recall a bit. But also, depending on use-case feedback, I may look at other niches and features.

This looks awesome. Although from a UX perspective it might not be as good as streaming token by token for text generation use cases. For image gen and editing, however - 100%


Built an in-browser AI managed spreadsheet. https://banker.so/

Too many things I wanted to analyse went nowhere because I was too lazy to fetch information and then put it into a spreadsheet cell by cell. So I developed this to help me extract docs into spreadsheets while also having access to the web.

Now working on a vibe-coded version of it where, instead of showing a spreadsheet, it will generate data-focused tiles and apps.


Tough day to be an AI Excel add-in startup


Ask Rosie is actually shutting down right now: https://www.askrosie.ai/

I would love to learn more about their challenges as I have been working on an Excel AI add-in for quite some time and have followed Ask Rosie from almost their start.

That they have now gone through the whole cycle worries me that I'm too slow as a solo founder building on the side in these fast-paced times.


That seems to be true for any startup that offers a wrapper around existing AIs rather than an AI of its own. The lucky ones might be bought, but many if not most will perish trying to compete with companies that actually create AI models and with companies large enough to integrate their own wrappers.


Actually just wrote about this: https://aimode.substack.com/p/openai-is-below-above-and-arou...

not sure if it's binary like that, but as startups we will probably collect the leftover scraps instead


it's a great time for your AI Excel add-in to start getting acquired by a Claude competitor though


Not OpenAI, though, because they already gave $14M to an AI Excel add-in startup (Endex)


sorry to hear that. curious whether the product's approach was a "fuck you" to Workday, as in you can't even put them side by side and compare (they are so different), or whether it was simply "Workday sucks, we will do better"


Paul Graham also has a good way to frame this which perhaps I should have touched upon.

"A principle for taking advantage of thresholds has to include a test to ensure the game is worth playing. Here's one that does: if you come across something that's mediocre yet still popular, it could be a good idea to replace it. For example, if a company makes a product that people dislike yet still buy, then presumably they'd buy a better alternative if you made one."


I definitely wrote it by hand, no LLM used if that is what you are insinuating.

Uber might have turned to the bad practices used by taxis now that they are focused on extracting more and more value. However, the piece was focused on the earlier days, when they did try to embody price transparency and a genuinely customer-focused experience.

If you are arguing that they never did, I don't see how they grew into a $200 Bn company.


They did it because they were losing money to capture the market.

The smart places knew it, and regulated Uber or killed it, keeping the local taxis working.

Kind of like how a country can subsidize its car industry, export tons of cars, and kill domestic production in another country. Then they can jack up prices and profit on their own terms.

They don't win because their cars are better. They win because they are lying with their prices.

Same with closedai and friends now.


what do you mean by regulations? Ones that benefited Uber, or?


Taxis. This is obvious, having read the blog and post together.


imo it would be better to carry the whole memory outside of inference time, where you could use an LLM as a judge to track the output of the chat and the prompts submitted

it would sort of work like Grammarly itself, and you could use it to metaprompt

i find all the memory tooling, even native ones on claude and chatgpt to be too intrusive
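The judge-outside-inference idea above could be sketched roughly like this. This is a hypothetical illustration, not any product's actual design: the judge here is a stand-in keyword heuristic where a real system would call a cheap LLM, and all names are made up.

```python
# Background "LLM as judge" memory sketch: after each exchange completes,
# a watcher scores it for memorability and only promotes high-scoring
# user messages to long-term memory. Nothing interrupts the chat itself.

from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    entries: list = field(default_factory=list)

    def add(self, fact: str) -> None:
        if fact not in self.entries:
            self.entries.append(fact)

def judge_memorability(user_msg: str, reply: str) -> float:
    """Stand-in for a cheap LLM judge: returns 0..1 for 'worth remembering'.
    A real implementation would prompt a small model with both messages."""
    signals = ("my name is", "i prefer", "always", "never", "remember")
    hits = sum(s in user_msg.lower() for s in signals)
    return min(1.0, hits / 2)

def watch(transcript: list, store: MemoryStore, threshold: float = 0.5) -> None:
    # Runs after the turn completes -- zero interruption to conversation flow.
    for user_msg, reply in transcript:
        if judge_memorability(user_msg, reply) >= threshold:
            store.add(user_msg)
```

The point of the sketch is the shape, not the heuristic: the watcher sits entirely outside the inference path, like Grammarly watching a text field.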


I've been building exactly this. Currently a beta feature in my existing product. Can I reach out to you for your feedback on metaprompting/grammarly aspect of it?


yep hit me up


Totally get what you're saying! Having Claude manually call memory tools mid-conversation does feel intrusive, I agree with that, especially since you need to keep saying Yes to the tool access.

Your approach is actually really interesting, like a background process watching the conversation and deciding what's worth remembering. More passive, less in-your-face.

I thought about this too. The tradeoff I made:

Your approach (judge/watcher):

- Pro: zero interruption to conversation flow
- Pro: can use a cheaper model for the judge
- Con: Claude doesn't know what's in memory when responding
- Con: memory happens after the fact

Tool-based (current Recall):

- Pro: Claude actively uses memory while thinking
- Pro: can retrieve relevant context mid-response
- Con: yeah, it's intrusive sometimes

Honestly, both have merit. You could even do both: a background judge for auto-capture, plus tools for when Claude needs to look something up.

The Grammarly analogy is spot on. Passive monitoring vs active participation.

Have you built something with the judge pattern? I'd be curious how well it works for deciding what's memorable vs noise.

Maybe Recall needs a "passive mode" option where it just watches and suggests memories instead of Claude actively storing them. That's a cool idea.
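A minimal sketch of what that suggested "passive mode" might look like. All names here are hypothetical, not Recall's actual API: the watcher only queues candidate memories, and nothing is stored until the user confirms.

```python
# Suggest-then-confirm memory: the tool never writes memory on its own,
# it just surfaces candidates, which stay out of long-term storage until
# the user explicitly accepts them.

class PassiveMemory:
    def __init__(self):
        self.suggestions = []   # candidate memories awaiting confirmation
        self.stored = []        # confirmed long-term memories

    def suggest(self, fact: str) -> None:
        if fact not in self.suggestions and fact not in self.stored:
            self.suggestions.append(fact)

    def confirm(self, fact: str) -> None:
        # User accepts a suggestion; only then does it become a memory.
        self.suggestions.remove(fact)
        self.stored.append(fact)

    def dismiss(self, fact: str) -> None:
        self.suggestions.remove(fact)
```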


Is this the (or an) agent model routing problem? Which agent or subagent has context precedence?

jj autocommits when the working copy changes, and you can manually stage against @-: https://news.ycombinator.com/item?id=44644820

OpenCog differentiates between Experiential and Episodic memory; and various processes rewrite a hypergraph stored in RAM in AtomSpace. I don't remember how the STM/LTM limit is handled in OpenCog.

So the MRU/MFU knapsack problem, and more predictable primacy/recency bias because of context-length limits and context compaction?
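The MRU/MFU-knapsack framing could be sketched like this: choose which memories fit a fixed context-token budget, scoring each by recency and use frequency. The scoring weights and field names are arbitrary assumptions for illustration, not anything a real system uses.

```python
# Greedy knapsack approximation over memories: rank by score density
# (importance per token) and pack until the context budget is spent.

def pack_context(memories, budget_tokens):
    """memories: list of dicts with 'text', 'tokens', 'last_used', 'uses'."""
    def score(m):
        # Blend recency (MRU) and frequency (MFU); weights are assumptions.
        return 0.6 * m["last_used"] + 0.4 * m["uses"]

    chosen, used = [], 0
    for m in sorted(memories, key=lambda m: score(m) / m["tokens"], reverse=True):
        if used + m["tokens"] <= budget_tokens:
            chosen.append(m["text"])
            used += m["tokens"]
    return chosen
```

Greedy-by-density is the classic knapsack heuristic; the context-length limit is what turns memory selection into a knapsack in the first place.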


OpenCogPrime:EconomicAttentionAllocation: https://wiki.opencog.org/w/OpenCogPrime:EconomicAttentionAll... :

> Economic Attention Allocation (ECAN) was an OpenCog subsystem intended to control attentional focus during reasoning. The idea was to allocate attention as a scarce resource (thus, "economic") which would then be used to "fund" some specific train of thought. This system is no longer maintained; it is one of the OpenCog Fossils.

(Smart contracts require funds to execute (redundantly and with consensus), and there, too, resources are scarce.)

Now there's ProxyNode and there are StorageNode implementations, but Agent is not yet reimplemented in OpenCog?

ProxyNode implementers: ReadThruProxy, WriteThruProxy, SequentialReadProxy, ReadWriteProxy, CachingProxy

StorageNode > Implementations: https://wiki.opencog.org/w/StorageNode#Implementations
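The "attention as a scarce economic resource" idea from the ECAN quote above can be illustrated loosely. This is a toy sketch of the concept, not OpenCog's actual AtomSpace/ECAN implementation, and every name here is made up.

```python
# Economic attention allocation, in miniature: importance (STI) is paid out
# of a fixed pool, so funding one train of thought leaves less for others,
# and a decay step ("rent") returns importance to the pool each cycle.

class AttentionBank:
    def __init__(self, total_funds: float = 100.0):
        self.funds = total_funds          # fixed pool: attention is scarce
        self.sti = {}                     # short-term importance per atom

    def stimulate(self, atom: str, amount: float) -> None:
        grant = min(amount, self.funds)   # can't grant more than the pool holds
        self.funds -= grant
        self.sti[atom] = self.sti.get(atom, 0.0) + grant

    def decay(self, rate: float = 0.1) -> None:
        # Atoms pay a fraction of their importance back into the pool.
        for atom in self.sti:
            tax = self.sti[atom] * rate
            self.sti[atom] -= tax
            self.funds += tax

    def attentional_focus(self, k: int = 2):
        # The top-k atoms by STI form the current attentional focus.
        return sorted(self.sti, key=self.sti.get, reverse=True)[:k]
```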


i think it is too difficult to manage with little upside. setting it up is easy, but managing the nodes etc. is a huge overhead.


