
The iOS compass redesign is particularly egregious

I think you meant the Measure app, but yes. I hate it so much.

Yes, that's the one.

Call me cynical, but I think designers need to occasionally break things that were already solved long ago to justify their continued relevance. That explains a lot of redesigns that only make things worse: reshuffled interfaces, things hidden behind menus, form over function, etc.

Non-tech people tend to think similarly about developers, breaking things that worked fine until yesterday / last week / last month, for no user-visible benefit.

Sometimes that's true.

The exact same thing applies to development. How many things broke because of a React rewrite or a move to microservices?

I wish the average dev would recognise this.


MemoryPlugin (https://www.memoryplugin.com)

Long-term memory for dozens of AI tools, designed for power users who want more control and flexibility than native memory systems and who do not want to be locked into any one platform. You can also have the system remember your entire chat history going back years and use this information to help you better in new chats. It sometimes makes chats 10x more useful when I say something like: “Using the recall tool, do 10+ calls of 1000 tokens of context each to learn about my interests, strengths, curiosities, what I’ve tried in the past, what worked, what didn’t, etc., and suggest a new hobby I would enjoy”.

Without long-term recall, AI is a super-intelligence in your hands that uses the knowledge of the world to give you advice so generic it is nearly useless. With long-term memory, you have a super-intelligence that knows YOU. This is what MemoryPlugin solves for.


I love Tower and have paid for it for years. I can’t imagine using the git CLI now. GUIs were invented for a reason and the git CLI has terrible ergonomics and many ways to make costly mistakes.


I’ve been working on MemoryPlugin (https://www.memoryplugin.com), a tool that adds long-term memory across AI tools.

Lately I’ve been working on a chat-history-based memory feature that can recall information from every conversation you’ve ever had with ChatGPT and Claude. It’s been particularly useful and also technically fun to implement. Speed has been very important, as I do just-in-time summarisation and a multi-stage RAG pipeline, and most LLMs have unacceptable performance. I ended up going with GPT-OSS on Groq due to its ultra-low latency, often completing full generations before the Gemini or ChatGPT APIs return even the first token.
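
To give a sense of the shape of that, here's a minimal sketch of a just-in-time summarisation call through Groq's OpenAI-compatible endpoint. The model id, prompt, and function are illustrative assumptions, not the actual MemoryPlugin pipeline:

    # Minimal sketch: a just-in-time summarisation call via Groq's
    # OpenAI-compatible endpoint. Model id and prompt are assumptions
    # for illustration, not the production pipeline.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.groq.com/openai/v1",
        api_key=os.environ["GROQ_API_KEY"],
    )

    def summarise_chunk(chunk: str) -> str:
        """Condense one slice of chat history into a few memory-worthy facts."""
        resp = client.chat.completions.create(
            model="openai/gpt-oss-120b",  # assumed Groq model id for GPT-OSS
            messages=[
                {"role": "system", "content": "Extract the key personal facts from this chat excerpt."},
                {"role": "user", "content": chunk},
            ],
            temperature=0.2,
        )
        return resp.choices[0].message.content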

The ability to recall details from conversations going back years makes tasks where I want personalised plans or feedback roughly 10x more useful; at times I get the AI to ingest tens of thousands of tokens of context to help me better.


I find the Python tooling so confusing now. There’s pip, virtualenv, pipx, uv, and probably half a dozen others I’m missing. I like Node: npm isolates by default, npx is easy to understand, and the ecosystem is much less fragmented. I see a Python app on GitHub and they’re all listing different package management tools. Reminds me of that competing-standards xkcd.


Node has at least bun, and probably other tools, that attempt to speed things up in similar ways. New tooling is always coming for our languages of choice, even if we aren't paying attention.


> There’s pip, virtualenv, pipx, uv, probably half a dozen others I’m missing...

> Reminds me of that competing standards xkcd.

Yes, for years I've sat on the sidelines avoiding the fragmented Poetry, pyenv, pipenv, pipx, pip-tools/pip-compile, rye, etc., but uv does now finally seem to be the all-in-one solution, succeeding where other tools have failed.


> I see a python app on GitHub and they’re all listing different package management tools.

In general, you can use your preferred package management tool with their code. The developers are just showing you their own workflow, typically.


Well, there are the npm, pnpm, yarn, and bun package managers.

I'm not a Python developer, so I'm not sure it's equivalent, as the npm registry is shared between all of them.


I've been working on a tool to add long term memory to various AI tools. It started last year as a small scratch-your-own-itch side project. After using ChatGPT Plus for a month, I went back to TypingMind, my go-to AI client at the time, but I really missed the memory feature and wanted it there. So I made a simple memory plugin for it.

Over time, the project has grown to support more than 17 platforms and thousands of users, and it has been growing organically.

Most recently, a major feature I've been working on is full chat-history-based memory: being able to remember and recall every conversation you've ever had across multiple supported AI tools, similar to the "reference past conversations" feature in various AI apps. This has been pretty intense and fun: ingesting tens of millions of tokens per user, and doing complex multi-stage RAG on the fly across this vast dataset with a tight latency target for UX.
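
Roughly, the recall side looks something like the sketch below. The function names and stage boundaries are hypothetical, a sketch of the general shape rather than the actual implementation:

    # Hypothetical sketch of a multi-stage recall pipeline: wide vector
    # retrieval, a narrower rerank, then just-in-time summarisation so the
    # final context stays within a small token budget. Not the actual
    # MemoryPlugin implementation.
    from dataclasses import dataclass

    @dataclass
    class Conversation:
        id: str
        text: str

    def recall(query, vector_search, rerank, summarise,
               k_candidates=200, k_final=10):
        # Stage 1: cheap, wide recall over the entire chat history.
        candidates = vector_search(query, limit=k_candidates)
        # Stage 2: more expensive rerank of the shortlist.
        ranked = rerank(query, candidates)[:k_final]
        # Stage 3: summarise the winners just in time and join them
        # into the context that gets injected into the chat.
        notes = [summarise(c.text) for c in ranked]
        return "\n\n".join(notes)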

The project is MemoryPlugin: https://www.memoryplugin.com

Another project is a RAG app built specifically for books. No "we work with your receipts, legal documents, instruction manuals, product documentation, lecture transcripts, your dog's novel, the script for a play, and everything else possible". I wanted something tailored for books, specifically non-fiction books. When you try to work everywhere, you cannot deliver an amazing experience for any one specific use case. AskLibrary is tailored for non-fiction books, so everything from the answer generation pipelines to the ingestion pipeline and various other features is designed for this specific use case. https://www.asklibrary.ai


I think a few of the things you’ve mentioned in the ChatGPT article are hallucinations. There’s no user interaction metadata about topics, average message length, etc.; you asked the AI and it gave you a plausible-sounding answer. Also, the memories / snippets of past conversations aren’t based on the last 30 conversations or so, and they aren’t provided with every message. They are doing some kind of RAG prompt injection, and they remove the injected context in the next message to avoid flooding the context window. The AI itself seems to have no control over what’s injected and when; it’s a separate subsystem doing that injection.
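
The pattern, as far as I can tell from the outside, is something like this (a sketch of my reading of the behaviour, not OpenAI's actual code):

    # Sketch of the injection pattern described above: retrieved memory is
    # prepended to the current turn only, while the persisted history keeps
    # the clean message so later turns aren't flooded with stale context.
    # This is my reading of the observed behaviour, not OpenAI's code.
    def build_request(history, user_message, retrieve):
        injected = retrieve(user_message)  # a separate subsystem picks the snippets
        current_turn = {
            "role": "user",
            "content": f"[Relevant past conversations]\n{injected}\n\n{user_message}",
        }
        request_messages = history + [current_turn]                 # what the model sees this turn
        history.append({"role": "user", "content": user_message})   # what gets stored for next turn
        return request_messages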


In my experience, Qwen 3 coder has been very good for agentic coding with Cline. I tried DeepSeek v3.1 and wasn't pleased with it.


Claude now does have a weekly limit, so if you are able to hit your weekly (undisclosed, dynamic) limit in 2 days, you're unable to use the service for the next 5 days. That is what Cerebras is referencing with "no weekly limits". Claude has session count limits, dynamic limits within each session, and now weekly limits on top of all that.


Please read my full comment.

Cerebras is jumping on a marketing faux pas by Anthropic. I say this for the point you bring up about monthly session limits: no one on the Claude subreddit has yet reported being hit by this, despite many going way over that. These are checks to deal with abusive accounts.


> no one on the Claude subreddit has yet to report being hit by this despite many going way over that

Because it hasn't gone into effect yet: "From August 28, we’ll introduce new weekly limits that’ll mitigate these problems while impacting as few customers as possible." [0]

[0] https://xcancel.com/AnthropicAI/status/1949898514844307953#m


I already explained why it's not a legitimate market differentiation.

