_QrE's comments (Hacker News)

Maybe I'm misunderstanding something, but you can add filters to RSS feeds. What is proposed is pretty much just RSS, except for one specific item. Yes, it's more work on your side, but asking the creator to manage updates for whatever one thing any/every random person is interested in is pretty unrealistic, especially since the people asking for this are going to be explicitly not interested in everything else about the creator.

> There’s no AI to this. No magic. No problems to be solved.

Why would you not involve yourself in the new hotness? You _can_ put AI into this. Instead of using some expression to figure out whether a new article has links to the previous ones in the series / a matching title, you can have a local agent check your RSS feed and tell you if it's what you're looking for, or else delete the article. For certain creators this might even be a sensible choice, depending on how purple their prose is and their preferred website setup.
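The "expression to figure out whether a new article belongs to the series" part needs nothing beyond the standard library. A minimal sketch; the feed contents and title pattern here are made-up placeholders (in practice you'd fetch the feed with urllib.request):

```python
import re
import xml.etree.ElementTree as ET

# A made-up two-item feed standing in for a real creator's RSS.
FEED = """<rss version="2.0"><channel>
  <item><title>My Series, Part 3</title><link>https://example.com/part-3</link></item>
  <item><title>Unrelated musings</title><link>https://example.com/musings</link></item>
</channel></rss>"""

def filter_feed(xml_text, title_pattern):
    """Keep only the items whose <title> matches the given regex."""
    pattern = re.compile(title_pattern, re.IGNORECASE)
    root = ET.fromstring(xml_text)
    return [
        (item.findtext("title"), item.findtext("link"))
        for item in root.iter("item")
        if pattern.search(item.findtext("title") or "")
    ]

hits = filter_feed(FEED, r"my series, part \d+")
```

Point the same function at the real feed on a cron job and you have the "notify me about Part 3" feature without asking the creator for anything.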


> Yes, it's more work on your side

How much work, and is Part 3 gonna be so mindblowing as to be worth it?

> asking the creator to manage updates for whatever

Managing updates in this case is...posting Part 3? Something they were already gonna do? Except now there's also some machine-only endpoint that needs to start returning "Yes" instead of "No"? Doesn't sound like a ton of work.

> Why would you not involve yourself in the new hotness? You _can_ put AI into this.

Because involving yourself with the new hotness just because it is the new hotness is pathetic. I can put AI into this, but why would I? Why would I add all the heft and complexity and stupid natural language bullshit of talking to a computer when I could just press a button that will do this for me deterministically?


How is this different from just curling the endpoint? It seems like you might be asking the producer to be able to execute any arbitrary calculation across their (codebase? website? thing?). The reason it's never been implemented is because it's impossible.


Neat, but isn't packing all this stuff into a bash script overkill? You can pretty easily install and configure some good tools (e.g. crowdsec, rkhunter, an SSH tarpit, or whatever) to cover each of the categories rather than have a bunch of half-measures.

Also, you're calling this TheProtector, but internally it seems to be called ghost sentinel?

> local update_url="https://raw[dot]githubusercontent[dot]com/your-repo/ghost-se..."


I'm not sure how valid most of these points are. A lot of the latency in an agentic system is going to be the calls to the LLM(s).

From the article: """ Agents typically have a number of shared characteristics when they start to scale (read: have actual users):

    They are long-running — anywhere from seconds to minutes to hours.
    Each execution is expensive — not just the LLM calls, but the nature of the agent is to replace something that would typically require a human operator. Development environments, browser infrastructure, large document processing — these all cost $$$.
    They often involve input from a user (or another agent!) at some point in their execution cycle.
    They spend a lot of time awaiting i/o or a human.
"""

No. 1 doesn't really point to one language over another, and all the rest show that execution speed and server-side efficiency are not very relevant. People ask agents a question and do something else while the agent works. If the agent takes a couple of seconds longer because you've written it in Python, I doubt that anyone would care (in the majority of cases at least).
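The "awaiting i/o" point is easy to demonstrate: if each LLM round-trip takes seconds, the interpreter's own overhead disappears into the wait. A toy sketch, with the model call stubbed out by a sleep (and 0.1 s is generous; real calls are often 10-100x longer):

```python
import asyncio
import time

async def fake_llm_call(prompt):
    """Stand-in for a network round-trip to a model provider."""
    await asyncio.sleep(0.1)
    return f"answer to: {prompt}"

async def agent_step(prompts):
    # While one call is in flight the event loop is free, so ten
    # concurrent "LLM calls" cost roughly the same wall time as one.
    return await asyncio.gather(*(fake_llm_call(p) for p in prompts))

start = time.monotonic()
answers = asyncio.run(agent_step([f"q{i}" for i in range(10)]))
elapsed = time.monotonic() - start
```

Ten calls complete in roughly the latency of one; the per-call Python overhead is noise next to the simulated network wait, let alone a real one.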

I'd argue Python is a better fit for agents, mostly because of the mountain of AI-related libraries and support that it has.

> Contrast this with Python: library developers need to think about asyncio, multithreading, multiprocessing, eventlet, gevent, and some other patterns...

Agents aren't that hard to make work, and you can get most of the use (and paying users) without optimizing every last thing. And besides, the mountain of support you have for whatever workflow you're building means that someone has probably already tried building at least part of what you're working on, so you don't have to go in blind.


That's true from a performance perspective but, in building an agent in Go, I was thankful that I had extremely well-worn patterns to manage concurrency, backlogs, and backpressure given that most interactions will involve one or more transactions with a remote service that takes several seconds to respond.

(I think you can effectively write an agent in any language and I think JavaScript is probably the most popular choice. Now, generating code, regardless of whether it's an agent or a CLI tool or a server --- there, I think Go and LLM have a particularly nice chemistry.)


Agents are the orchestration layer, i.e. a perfect fit for Go (or Erlang, or Node). You don't need a "mountain of AI-related libraries" for them, particularly given the fact that what we call an agent now has only existed for less than 2 years. Anything doing serious IO should be abstracted behind a tool interface that can (and should) be implemented in whatever domain specific tooling is required.


I wouldn’t underestimate the impact of having massive communities around a language. Basically any problem you have has likely already been solved by 10 other people. With AI being as frothy as it is, that’s incredibly valuable.

Take, for example, something like being able to easily swap models: in Python it's trivial with litellm. In niche languages you're lucky to even have an official, well maintained SDK.
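For illustration: litellm's real entry point is `litellm.completion(model=..., messages=...)`, which returns OpenAI-format responses regardless of provider. It's stubbed out below so the sketch runs offline, but the shape is the point: swapping providers is just a different model string.

```python
# Stub with the same call shape as litellm.completion; the real thing
# would hit the provider named by the model string ("gpt-4o",
# "claude-3-5-sonnet-20240620", "ollama/llama3", ...).
def completion(model, messages):
    return {"model": model,
            "choices": [{"message": {"content": f"[{model}] ok"}}]}

def ask(model, prompt):
    resp = completion(model=model, messages=[{"role": "user", "content": prompt}])
    return resp["choices"][0]["message"]["content"]

a = ask("gpt-4o", "hello")
b = ask("ollama/llama3", "hello")
```

The rest of the agent never changes when the model does, which is exactly the kind of thing a big ecosystem hands you for free.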


And I wouldn't overestimate a language's popularity. It's mostly a social phenomenon and rarely has anything to do with technical prowess.

I agree that integration with the separate LLMs / agents can and does accelerate initial development. But once you write the integration tooling in your language of choice -- likely a few weeks' worth of work -- then it will all come down to competing on good orchestration.

Your parent poster is right: languages like Erlang / Elixir or Golang (or maybe Rust as well) are better-equipped.


Go still has a much better concurrency story. It’s also much less of a headache to deploy since all you need to deploy is a static binary and not a whole bespoke Python runtime with every pip dependency.


Go is definitely better, but with uv you can install all dependencies including python with only curl


Is that what uv sync does under the hood, just curls over all dependencies and the Python version defined in .python-version?


I think they meant you can use curl to install uv and then you don't need to (manually) install anything else


Yeah that’s what I meant, apologies if unclear


Easy example is Stripe. You can enable 3DS, and you can listen for 'early_fraud_warning' events on a webhook to refund users & close accounts to avoid chargebacks and all the associated fees and reputation penalties.
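A sketch of the webhook side. The event type (`radar.early_fraud_warning.created`) and the `charge` field match Stripe's documented early fraud warning object, but treat the exact shape as an assumption to verify against their docs; signature verification and the actual refund call are left out:

```python
def charge_to_refund(event):
    """Given a parsed Stripe webhook event (a dict), return the charge id
    to refund if it's an early fraud warning, else None. A real handler
    would first verify the payload with stripe.Webhook.construct_event,
    then call stripe.Refund.create(charge=...) and close the account."""
    if event.get("type") != "radar.early_fraud_warning.created":
        return None
    return event["data"]["object"].get("charge")

# A trimmed-down example payload (hypothetical ids):
event = {
    "type": "radar.early_fraud_warning.created",
    "data": {"object": {"charge": "ch_123",
                        "fraud_type": "made_with_stolen_card"}},
}
charge = charge_to_refund(event)
```

Refunding within the warning window is what avoids the chargeback and its fees.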


If I am in a call with someone using Nomi, can I send a message in the call or wherever to disable it, or will I have to ask the person using it to turn it off?


I agree.

> "The real challenge in traditional vector search isn't just poor re-ranking; it's weak initial retrieval. If the first layer of results misses the right signals, no amount of re-sorting will fix it. That's where Superlinked changes the game."

Currently a lot of RAG pipelines use the BM25 algorithm for retrieval, which is very good. You then use an agent to rerank stuff only after you've got your top 5-25 results, which is not that slow or expensive, if you've done a good job with your chunking. Using metadata is also not really a 'new' approach (well, in LLM time at least) - it's more about what metadata you use and how you use them.
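For reference, BM25 itself is simple enough to sketch in a few lines. A bare-bones version with whitespace tokenization, no stemming or stopwords, and the usual k1=1.5, b=0.75 defaults (real systems use a proper analyzer):

```python
import math

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each doc against the query terms with Okapi BM25."""
    toks = [d.lower().split() for d in docs]
    n = len(docs)
    avgdl = sum(len(t) for t in toks) / n
    scores = []
    for t in toks:
        score = 0.0
        for term in query.lower().split():
            df = sum(1 for d in toks if term in d)     # document frequency
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            tf = t.count(term)                          # term frequency
            score += idf * tf * (k1 + 1) / (
                tf + k1 * (1 - b + b * len(t) / avgdl))
        scores.append(score)
    return scores

docs = ["the cat sat on the mat", "dogs chase cats", "quarterly revenue report"]
scores = bm25_scores("cat mat", docs)
best = scores.index(max(scores))
```

You'd take the top 5-25 by this score and only then hand them to the (slow, expensive) LLM reranker.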


If this were true, and initial candidate retrieval were a solved problem, companies where search is revenue-aligned wouldn't have teams of very well paid people looking for marginal improvements here.

Treating BM25 as a silver bullet is just as strange as treating vector search as the "true way" to solve retrieval.


I don't mean to imply that it's a solved problem; all I'm saying is that in a lot of cases, the "weak initial retrieval" assertion stated by the article is not true. And if you can get a long way using what has now become the industry standard, there's not really a case to be made that BM25 is bad/unsuited, unless the improvement you gain from something more complex is more than just marginal.


One thing to remember is that BM25 is purely in the domain of text - the moment any other signal enters the picture (and it ~always does in sufficiently important systems), BM25 alone can literally have 0 recall.


It sounds like you might be looking at the wrong place. There are services like bunny.net and Cloudflare CDN (I'm not affiliated with either, but I use the former) that are really easy to set up and configure, if you've built your site properly (edit for clarification: if you have clearly defined static content, and/or you're using some build system). You don't want to 'run' a CDN, you want to use one.

Configuration depends a lot on the specifics of your stack. For Svelte, you can use an adapter (https://svelte.dev/docs/kit/adapters) that will handle pointing to the CDN for you.

Cloudflare's offering is free, bunny.net is also probably going to be free for you if you don't have much traffic. CDNs are great insurance for static sites, because they can handle all the traffic you could possibly get without breaking a sweat.


Bunny is never free (other than a 14-day free trial). But it is dirt cheap, with a minimum cost of 1 USD/month.


You're right, my bad. I was looking at the CDN tab and it rounds the traffic cost to 0.


This. Normal AirTags are just fine for tracking your stuff.

> "(thiefs use apps to locate AirTags around, and AirTags will warn the thief if an unknown AirTag is travelling with them, for example if they steal your car)"

The reason this was introduced is exactly because people used AirTags to stalk others. Advertising that your product turns that off is basically targeting that specific demographic.


That's true. But the advertised use case (tracking stolen items) is perfectly valid.


[flagged]


Hydraulic presses are sold and they'll do a marvelous job at crushing orphans.


yup. it unfortunately works well in the gun industry


And you can use hammers to brutally murder people as well as to drive in nails. You can use a screwdriver to grievously wound someone besides using it to repair your glasses. The fact that a tool can be used for bad things does not negate the good things it can be used for. Nor does it mean that the maker is responsible if someone chooses to use it for bad things.


'the gun' by cj chivers is an excellent read, i suggest checking it out


FWIW the vast majority of guns sold to civilians never get discharged at anything other than pieces of paper.

(And quite a few never get used at all - "safe queen" is a well-established term for a reason.)


A machine that crushes orphrans[sic]? Shut up and take my money.


> The reason this was introduced is exactly because people used AirTags to stalk others.

There were anti-stalking features from the start. It didn’t stop the media hysteria however.


The index page is a great way to capture emails I'm sure, but I'd like to get an idea of what the product actually does. Regardless, this seems like it could be useful, not just as a way to get a playbook, but also to see how others see your company from the outside.

Small nitpicks: I've registered for a project I'm currently working on (temprary.com), and it seemed to get stuck. Then it refreshed, and I saw an empty page with just the plays / context / powerplays header. I had to click on the 'continue signup' link on one of the emails to have the analysis progress. I've also been sent 3 emails upon registration; 2 of them 'autocorrected' the company name.

From looking at the results, it seems like it makes interesting suggestions, but they feel very 'cookie-cutter', and a little bit off. As an example, it tells me to list the app in the GitHub marketplace (well, something to that effect, at least), and it then also suggests: "Deploy a bot in developer forums to automatically engage with users discussing temporary code management, answering queries, and suggesting Temprary as a solution." - maybe a way to tell the bot to not suggest stuff like that would be good.

I kind of expected a bit more, from the description. I haven't set up social media accounts for Temprary - are you not going to tell me about that, or that I should put some blog posts for SEO, or something else to that effect? Playmaker correctly identifies that there's limited info available about Temprary, and it does a pretty good job of identifying the target market, but it feels like it stops a little bit short of being useful. Are you not going to tell me good ways to build a list for the customer profile that you've identified?

I do however like how everything is presented. I feel like if you give more context to the LLM, and you feed it more info about GTM, Marketing etc, it could come up with better suggestions, and actually present something novel and actionable. I'd be looking at either automating the plans I already have, or finding out some approach that I haven't thought of before; you can expect that I've already asked an LLM about GTM strategies before coming to you. So 'Powerplays' sound interesting, if you could adapt the playbooks and expertise of professionals to whatever I am currently building.

Best of luck.


> I ended up turning that process into its own product: https://sparkflow.ai/

That implies that the product is ready, which it does not seem to be? The product does seem interesting; what do you do to differentiate yourself from other offerings that do basically the same thing? E.g. you mention Discord on the website - are you finding all the relevant Discord channels for me, or do I have to join them myself, and you just monitor my account?


Hi. I should have phrased it differently as I'm not totally ready yet (as you rightly pointed out).

Yes, I find the discussions, let you engage with them through the app, and monitor replies.

Do you mind sharing the competition you're referring to? I've not found any similar solution so far. Thanks


Not sure if I'm allowed to link to stuff, but you can find a bunch of alternatives to offerings like these (the most basic being google alerts), both free and paid, if you look for them - especially on sites that exist to list alternatives.

Some let you link accounts, some give you lists of platforms they scan. I've specifically mentioned Discord because there's no easy way to discover communities without being invited, so if you can actually discover relevant Discord communities and monitor discussions (beyond the few sites that aggregate some popular communities), then you have a USP, since most products don't do that, at least in any capacity that people think is worth paying for.

