
> If we allow ourselves to be seduced by the superficial similarity, we’ll end up like the moths who evolved to navigate by the light of the moon, only to find themselves drawn to—and ultimately electrocuted by—the mysterious glow of a bug zapper.

Woah, that hit hard


There's a term for this - inventing a new primitive. A primitive is a foundational abstraction that reframes how people understand and operate within a domain.

A primitive lets you create a shared language and ritual ("tweet"), compound advantages with every feature built on top, and lock in switching costs without ever saying the quiet part out loud.

The article is right that nearly every breakout startup manages to land a new one.


> I’ve tried to use my feed reader to segregate by 'frequency' before, but I haven't really given it a full trial—it still feels a bit awkward.

I'm in the middle of that myself. I have folders labelled Rarely, Weekly, Frequent and Social. I tend to read most of Rarely and Weekly, as they are the folders I open first. I only open Frequent once I'm done with the others, and there I usually scroll through the titles and read only a few articles. Social is for Mastodon and Bluesky accounts, which I open when I only have 5 minutes to kill and know I won't have time to finish reading long posts/articles.


or switching to the txt version: https://aartaka.me/select-text.txt

Add this as a favorite/bookmark:

  javascript:(function(){document.styleSheets[0].insertRule("* { user-select:text !important }", 1);})();
Extra treat: this other one lets you copy text and open the context menu on pages written by rats who disable them:

  javascript:['copy','cut','paste','contextmenu','selectstart'].forEach(e=>document.addEventListener(e,e=>e.stopImmediatePropagation(),true));

I know crawlies are for sure reading robots.txt because they keep getting themselves banned by my disallowed /honeytrap page which is only advertised there.
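
For anyone who wants to lay the same trap, it's basically one stanza of robots.txt (the /honeytrap path being the example above; the ban itself happens in whatever you use to watch access logs):

  User-agent: *
  Disallow: /honeytrap

Any client that then requests /honeytrap has, by definition, read robots.txt and ignored the Disallow, so it's safe to ban on sight.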

A regular "chatting" LLM is a document generator incrementally extending a story about a conversation between a human and a robot... And through that lens, I've been thinking "chain of thought" seems like basically the same thing but with a film noir styling-twist.

The LLM is trained to include an additional layer of "unspoken" text in the document, a source of continuity that substitutes for the memories or goals the LLM otherwise lacks.

"The capital of Assyria? Those were dangerous questions, especially in this kind of town. But rent was due, and the bottle in my drawer was empty. I took the case."



I made an updated version, which I called "neatocal", that adds more options (and doesn't have the initial popup), including year changes, differing month counts, different start months, etc.

See the "Parameters" list:

https://github.com/abetusk/neatocal?tab=readme-ov-file#param...

Neatnik is a very nice project.


For context, this is the original Keybase guy coming back to make a workalike open-source version:

https://blog.foks.pub/posts/introducing/


I find this to be a real issue with environment variables.

I am trying to create a tool to help see exactly where and by which program any environment variable was set/exported since boot.

This is still in the conceptual phase, but I'm looking into Linux's ftrace to achieve this. Any ideas or pointers are welcome.
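
In the meantime, a much cruder snapshot (not the ftrace tracing I'm after) is to scan /proc/*/environ and see which running processes currently carry a given variable. A minimal Go sketch, with the variable name as a placeholder; note this only shows who *has* the variable now, not who set it:

  // envwho.go: list processes whose environment currently contains a variable.
  package main

  import (
      "bytes"
      "fmt"
      "os"
      "path/filepath"
      "strings"
  )

  func main() {
      target := "MY_VAR" // placeholder; pass the variable you care about
      if len(os.Args) > 1 {
          target = os.Args[1]
      }
      procs, _ := filepath.Glob("/proc/[0-9]*")
      for _, p := range procs {
          env, err := os.ReadFile(filepath.Join(p, "environ"))
          if err != nil {
              continue // no permission, or the process already exited
          }
          for _, kv := range bytes.Split(env, []byte{0}) {
              if strings.HasPrefix(string(kv), target+"=") {
                  comm, _ := os.ReadFile(filepath.Join(p, "comm"))
                  fmt.Printf("%s\t%s\t%s\n", filepath.Base(p), strings.TrimSpace(string(comm)), kv)
              }
          }
      }
  }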


A company as big as LinkedIn should have bots continually accessing their site with uniquely generated passwords etc., and then search for those secrets in logging pipelines, bytes on disk, etc. to see where they get leaked. I know much smaller companies that do this.

Yes, it's easy to fuck up. But a responsible company implements mitigations. And LinkedIn can absolutely afford to do much more.
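
The detection half isn't much code, either. A toy sketch in Go; the canary values and log source are obviously placeholders for whatever the bots actually register and wherever your logs live:

  package main

  import (
      "bufio"
      "fmt"
      "os"
      "strings"
  )

  // Secrets planted by the synthetic accounts; in practice generated per run
  // and looked up from wherever the bots register them.
  var canaries = []string{"canary-password-example", "canary-api-token-example"}

  func main() {
      sc := bufio.NewScanner(os.Stdin) // e.g. pipe a log stream through this
      for n := 1; sc.Scan(); n++ {
          for _, c := range canaries {
              if strings.Contains(sc.Text(), c) {
                  fmt.Printf("canary %q found in plaintext at line %d\n", c, n)
              }
          }
      }
  }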


So sometimes I don't test these projects that much but I did this time. Here are a few thoughts:

My biggest goal was "make sure that my bottleneck is serialization or syscalls for sending to the client." Those are both things I can parallelize really well, so I could (probably) scale my way out of them vertically in a pinch.

So I tried to pick an architecture that would make that true; I evaluated a ton of different options but eventually did some napkin math and decided that a 64-million-entry uint64 array with a single mutex was probably ok[1].
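
Concretely, the core state is roughly the sketch below (in Go, hence the RWMutex; the names, the 8192-wide row stride, and the snapshot shape are simplified and illustrative rather than copy-pasted from the real code):

  package board

  import "sync"

  // One flat slice of cells guarded by a single RWMutex.
  // 64M uint64s is 512 MiB, which fits comfortably in RAM on one box.
  type Board struct {
      mu    sync.RWMutex
      cells []uint64 // 64 * 1024 * 1024 entries, i.e. an 8192x8192 grid
  }

  func NewBoard() *Board {
      return &Board{cells: make([]uint64, 64*1024*1024)}
  }

  // Move overwrites a single cell under the write lock.
  func (b *Board) Move(i int, v uint64) {
      b.mu.Lock()
      b.cells[i] = v
      b.mu.Unlock()
  }

  // Snapshot copies a w-wide, h-tall window (e.g. the 100x100 views sent to
  // clients) under the read lock: h copies of w uint64s, not w*h single reads.
  func (b *Board) Snapshot(x, y, w, h int) []uint64 {
      const stride = 8192 // row width, matching the grid above
      out := make([]uint64, 0, w*h)
      b.mu.RLock()
      defer b.mu.RUnlock()
      for r := 0; r < h; r++ {
          off := (y+r)*stride + x
          out = append(out, b.cells[off:off+w]...)
      }
      return out
  }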

To validate that I made a script that spins up ~600 bots, has 100 of them slam 1,000,000 moves through the server as fast as possible, and has the other 500 request lots of reads. This is NOT a perfect simulation of load, but it let me take profiles of my server under a reasonable amount of load and gave me a decent sense of my bottlenecks, whether changes were good for speed, etc.

I had a plan to move from a single RWMutex to a row-locking approach with 8,000 of them. I didn't want to do this because it's more complicated and I might mess it up. So instead I just measure the number of nanos that I hold my mutex for and send that to a Loki instance. This was helpful during testing (at one point my read lock time went up 10x!), but more importantly gave me a plan for what to do if prod was slow - I can look at that metric and only tweak the mutex if it's actually a problem.
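
The measurement itself is nothing fancy, basically a wrapper around the lock; continuing the sketch above, with the metric-shipping function as a stand-in for the real Loki plumbing:

  import "time"

  // recordLockHold is a stand-in for whatever ships the metric to Loki.
  func recordLockHold(kind string, d time.Duration) { /* push d.Nanoseconds() */ }

  // withWrite runs f under the write lock and records how long the lock was
  // actually held - the number that decides whether row locks are worth it.
  func (b *Board) withWrite(f func()) {
      b.mu.Lock()
      start := time.Now()
      f()
      held := time.Since(start)
      b.mu.Unlock()
      recordLockHold("write", held)
  }

  func (b *Board) withRead(f func()) {
      b.mu.RLock()
      start := time.Now()
      f()
      held := time.Since(start)
      b.mu.RUnlock()
      recordLockHold("read", held)
  }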

I also took some free wins like using protobufs instead of JSON for websockets. I was worried about connection overhead so I moved to GET polling behind Cloudflare's cache for global resources instead of pushing them over websockets.

And then I got comfortable with the fact that I might miss something! There are plenty more measurements I could have taken (if there was money on the line I would have measured some things like "number of TCP connections sending 0 moves this server can support" but I was lazy) but...some of the joy of projects like this is the firefighting :). So I was just ready for that.

Oh and finally I consulted with some very talented systems/performance engineer friends and ran some numbers by them as a sanity check.

It looks like this was way more work than I needed to do! I think I could comfortably 25x the current load and my server would be ok. But I learned a lot and this should all make the next project faster to make :)

[1] I originally did my math wrong and modeled the 100x100 snapshots I send to clients as 10,000 reads from main memory instead of 100 copies of 100 uint64s, which led me down a very different path... I'm not used to thinking about this stuff!


Even AWS itself does better than this, but only on some services. They send an encrypted error which you can then decrypt with admin permissions to get those details.
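
If it's the mechanism I'm thinking of, that's the encoded authorization failure message some services return, which you expand via STS once you have the decode permission:

  aws sts decode-authorization-message --encoded-message <blob-from-the-error>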

> Though abs() returning negative numbers is hilarious.

Math.abs(Integer.MIN_VALUE) in Java very seriously returns -2147483648, as there is no int for 2147483648.


One thing I've learned when buying a full set of appliances a couple of years ago: don't read consumer reports or reviews by randos on the internet -- instead, go to industry literature, and read reports by/for service and warranty providers. They have actual hard data on the types and frequency of problems across brands and models.

But back to the main theme of the article: hell to the no was my initial attitude, and I went out of my way to make sure my appliances were as simple as possible. Still, three out of the five were "wifi-enabled" and promised a world of app-enhanced wonders. Needless to say, none of these ever even got anywhere near being set up, and I think I am lucky that all the normal, expected appliance features work without requiring these extras.

The idea of remotely preheating my oven while I am not home still makes me shudder.


> Monster's counsel had made a horrible mistake, and probably caused lasting harm to the company, by sending me that ridiculous letter

This sort of thing always reminds me of the Jack Daniels cease and desist letter[0], which, at least for me, had the exact opposite effect.

0. https://brokenpianoforpresident.wordpress.com/2012/07/19/jac...


You can also slap this in your <head> to make your feed discoverable by readers.

    <link rel="alternate" type="application/rss+xml" title="fastcall's Blog" href="/index.xml">

They have a brief paragraph here [1] that indicates they considered these and decided against it: "The team considered several alternative ownership solutions to address these questions, including converting the business to a German non-profit and establishing a foundation. Both of these solutions had constraints, though, and neither offered the mixture of entrepreneurial flexibility and structure security they sought."

A Stiftung (foundation) generally works well if you have a bunch of money and you basically have a fixed "algorithm" that you want to execute around the money, like: "Invest it all into an index fund, and in any year in which the fund returns a profit, pay out the profits to the family members of person X in the same ratios that would apply if those people came into X's inheritance". You then appoint a bunch of lawyers to serve as the board of the foundation. Because the "algorithm" is so precisely defined, the set of circumstances where the lawyers do their job wrong is well-defined and constitutes a breach of their fiduciary duty. There's basically no room for making entrepreneurial decisions along the way. It's a bit like taking a pile of money, putting it on a ship, putting the ship on autopilot, and giving up any and all direct control of the ship. Depending on what precisely that "algorithm" actually is, this might get you tax advantages. Or it might create non-financial positive outcomes you might be trying to achieve, like making sure that your progeny will continue to enjoy the wealth you created for many generations to come while limiting the probability that any one generation can screw it all up for the later generations.

Social entrepreneurship is different from that: A social entrepreneur wants the goodwill and favourable tax treatment that comes from giving up their claim to ownership of the money generated by the business (this is what the gGmbH status does; it's a bit like 501(c)3 in the U.S.) -- But they want to retain control over the business. They want to make entrepreneurial decisions as they go, changing strategies along the way in whatever way they please, without restricting themselves too much to the execution of any predefined programme.

[1] https://purpose-economy.org/en/companies/ecosia/


How so? Isn’t this just the xkcd authorization model?

https://xkcd.com/1200/

I tried to read the article, and I know what all the words mean (sel4, enclaves, virtualization primitives, etc.).

It all seems very complicated and error prone, but I couldn’t figure out what the attack model is, or what the security objectives are.

E.g., what sorts of things run in exclaves, and under what circumstances do they stay protected from a persistent kernel-level compromise on my laptop?


In an industrial setting, these robots would be placed behind interlocked barriers and you wouldn't be able to approach them unless they were de-energized. Collaborative robotics (where humans work in close proximity to unguarded robots) is becoming more common, but those robots are weak / not built with the power to carry their 50kg selves around, and they have other industry-accepted safeguards in place, like sensors to halt their motion if they crash into something.

As an Electron maintainer, I'll reiterate a warning I've given many people before: Your auto-updater and the underlying code-signing and notarization mechanisms are sacred. The recovery mechanisms for the entire system are extremely painful and often require embarrassing emails to customers. A compromised code-signing certificate is close to the top of my personal nightmares.

Dave and toDesktop have built a product that serves many people really well, but I'd encourage everyone building desktop software (no matter how, with or without toDesktop!) to really understand everything involved in compiling, signing, and releasing your builds. In my projects, I often make an argument against too much abstraction and long dependency chains in those processes.

If you're an Electron developer (like the apps mentioned), I recommend:

* Build with Electron Forge, which is maintained by Electron and uses @electron/windows-sign and @electron/osx-sign directly. No magic.

* For Windows signing, use Azure Trusted Signing, which signs just-in-time. That's relatively new and offers some additional recovery mechanisms in the worst case.

* You probably want to rotate your certificates if you ever gave anyone else access.

* Lastly, you should probably be the only one with the keys to your update server.


My favorite alias is `git out`, which just lists all unpushed commits. I use it all the time.

  [alias]
    out = "log @{u}.."
In my head I always hear it in the voice of The Terminator: https://youtu.be/8cdC1Y5oRFg?t=54

A war room / call is for coordination: for when you need the person draining the bad region to know "oh, that other region is bad, so we can't drain too fast" or "the metrics look better now".

For truly understanding an incident? Hell no. That requires heads down *focus*. I've made a habit of taking a few hours later in the day after incidents to calmly go through and try to write a correct timeline. It's amazing how broken people's perceptions of what was actually happening at the time are (including my own!). Being able to go through it calmly, alone, has provided tons of insight.

It's this type of analysis that leads to proper understanding and the right follow-ups rather than knee-jerk reactions.


I like this!

It's fun to compare it to "A blog post with every HTML element" [1][2], which gets at a (very!) similar thing but in a very different way. This post primarily shows, and is a little more chaotic (meant positively!), whereas the other post is much more prose and explanation heavy (also good, but very different).

[1] https://www.patrickweaver.net/blog/a-blog-post-with-every-ht...

[2] HN discussion: https://news.ycombinator.com/item?id=37104742


Strictly speaking:

1. This is true for did:web but less true for did:plc identities.

2. For did:plc identities to survive a full "bluesky PBC" death, you'd need to transfer master authority for your PLC identity to a set of keys you control. If you don't, then ultimately bluesky PBC would still have final authority over your identity. But if you transfer control to your own keys ahead of time, then you can use those keys to make changes long after bluesky PBC's death.


I disagree; you can have an XML feed that does both.

For example: https://andrewstiefel.com/feed.xml


Speaking of advocating RSS, I was trying out Nikola [0] for static site generation and found that they have a really nice-looking RSS endpoint [1] that is viewable both from the browser and an RSS reader. Looking into the XML, it turns out it's done via xml-stylesheet:

    <?xml-stylesheet type="text/xsl" href="assets/xml/rss.xsl" media="all"?>
And I would argue that this is an excellent way to introduce new readers to RSS: instead of the browser popping up a download prompt, you can make your RSS feeds themselves a dedicated page for advocating RSS, in case an interested reader is browsing through the links on your site.

[0] https://getnikola.com/

[1] https://getnikola.com/rss.xml (Open it in your browser!)

[2] https://github.com/getnikola/nikola/blob/master/nikola/data/...


Note that (US) governments in particular offer tons of RSS feeds.

Want to keep tabs on what Congress is up to? https://www.govinfo.gov/rss/bills.xml

Want to follow SEC press releases? https://www.sec.gov/news/pressreleases.rss

In WA state and want to follow bills related to schools? https://app.leg.wa.gov/bi/report/topicalindex/?biennium=2025...

The federal government has a big list at https://www.govinfo.gov/feeds. Your county might also have one (e.g. Spokane has https://www.spokanecounty.org/rss.aspx).


About "people still thinking LLMs are quite useless", I still believe that the problem is that most people are exposed to ChatGPT 4o that at this point for my use case (programming / design partner) is basically a useless toy. And I guess that in tech many folks try LLMs for the same use cases. Try Claude Sonnet 3.5 (not Haiku!) and tell me if, while still flawed, is not helpful.

But there is more: a key thing with LLMs is that their ability to help, as a tool, changes vastly based on your communication ability. The prompt is king: it makes those models 10x better than they are with a lazy one-liner question. Drop your files in the context window; ask very precise questions explaining the background. They work great to explore what is at the borders of your knowledge. They are also great at doing boring tasks for which you can provide perfect guidance (but that would still take you hours). The best LLMs out there (in my case just Claude Sonnet 3.5, I must admit) are able to accelerate you.

