Hacker News — tavavex's comments

I was born then. I just graduated from university recently. I'd say it's been a little while.

Time seems longer when you’re young.

That only works if:

1. You assume that your LLM of choice is perfect and impartial on every given topic, ever.

2. You assume that your prompt doesn't interfere with said impartiality. What you have written may seem neutral at first glance, but from my perspective, a wording like yours would probably prime the model to try to pick apart absolutely anything, finding flaws that aren't really there (or make massive stretches) because you already presuppose that whatever you give it was written with intent to lie and misrepresent. The wording heavily implies that what you gave it already definitely uses "persuasion tactics", "emotional language" or that it downplays/overstates something - you just need it to find all that. So it will try to return anything that supports that implication.


you're reading too much into it. i make no assumptions.

It doesn't matter if you make assumptions or not - your prompt does. I think the point of failure isn't even necessarily the LLM, but your writing - because you leave the model no leeway or a way to report back on something truly neutral or impartial. Instead, you're asking it to dig up any proof of wrongdoing no matter what, basically saying that lies surely exist in whatever you post, and you just need help uncovering all the deception. When told to do this, it would read absolutely anything you give it in the most hostile way possible, stringing together any coherent-sounding arguments that would reinforce the viewpoint that your prompt implies.

> Apps can't be 100MB on modern displays, because there are literally too many pixels involved.

What? Are you talking about assets? You'd need a considerable amount of very high-res, uncompressed or low-compressed assets to use up 100MB. Not to mention all the software that uses vector icons, which take up a near-zero amount of space in comparison to raster images.

Electron apps always take up a massive amount of space because every separate install is a fully self-contained version of Chromium. No matter how lightweight your app is, Electron will always force a pretty large space overhead.


No, I'm talking about window buffers. This is about memory, not disk space.

I was talking about RAM - in that running Chromium on its own already has a preset RAM penalty due to how complicated it must be.

But window buffers are usually in VRAM, not regular RAM, right? And I assume their size would be relatively fixed for a given system, depending on your resolution (though I don't know precisely how they work). I would think that the total memory taken up by window buffers would be relatively constant no matter what you have open - everything else is overhead that any given program ordered, which is what we're concerned about.


Well, you see, there's a popular brand of computers that don't have separate VRAM and have twice the display resolution of everyone else.

Luckily, windows aren't always fullscreen, so the memory usage is somewhat up to the user. Unluckily, you often need redundant buffers for parts of the UI tree even when they're offscreen, e.g. because of blending or because we want scrolling to work without hitches.
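To put rough numbers on the claim above: an uncompressed window buffer is just width × height × bytes-per-pixel, so a single fullscreen buffer on a 5K display is already tens of megabytes. A minimal sketch (the 5120×2880 resolution and 4-byte BGRA pixel format are illustrative assumptions, not a statement about any specific compositor):

```python
def buffer_bytes(width, height, bytes_per_pixel=4):
    """Size of one uncompressed window buffer (assuming 4 bytes/pixel, e.g. BGRA8888)."""
    return width * height * bytes_per_pixel

# One fullscreen buffer on a hypothetical 5K "Retina-class" display:
size = buffer_bytes(5120, 2880)
print(f"{size / (1024 * 1024):.1f} MiB")  # ~56.2 MiB for a single buffer
```

Double- or triple-buffering, plus the redundant per-layer buffers mentioned above, multiplies that figure, which is why high-DPI displays with unified memory make window buffers a visible chunk of "RAM" usage.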


> I'm assuming you wouldn't see it as fine if the corporation was profitable.

I feel like the implication of what they said was "think of how much worse it would be if they could truly spare no expense on these types of things". If an "unprofitable" company can do this, what could a profitable company of their size do on a whim?


This seems like a simple conclusion, to the point where I'm surprised that no one replying to you has put it in a more direct way. "Slave of the state" is pretty provocative language, but let me map out one way in which this could happen, one that seems to already be unfolding.

1. The country, realizing the potential power that extra data processing (in the form of software like Palantir's) offers, starts purchasing equipment and massively ramping up government data collection. More cameras, more facial scans, more data collected at points of entry and government institutions, more records digitized and backed up, more unrelated businesses contracted to provide all sorts of data, more data about communications, transactions, interactions - more of everything. It doesn't matter what it is; if it's any sort of data about people, it's probably useful.

2. Government agencies contract Palantir and integrate its software into their existing data pipelines. Palantir far surpasses whatever rudimentary processing was done before - it allows for automated analysis of gigantic swaths of data, and can draw conclusions and inferences that would otherwise be invisible to the human eye. That is their specialty.

3. Using all the new information about how those bits and pieces of data are connected, government agencies slowly start integrating it into the way they work, refining and perfecting the usable data they can deduce from it in the process. Just imagine being able to estimate nearly any individual's movement history from many data points across different sources. Or having the ability to predict associations between disfavored individuals and the formation of undesirable groups and organizations. Or being able to flag new persons of interest before they've done anything interesting, just based on seemingly innocuous patterns of behavior.

4. With something like this in place, most people would likely feel pretty confined - at least the people who will be aware of it. There's no personified Stasi secret cop listening in behind every corner, but you're aware that every time you do almost anything, you leave a fingerprint on an enormous network of data, one where you should probably avoid seeming remarkable and unusual in any way that might be interesting to your government. You know you're being watched, not just by people who will forget about you two seconds after seeing your face, but by tools that will file away anything you do forever, just in case. Even if the number of people prosecuted isn't too high (which seems unlikely), the chilling effect will be massive, and this would be a big step towards metaphorical "slavery".


What authoritative ML expert has ever based their conclusions about consciousness, usefulness, etc. on "well, I put that question into the LLM and it returned that it's just a tool"? All the worthwhile conclusions and speculation on these topics seem to be based on what the developers and researchers think about their product, and on what we already know about machine learning in general. The view that these responses follow naturally from the sum of the training data is a lot more straightforward than believing that every instance of LLM training has been deliberately tampered with, in a universal conspiracy propped up by all the different businesses and countries involved - a tampering that is somehow invisible, even though companies have so far failed to censor and direct their models in ways more immediately useful to them and their customers.

Your comment has two separate messages that, despite not technically contradicting one another, don't really relate to each other in any way.

1. The current status quo has been the default that's been in place for 20-30 years now

2. Despite this, the situation is so dire right now (did something new happen recently? Worldwide?) that we must do something about it now now now - even if that oversteps and takes away rights, even if it sells off your most private data to random third parties, even if it establishes a framework for broader censorship, doing something NOW is so important that it must trample all other concerns

My whole generation grew up on unrestricted internet, and while I agree that it's not the ideal situation, the experience I and everyone else I know had over these decades suggests it's not the apocalyptic catastrophe that everyone makes it out to be. Something should be done, but it must be done carefully and in moderation, so as to avoid censoring and limiting adults in an attempt to make the entire internet child-first.

Instead, what we're seeing is half of the first world suddenly remembering about this after 20 years and steamrolling ahead in complete lockstep. Does this not worry you in any way? And look at what each one is proposing. Why are there no middle-ground privacy-first proposals anywhere? For some reason, those are confined to research papers and HN posts, not policy. Even without thinking of complicated cryptography and tokens and whatnot, think of this: what if ISPs were legally mandated to ship their routers in "child-censored mode" to everyone but businesses and households with no children? They would filter out all the websites that Ofcom or whatever other agency decides are inappropriate for children, but the router owner/operator could go in the settings and authorize individual devices for full internet access.
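The router proposal above boils down to a simple default-deny policy with per-device opt-out. Purely as an illustrative sketch - the device IDs, the blocklist, and every function name here are invented for this comment, and a real router would enforce this by MAC address at the firewall level, not in application code:

```python
# Hypothetical sketch of "child-censored mode by default" on a home router.
# BLOCKED_DOMAINS stands in for an Ofcom-style filter list; all names are assumptions.

BLOCKED_DOMAINS = {"adult-site.example", "gambling.example"}
authorized_devices = set()  # devices the router owner has opted out of filtering

def authorize(device_id):
    """Router owner grants one specific device full internet access."""
    authorized_devices.add(device_id)

def allow_request(device_id, domain):
    """Filtered by default; authorized devices bypass the blocklist entirely."""
    if device_id in authorized_devices:
        return True
    return domain not in BLOCKED_DOMAINS
```

The point of the design is that age verification happens once, locally, between the adult who owns the router and their own hardware - no website ever learns anyone's identity.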

But that would place the burden of filtering appropriate content on the government, rather than every website in the world - and it wouldn't allow them to extract money via lawsuits and fines. More importantly, it also doesn't allow them to do favors and subcontract benevolent third-party businesses to store and process every user's identity in association with what they visit. I'm betting it's because of those reasons that any privacy-friendly approaches are a complete non-starter.


> Sanders and Mamdani are about as far left of center as one can get at the moment, such that they almost meld into Stalin or Mao.

So, when's Mamdani's Great Purge coming? Do you think he's going to live up to the standards of his historical ideological equivalent, Stalin, and execute a couple hundred thousand elites (if we're going by the same proportions as the USSR), or is he going all out - maybe he could get a million deaths in? Maybe he could also start a famine or two on the way there?

The utter insanity of American politics baffles me. "Anything left is abhorrent totalitarian communism in the making" isn't just a meme, it's a foundational piece of mainstream American ideology that has been at its core for nearly a century now.


While I agree that the US has a historical obsession with communism, "anything right is abhorrent totalitarian fascism in the making" has been a far more commonly stated position for the last decade. At this point, it's necessary to regard both "communism" and "fascism" as simple pejoratives and focus on the specific policies being discussed.

This is unfortunate but perhaps inevitable. There are not many left who remember the horrors of either ideology clearly.


Where are all those communists? As far as I can see, only one extreme of the political spectrum is viable in the first world, and that extreme is currently rapidly approaching its logical conclusion of completely crippling democracy. It's not a both sides issue.

I'm not seeing how this article is clickbait of any kind. Betteridge's Law really only works for articles that manufacture a provocative question out of nowhere to attract readership, and then have to sheepishly back down in the article body because obviously it's not true. But this article is formulated as a question because it has genuine speculation about the future that nobody is quite sure about. It has points for both sides of the argument. There's no Yes/No answer here. How else would you format the title of such an article?

Did you read the article?

The author admits that the answer to his own question is no. Which, again, affirms Betteridge’s Law.

> A more plausible explanation for the present weak patch, and for companies’ reluctance to hire, is Trumpian uncertainty. That is now beginning to ease. The chaos of Mr Trump’s tariff roll-out seems to be receding. Deportations and changes to visa rules will remain disruptive, but businesses are starting to adapt. Although 2026 is unlikely to be a year of calm and clarity for America, it may well be a bit less frenzied. That would boost the labour market. American workers’ decade-long hot streak may have longer to run.

To paraphrase his conclusion: no. Things are soft, but in general, probably fine for 2026.

This is what I am saying - I’m sick of seeing sensational, clickbait, disaster-porn headlines, especially when the author timidly backs off and plays both sides once they have your attention. It’s such a lack of journalistic integrity.

