Hacker News | ETH_start's comments

It's very possible that his partner is fully aware of and supportive of his mission. And I do agree that he should ensure that this is something his afflicted partner wants.

One point I want to make though is that even if someone embarks on a mission like this and fails, what they learn in the process — and uncover for the world at large — can help the next generation. It's not futile. It's not in vain.


Please be careful not to put words in my mouth.

Yes, I was just adding, not contradicting.

Saying "investors with hundreds of billions decided it" makes it sound like a few people just chose the outcome, when in reality prices and capital move because millions of consumers, companies, workers, and smaller investors keep making choices every day. Big investors only make money if their decisions match what people actually want; they can't just command success. If they guess wrong, others profit by allocating money better, so having influence isn't the same as having control.

The system isn't mathematically perfect, but that doesn't make it arbitrary. It works through an evolutionary process: bad bets lose money, better ones gain more resources.

Any claim that the outcome is suboptimal only really means something if the claimant can point to a specific alternative that would reliably do better under the same conditions. Otherwise critics are mostly just expressing personal frustration with the outcome.


No, automation doesn't reduce jobs, and so doesn't reduce consumer spending, since consumer spending is determined by output, which automation boosts.

The savings from automation in a particular sector are spent elsewhere — wherever services are more costly (in labor). That's the dynamic behind Say's law, which shows that spending on less automatable jobs like barbers and physical therapists increases as automation reduces costs in other sectors of the economy.
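That reallocation dynamic can be put in concrete numbers. A toy sketch (all figures are illustrative assumptions, not data from the thread):

```python
# Toy model: a consumer's fixed monthly budget reallocates as
# automation cuts prices in one sector (Say's-law-style reasoning).
# All numbers are made-up assumptions for illustration.

budget = 1000.0          # monthly spending, held constant
food_spend = 400.0       # pre-automation spending on food
other_spend = budget - food_spend

# Suppose automation cuts food prices 25% and the consumer
# buys the same food basket as before.
food_spend_after = food_spend * (1 - 0.25)
savings = food_spend - food_spend_after   # income freed up

# The freed-up income is spent on less-automatable services
# (haircuts, physical therapy, etc.).
services_after = other_spend + savings

assert food_spend_after + services_after == budget  # total demand unchanged
print(savings)  # 100.0 shifted toward labor-intensive services
```

The point of the sketch is only that total spending doesn't shrink; it shifts toward the sectors where labor remains costly.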


I understand this is a well-developed economic theory and I am completely uninformed, but this doesn't make intuitive sense at all.

If 1 million prep cooks are replaced by robots, will food become cheap enough that those prep cooks can all get jobs as barbers, and the money people spend on food will shift to haircuts?

Will the food be so cheap that all those prep cooks can afford to learn to cut hair?

Also consider the money velocity of a human vs a robot. A human is probably living paycheck to paycheck, spending everything they earn. Robot earnings go back to the company, which makes the stock go up, 90% of which is owned by billionaires who just keep hoarding and hoarding.


Adaptation does not require mass retraining into new professions; it happens through task simplification, AI-augmented shallow competence (less qualified people can do more advanced work), partial work, income stacking, and lower subsistence costs. As automation advances, less-automatable sectors (personal services, care, local physical work) see wage pressure rise, consistent with Say's Law, because yes, what people save at restaurants is spent instead at barbers, massage therapists, nail technicians, etc.

As for the gains from robotics, they go just as much to workers as to investors. Remember, investors are competing with each other, so they have to keep cutting prices. And that means workers see their wages buy more goods and services, given those goods and services cost less to buy. When wages buy more, that's effectively the opposite of inflation. In inflation-adjusted terms, that equates to a wage hike.
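The real-wage point above is simple arithmetic. A quick sketch with made-up numbers:

```python
# If nominal wages hold steady while prices fall, the real wage rises.
# Numbers below are illustrative assumptions, not data.

nominal_wage = 20.0   # dollars per hour, unchanged
price_before = 1.00   # price index before automation
price_after = 0.90    # automation cuts prices 10%

real_wage_before = nominal_wage / price_before   # 20.0
real_wage_after = nominal_wage / price_after     # ~22.22

hike = real_wage_after / real_wage_before - 1
print(round(hike * 100, 1))  # ~11.1% effective (inflation-adjusted) raise
```

Note that a 10% price cut yields slightly more than a 10% real gain, since the wage is divided by the lower price level.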


Bingo. Overall it's a massive plus.


It takes real effort to maintain a solid understanding of the subject matter when using AI. That is the core takeaway of the study to me, and it lines up with something I have vaguely noticed over time. What makes this especially tricky is that the downside is very stealthy. You do not feel yourself learning less in the moment. Performance stays high, things feel easy, and nothing obviously breaks. So unless someone is actively monitoring their own understanding, it is very easy to drift into a state where you are producing decent-looking work without actually having a deep grasp of what you are doing. That is dangerous in the long run, because if you do not really understand a subject, it will limit the quality and range of work you can produce later. This means people need to be made explicitly aware of this effect, and individually they need to put real effort into checking whether they actually understand what they are producing when they use AI.

That said, I also think it is important to not get an overly negative takeaway from the study. Many of the findings are exactly what you would expect if AI is functioning as a form of cognitive augmentation. Over time, you externalize more of the work to the tool. That is not automatically a bad thing. Externalization is precisely why tools increase productivity. When you use AI, you can often get more done because you are spending less cognitive effort per unit of output.

And this gets to what I see as the study's main limitation. It compares different groups on a fixed unit of output, which implicitly assumes that AI users will produce the same amount of work as non-AI users. But that is not how AI is actually used in the real world. In practice, people often use AI to produce much more output, not the same output with less effort. If you hold output constant, of course the AI group will show lower cognitive engagement. A more realistic scenario is that AI users increase their output until their cognitive load is similar to before, just spread across more work. That dimension is not captured by the experimental design.


I'd like to see the same kind of scrutiny of private information handling by conventional government departments.

https://www.moneyness.ca/2024/07/your-finances-are-being-sno...

>472 different U.S. law enforcement agencies at the Federal, state, and local levels have the ability to directly query FinCEN's database of CTRs, suspicious activity reports, and more. This amounts to around 14,000 law enforcement officers who can search through the personal financial data of American citizens. In 2023, these 14,000 users conducted 2.3 million searches using FinCEN's query tool.

>FinCEN's data can also be downloaded in bulk form to the in-house servers of eleven different federal agencies, including the FBI, ICE, and the IRS. Bulk access (also known as Agency Integrated Access) means that the FBI, ICE, IRS, and eight other agencies don't need to use FinCEN's query tool. This bulk data can be accessed by another 35,000 agents. Alas, FinCEN doesn't track how many in-house searches were conducted by these agents in 2023, but I'd guess it's in the tens if not hundreds of millions.


Yes, but in this case we have caught someone red-handed. Why bring this other stuff up, other than to distract from this known abuse? Sticking with the topic at hand: do you think we should prosecute those involved in this theft of data?


When private information is shared as widely by government bodies as it currently is, I think it's important to point out selective coverage of privacy violations while the systemic encroachment on privacy rights by 20-plus agencies continues unabated.

In fact, a contrarian political entity created by an outside party — as the DOGE is/was — may be the kind of shock to major institutions that could lead to real positive change, in terms of greater transparency into and accountability over how these major institutions operate.


You won't; at least not in the Grauniad.


It would be interesting to see the list of past trees. The most famous I can think of is Donar's Oak (also called Thor's Oak), which was revered by Germanic pagans and felled by Saint Boniface.


>US on the other hand, has flatlined to the point where we think stuff like trans athletes in sports are a drastic enough reason to elect a president who is a convicted Felon.

This is very one-sided and unfair. The trans stuff is indicative of a larger social movement. For example, in the U.S. it would be illegal to use IQ tests to hire employees, while in China that's common practice. China is far more meritocratic. The U.S. is driven far more by ideology, and the trans stuff is an example of that.

And someone on the other side of the aisle would point to the prosecution of Donald Trump as politically motivated, where opponents found an obscure law that he violated and charged him with 34 counts based on the 34 forms he submitted with the expense mislabelling.


> China is far more meritocratic. The U.S. is driven far more by ideology, and the trans stuff is an example of that.

I'm guessing you never lived and worked in China before? People who get jobs because of guanxi are not rare, even today, and ideology is far more important in China than in the US, it is just that the ideology is very different from what people are used to in the states.


China definitely relies on ideology quite a bit; the difference is that the government controls that ideology, because they understand correctly that the people can't be trusted.


It is absolutely not illegal in the US to use IQ tests to hire. This is a persistent Internet myth.


The Chinese government’s territorial claims in the South China Sea show near-total disregard for international law. China has constructed heavily militarized artificial islands roughly 200 kilometers from the Philippine coast — and more than 1,000 kilometers from the Chinese mainland — in order to assert control over waters that, under the UN Convention on the Law of the Sea and a binding 2016 ruling by an international arbitral tribunal, lie squarely within the Philippines’ exclusive economic zone. China lost the case on the merits and simply rejected the ruling.


It's always the same pattern. Point to a genuine evil and then use that as justification to strip everyone of their rights.


Like gun control.


To US gun folk, the right to life (not to be shot) never seems as important as the right to take a life; it seems mad from the outside.


I'm on the "outside" of this argument (never owned a gun and not in the US), but the right to life (not to be shot) can be exercised by protecting oneself from guns, with a gun.

Here we're discussing how attacks against privacy are totalitarian and how more and more governments are on their way to become totalitarian regimes, but we don't agree that people having guns is a good defense against a totalitarian government. We talk about police or ICE overreach, but don't talk about what would happen if that overreach expands even more.


That's kind of a jump. The 2a is cool, but gun deaths outpace car deaths now and 2a people refuse literally any of the protections we have against car deaths. Whereas a 15 year old jerking it to a pornstar hurts no one and these people want to completely ban the 4th amendment.

