
If a common AI tool produces LaTeX documents, the association will be created, yeah. Right now LaTeX would be a strong indicator of manual effort, right?

I don't think so. I think LaTeX was one of academics' earliest use cases for ChatGPT, back in 2023. That's when I started noticing tables in every submitted paper looking far more sophisticated than they ever had. (The other early use case, of course, was grammar/spelling. Overnight, everyone became fluent and typos disappeared.)

It's funny, I was reading a bunch of recent papers not long ago (I haven't been in academia in over a decade) and I was really impressed with the quality of the writing in most of them. I guess in some cases LLMs are the reason for that!

I recently got wrongly accused by a reviewer of using LLMs to help write an article. He complained that our (my co-worker's and my) use of "to foster" read "like it was created by ChatGPT". (If our paper was fluent and eloquent, that's perhaps because an M.A. in English literature helped with that.)

I don't think any particular word alone can be used as an indicator for LLM use, although certain formatting cues are good signals (dashes, smileys, response structure).

We were offended, but kept quiet to get the article accepted, and we changed some instances of certain words to appease them (which thankfully worked). But the false accusation left a bit of a bad aftertaste...


If you’ve got an existing paragraph written that you just know could be rephrased more eloquently, and can describe the type of rephrasing/restructuring you want… LLMs absolutely slap at that.

LaTeX is already standard in fields that have math notation, perhaps others as well. I guess the promise is that "formatting is automatic" (asterisk), so its popularity probably extends beyond math-heavy disciplines.

> Right now LaTeX would be a strong indicator of manual effort, right?

...no?

Just one Google search for "latex editor" turned up more than two on the first page.

https://www.overleaf.com/

https://www.texpage.com/

It's not that different from using a markdown editor.


I think people broadly feel like all of this is happening inevitably or being done by others. The alignment people struggle to get their version of AI to market first - the techies worry about being left behind. No one ends up being in a position to steer things or have any influence over the future in the race to keep up.

So what can you and I do? I know in my gut that imagining an ideal outcome won't change what actually happens, and neither will criticizing it really.


In the large, ideas can have a massive influence on what happens. This inevitability that you're expressing is itself one of those ideas.

Shifts in dominant ideas can only come about through discussion. And sure, individuals can't control what happens; that's unrealistic in a world of billions. But each of us is invariably putting a little bit of pressure in some direction. Ironically, you are doing that with your comment even while expressing the supposed futility of it. And overall, all these little pressures do add up.


How will this pressure add up and bubble up to the sociopaths whom we collectively allow to control most of the world's resources? It would require all these billions to collectively understand the problem and align towards a common goal. I don't think this was a design feature, but globalising the economy created hard dependencies, and the internet's global village created a common mindshare. It's now harder than ever to effect a revolution, because it needs to happen everywhere at the same time, with billions of people.

> How will this pressure add up and bubble up to the sociopaths whom we collectively allow to control most of the world's resources?

By things like: https://en.wikipedia.org/wiki/Artificial_Intelligence_Act

and: https://www.scstatehouse.gov/sess126_2025-2026/bills/4583.ht... (I know nothing about South Carolina, this was just the first clear result from the search)


To be clear - the sociopaths and the culture of resource domination that generates and enables them is the real problem.

AI on its own is chaotic neutral.


>So what can you and I do?

Engage respectfully, try to see other points of view, and try to express your own. I decided some time ago that I would attempt to continue conversations on here to try, at the least, to get people to understand that other points of view can be held by rational people. It has certainly cost me karma, but I hope there has been a small amount of influence. Quite often people change their minds not by losing arguments, but by seeing other points of view and then being given time to reflect.

>I know in my gut that imagining an ideal outcome won't change what actually happens

You might find that saying what you would like to see doesn't get heard, but you just have to remember that you can get anything you want at Alice's Restaurant (if that is not too oblique of a reference)

Talk about what you would like to see. If others would like to see it too, they might join you.

I think most people working in AI are doing so in good faith and are doing what they think is best. There are plenty of voices telling them how not to do it, and many of those voices are contradictory. The instances of people saying what to do instead are much fewer.

If you declare that events are inevitable, then you have lost. If you characterise Sam Altman as a sociopath playing the long game of hiding in research for years, just waiting to pounce on the AI technology that nobody thought was imminent, then you have created a world in your mind where you cannot win. By imagining an adversary without morality, it's easy to abdicate the responsibility of changing their mind: you can simply declare it can't be done. Once again choosing inevitability.

Perhaps try to imagine the world you want and just try to push a tiny fraction towards that world. If you are stuck in a seaside cave and the ocean is coming in, instead of pushing the ocean back, look to see if there is an exit at the other end. Maybe there isn't one, but at least go looking for it, because if there is, that's how you find it.


Hypothetically, however, if your adversary is indeed without morality, then failing to acknowledge that means working with invalid assumptions. Laboring under a falsehood will not help you. Truth gives you clear-eyed access to all of your options.

You may prefer to assume that your opponent is fundamentally virtuous. It's valid to prefer failing under your own values to giving them up in the hope of winning. Still, you can at least know that that is what you are doing, rather than failing and not even knowing why.


This lacks any middle ground. I mean, is the world divided between adversaries and allies? Are assumptions either valid or invalid? Are humans either "fundamentally virtuous" or "without morality"?

Such a crude model doesn't help in navigating reality at all.


I'm sure you could get an LLM to create a plausible-sounding justification for every decision? It might not be related to the real reason, but coming up with text isn't the hard part there, surely.

> I'm sure you could get an LLM to create a plausible-sounding justification for every decision.

That's a great point: funny, sad, and true.

My AI class predated LLMs. The implicit assumption was that the explanation had to be correct and verifiable, which may not be achievable with LLMs.


It seems solvable if you treat it as an architecture problem. I've been using LangGraph to force the model to extract and cite evidence before it runs any scoring logic. That creates an audit trail based on the flow rather than just opaque model outputs.
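Roughly, the shape is something like this minimal sketch (not my production pipeline - the state fields, node names, and the stubbed extractor are placeholders, and the real version makes LLM calls inside the nodes):

    from typing import TypedDict
    from langgraph.graph import StateGraph, START, END

    class ReviewState(TypedDict):
        document: str
        evidence: list   # verbatim quotes pulled from the document
        score: float

    def extract_evidence(state: ReviewState) -> dict:
        # Stand-in for an LLM call that returns verbatim quotes;
        # stubbed here so the sketch is self-contained and runnable.
        quotes = [ln for ln in state["document"].splitlines() if ln.strip()]
        return {"evidence": quotes}

    def score(state: ReviewState) -> dict:
        # Scoring only reads the extracted evidence, never the raw
        # document, so the citation step can't be skipped.
        return {"score": float(len(state["evidence"]))}

    graph = StateGraph(ReviewState)
    graph.add_node("extract_evidence", extract_evidence)
    graph.add_node("score", score)
    graph.add_edge(START, "extract_evidence")
    graph.add_edge("extract_evidence", "score")
    graph.add_edge("score", END)

    app = graph.compile()
    result = app.invoke({"document": "claim one\nclaim two",
                         "evidence": [], "score": 0.0})
    # result["evidence"] is the audit trail; result["score"] came after it.

The point is that the audit trail is enforced by the graph topology rather than by prompting alone.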

It's not. If you actually look at any chain-of-thought stuff long enough, you'll see instances where what it delivers directly contradicts the "thoughts."

If your AI is *ist in effect but told not to be, it will just manifest as highlighting negative things more often for the people it has bad vibes for. Just like people will do.


Yes, they will, they'll rationalize whatever. This is most obvious w/ transcript editing where you make the LLM 'say' things it wouldn't say and then ask it why.

It sounds like you're saying we should generate more bullshit to justify bullshit.

They said "could", not "should".

I believe the point is that it's much easier to create a plausible justification than an accurate justification. So simply requiring that the system produce some kind of explanation doesn't help, unless there are rigorous controls to make sure it's accurate.


Why windows and not the walls of their homes? In my experience people rarely tag windows, or cars.

Do you actually think ICE cares about your legal citizenship status?

Yes. That's very relevant to their aims.


That will change. Soon.

Oh okay!

If you design it so you don't have access to the data, what can they do? I'm sure there's some cryptographic way to avoid Microsoft having direct access to the keys here.
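For example (a sketch of the general idea only, not anything Microsoft actually ships): wrap the recovery key client-side with a key derived from a user passphrase, so only ciphertext ever gets uploaded.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def wrap_recovery_key(recovery_key: bytes, passphrase: str) -> dict:
        # Derive a wrapping key from the passphrase; the provider
        # never sees the passphrase or the derived key.
        salt = os.urandom(16)
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=600_000)
        wrapping_key = kdf.derive(passphrase.encode())

        # Encrypt the recovery key; only salt, nonce, and ciphertext
        # would ever leave the machine.
        nonce = os.urandom(12)
        ciphertext = AESGCM(wrapping_key).encrypt(nonce, recovery_key, None)
        return {"salt": salt, "nonce": nonce, "ciphertext": ciphertext}

The obvious tradeoff: if the user forgets the passphrase, nobody can recover the key, which is presumably why the default escrows it where Microsoft can read it.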

If you design it so you don't have access to the data, how do you make money?

Microsoft (and every other corporation) wants your data. They don't want to be a responsible custodian of your data, they want to sell it and use it for advertising and maintaining good relationships with governments around the world.


> If you design it so you don't have access to the data, how do you make money?

The same way companies used to make money, before they started bulk harvesting of data and forcing ads into products that we're _already_ _paying_ _for_?

I wish people would have integrity instead of squeezing out every little bit of profit from us they can.


People arguably cannot have integrity unless all the other companies they compete with also have integrity. The answer is legislation. We have no reason to allow our government to use “private” companies to do what it cannot do itself and then have them turn the results over to government agencies. Especially when they are willfully incompetent.

The same can be said of using “allies” to mutually snoop on citizens then turning over data.


I think you’re conflating lots of different types of data into one giant “data.”

Microsoft does not sell / use for advertising data from your Bitlocked laptop.

They do use the following for advertising:

Name / contact data
Demographic data
Subscription data
Interactions

This seems like what a conspiracy theorist would imagine a giant evil corporation does.

https://www.microsoft.com/en-us/privacy/usstateprivacynotice


What are you talking about?

> I'm sure there's some cryptographic way to avoid Microsoft having direct access to the keys here.

FTA (3rd paragraph): don't upload the keys to MSFT by default.

>If you design it so you don't have access to the data, what can they do?

You don't have access to your own data? If not, they can compel you to testify about who or what is the next step to accessing the data, and they chase that.


Computers are banned in everything except specific tournaments for computers, yeah. If you're found to have consulted one during a serious competition, your wins are of course stripped; a lot of measures have to be taken at those events to prevent someone from getting even a few moves from a model in the bathroom.

Not sure how smaller ones do it, but I assume watching to make sure no one has any devices on them during a game works well enough if there's no money at stake?


The simpler approach, probably, is to say that AI-generated text/code is not a contribution and will be banned if found.

You won't get a hundred percent hit rate identifying it, but at least it filters the really low-effort, obvious stuff?


The American tendency to move away from family earlier is probably involved.

Pricing anything into the cost of food would be political poison. Paying farmers to grow nothing is considered preferable to that.

It's not always about price. Paying farmers to grow nothing ensures they stay open if we need them to grow something.

When I farmed we had set aside land paid for by the government. When there were predicted shortages on food in the future, we were allowed to farm that ground.

You don't want farmers going under. It just takes one bad year that way and we're all fucked. I've never lived through a proper famine, but Grandpa talked about the dust bowl and depression. It sounded fucking awful.


This exactly.

The fuss made about agricultural subsidies by non-farmers is misguided. Dropping subsidies doesn't make food cheaper; it makes it go away.

Consumers are addicted to cheap food, so they pay taxes instead to make up the difference. Given a progressive tax system this actually is a very efficient approach to take. And overall, as a % of the total budget, these subsidies are insignificant.

What is hurting farmers is reduced markets. USAID used to buy up a lot of surplus production (effectively a back-door subsidy), and lots got exported to China et al. Given the economic antagonism towards the US (thanks to things like tariffs and insults), demand for US food exports either dropped naturally (eg Canada) or via reciprocal tariffs (eg China).

Politicians like to say "we don't make things here anymore", ignoring the most fundamental production of all (farming). They destabilize foreign trade and (if we look at more labor-intensive crops) target farm workers for deportation.

To be fair, agriculture states are also red states, so it's fair to say they voted for this.


>What is hurting farmers is reduced markets

I know there is a rule about reading the article, but did you? This [trend] is nothing new; USAID has nothing to do with it other than short-term changes.


The vast majority of countries have barriers preventing our highly efficient production from selling in their markets. Think Argentina and meats, Switzerland and all things cattle, the EU and pretty much everything.

Tariffs were one way to pry open those markets, but of course the few agricultural products that were already selling were affected in the retaliation. It will take some time for things to sort out.


Not surprisingly, most countries want to be self-sufficient in food production, so tariffs on food imports make sense.

Unfortunately, though, I don't think US tariffs are the solution here. Leaving aside that antagonizing the end consumer seems unproductive (eg Canada), there's also a perception in Europe that US food products (especially meat) are of low quality.

Whether that perception is true or not is immaterial. (My own visits to the US and experience of US food would suggest the US optimizes for quantity, not quality, but anecdotes are not data.)

Much of the barrier to exporting beef is the higher food standards, and documentation, required in Europe. Lowering those standards doesn't seem to be politically acceptable either.


That’s what foreign aid is for:

1. Keep strategic production capacity alive.

2. Spread American soft power.

3. Get warm fuzzy feelings because you prevented millions of people from dying of starvation.


No, the US will not depend on foreign aid to [primarily] feed its citizens. Never going to happen.

You've got it backwards - foreign aid using US-grown crops provides increased, very stable demand. Take any excess grain produced in a given year and ship it to another country; the farmers get paid well for it, so they keep their productive capacity high, and the marginal cost of getting it to a charity overseas is low anyway. This means there's always enough grain to feed our citizens.

And it keeps foreign countries dependent on us and gives us another avenue to coerce them. Wins for us all around.
