Hacker News | mk67's comments

It lacked Trump's face and branding.


I work at SAP, and the working language is English, even in Germany.


No, in basically all countries even verbal contracts are valid and enforceable.


But verbal between whom? How can one be sure they're talking to a human on the other end of an SMS and not a chatbot?

We already went over how this doesn't work more than a year ago with the $1 Tahoe [0]. Spoiler: no car changed hands based on that "agreement".

[0] https://jalopnik.com/chevrolet-dealer-ai-help-chatbot-goes-r...


On the other hand, Air Canada was forced to honor a refund policy made up by a chatbot [1]. That was in Canada, not the US, but it nonetheless points to courts being willing to accept that a promise made by a chatbot you programmed to speak in your name is just as binding as a promise you made yourself.

[1] https://arstechnica.com/tech-policy/2024/02/air-canada-must-...


At least in the US, establishing a legal contract requires more than just an attestation and agreement by both parties (verbal or written or telegraphed or whatever).

For example it’s not a contract if there is no “consideration”, a legal term meaning the parties have exchanged something of value.

IANAL, but “abuse of telecom resources” is the more likely flavor of legal hot water you might land in. I would absolutely not worry about a fraudster taking me to court.


A contract requires a "meeting of the minds", i.e. intentional assent from both sides. I am not sure text generated by a fully automated bot can be treated as intentional assent.


All this non-lawyer programmer legal analysis is always fun, because no one really knows. When I send an email, aren't I just telling my email "robot" to do something? This is one layer beyond that: my LLM robot is sending text messages on my behalf.


When you send an email, there's your conscious intent behind it. So it doesn't matter what technology is in between, as long as your mind is moving it. If you didn't intend it (say I know you are on vacation and send you an email saying "if you agree to pay me $1000, send me back a vacation reply"), then your mail system sending me a vacation reply does not constitute an intentional action, because it would send the reply to anything. It is true that I am not a lawyer, but laws often make sense and derive from common sense. Not always, but in such a fundamental matter as contracts they usually do.


That's a good example. But that auto-reply is a kind of bot. "Sensible" is just separate from what's legally actionable in too many cases. I do see LLMs as just the next step in auto-reply. We already know companies use them to process your text requests and descriptions when you ask for help, and they auto-answer things; there are endless stories, even today, of awful, unsuitable responses from LLM systems.


All true, but these LLM systems aren't random; there's a certain intent behind them, they are supposed to do something. So if they do what they are supposed to, then the intent, which is human intent, exists. But if they do something that the human creator of the tool did not intend, I don't think any court would recognize it as the basis for a contract.


Except when they are not. In Europe, such an agreement is followed by a contract, and you have time to refuse it.

This is one of the reasons tele-sales do not work that well here (telemarketing is still an abomination, though)


Enforceable but not necessarily enforced.


It definitely will be if you go to court. As soon as you have witnesses, there is little chance of getting out of a verbal contract.


This is a gross oversimplification of the law. There isn't some "gotcha", like some schoolyard disagreement: "I gotcha! You said it! Derik heard it, you gotta do it now! Do it! Do it! Do it!"

Yes, you can enforce a verbal contract. You'll need to show what exactly you agreed to which is going to be vague due to the nature of a verbal contract. You'll need to show an offer and acceptance, consideration, intention to create legal relations, legal capacity, and certainty. So no, you can't offer to buy your buddy's car for $1 when you're at the bar grabbing a beer and have them say, "haha, deal" and expect to get their car.


That's a nonsensical statement, as it has helped a lot for decades and allows faster traffic flow in roundabouts.


How does it allow faster traffic flow?

You haven't been driving long (or at all) if you think trusting other drivers' turn signals is a good idea. Theories that look good through a bus window often don't work as well when viewed through a windshield.


Same when I interviewed ~1-2 years ago.


From what I read, it's no longer prosecuted in San Francisco, for example.



I'm in the industry, and nobody has done that for over ten years. There was just a short phase after "Greedy Layer-Wise Training of Deep Networks" (Bengio et al., 2007) was published, when people did it for a few years at most. But already with the rise of LSTMs in the 2010s this wasn't done anymore, and now with transformers it isn't either. Would you care to share how you reached your conclusion? It matches none of my experience over the last 15 years, and we also train large-scale LLMs at our company. There's just not much point to it when gradients don't vanish.


Why don't gradients vanish in large scale LLMs?


Not easy to give a concise answer here, but let me try:

The problem mainly occurs in networks with recurrent connections or very deep architectures. In recurrent architectures it was solved by LSTMs with their gating mechanism. In very deep networks, e.g. ResNet, it was solved by residual connections, i.e. skip connections over layers. There were also other advances, such as replacing sigmoid activations with the simpler ReLU.
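A toy numpy sketch of the skip-connection point above (purely illustrative; the depth, width, and initialization are made up, not from any real model): pushing a gradient back through a deep stack of sigmoid layers multiplies many small Jacobians together and shrinks it toward zero, while adding an identity skip path around each layer keeps the gradient alive.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Numerically stable form: sigmoid(x) == 0.5 * (1 + tanh(x / 2))
    return 0.5 * (1.0 + np.tanh(0.5 * x))

depth, width = 50, 16  # hypothetical toy network
Ws = [rng.normal(0.0, 1.0 / np.sqrt(width), (width, width)) for _ in range(depth)]

def grad_norm(residual: bool) -> float:
    """Norm of d(output)/d(input) backpropagated through `depth` sigmoid layers."""
    x = rng.normal(size=width)
    acts = []
    for W in Ws:
        a = sigmoid(W @ x)
        acts.append((W, a))
        x = a + x if residual else a      # skip connection adds the identity path
    # Backward pass: multiply per-layer Jacobians.
    g = np.eye(width)
    for W, a in reversed(acts):
        J = (a * (1 - a))[:, None] * W    # Jacobian of sigmoid(Wx) wrt x
        g = g @ (J + np.eye(width) if residual else J)
    return float(np.linalg.norm(g))

print(grad_norm(residual=False))  # tiny: the gradient has vanished
print(grad_norm(residual=True))   # orders of magnitude larger: identity path survives
```

The per-layer Jacobian without the skip is `diag(a(1-a)) @ W`, whose norm is well below 1, so the 50-fold product collapses; with the skip it becomes `I + diag(a(1-a)) @ W`, which stays close to the identity.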

Transformers, the main architecture of modern LLMs, are highly parallel without any recurrence, i.e. at any layer you still have access to all the input tokens, whereas an RNN processes one token at a time. To address the potential problem due to depth, they also use skip connections.
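To make the "access to all input tokens at every layer" point concrete, here is a minimal single-head self-attention sketch in numpy (untrained random projections; the token count and dimensions are illustrative, not from any particular model):

```python
import numpy as np

rng = np.random.default_rng(1)
n_tokens, d = 5, 8
X = rng.normal(size=(n_tokens, d))  # one embedding per input token

# Query/key/value projections (random and untrained, just for shape).
Wq, Wk, Wv = (rng.normal(0.0, 1.0 / np.sqrt(d), (d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

# Scaled dot-product attention: every token scores every token, so any
# two positions interact within a single layer, with no recurrence.
scores = Q @ K.T / np.sqrt(d)                  # (n_tokens, n_tokens)
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
out = weights @ V                              # (n_tokens, d)

print(weights.shape)  # (5, 5): each of the 5 tokens attends to all 5 tokens
```

Contrast with an RNN, where information from token 1 reaches token 5 only after being squeezed through 4 sequential state updates; here the path length between any two positions is 1.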


I thought sunlight and water also kill them.


I think the original idea was vampires don't stay dead unless they're staked through the heart.


There is no LLM in Tesla cars. Otherwise correct. I'd rather expect they use a convnet or a vision transformer, probably the former.

