
It's interesting how AI can be its own worst enemy in this legal system. The very thing it's excellent at is not protected. In practice, there seems to be a strong opportunity to disintermediate brands by acting as a layer of abstraction between the buyer and the seller or manufacturer. An AI agent likely cares less about brand or about sharing customer information with the seller; that's just more friction and tokens spent.


I think it's just a case of dealing with something that has no precedent. We have never had to determine where the line is between a tool and an employee when both can be instructed with natural language. If we were to evaluate AI as if it were in a contract with us, trading its time and effort for consideration, it would be an easy ruling. If we were to evaluate AI as if it were a tool that operates as an extension of the operator's skill, without any independent additions, it would also be an easy ruling. But since we now have a tool that can produce results beyond what we could produce with any former class of tools, we have to create entirely new models for mapping these tools onto the complexity of real-life conflicts, where people have different goals and where we must decouple fairness from intentions.



