Does this apply to their internal use as well? They can really only claim DMCA protection for the leaked code if it was authored by humans. Claude attribution in their internal git history would make a strong case that they do not, in fact, own the copyright to Claude Code itself and are therefore abusing the DMCA system to protect leaked trade secrets rather than to protect copyright.
According to the US Copyright Office, fully AI-generated works aren’t eligible for copyright because they don’t have human authors. They’re in the public domain by default.
It seems like it's an active area of legal thought (IANAL though).
Recent relevant discussion about this in the chardet repo between the chardet maintainer who relicensed the chardet code and Richard Fontana, a well-regarded US IP lawyer who's worked for Red Hat (now IBM) for decades:
My takeaway from the conversation there is that being in an edit loop, where the files are AI-generated under your direction rather than edited directly by you, means the files are "AI-authored" for copyright purposes rather than authored by you.
But I double stress, I'm not a lawyer so may have misunderstood things radically.
I think that may not be answerable until a case concerning it has been heard and ruled on. A lawyer may have a better answer for you, but if I had to bet then I'd put $100 on it being something like 'it depends'.
It's interesting how AI can be its own worst enemy in this legal system: the very thing it's excellent at is not protected. In practice, there seems to be a strong opportunity to disintermediate brands by acting as a layer of abstraction above the seller and manufacturer. An AI agent likely cares less about brand, or about sharing customer information with the seller; it's just more friction and tokens spent.
I think it's just a case of dealing with something that has no precedent. We have never had to determine where the line is between a tool and an employee when both can be instructed with natural language. If we were to evaluate AI as if it were in a contract with us, exchanging its time and effort for consideration, it would be an easy ruling. If we were to evaluate AI as if it were a tool that operates as an extension of the operator's skill, without any independent additions, it would also be an easy ruling. But since we now have a tool that can produce results independent of our ability to produce them with any former class of tools, we have to create entirely new models for mapping these tools onto the complexity of real-life conflicts, where people have different goals and where we must decouple fairness from intentions.