Anthropic agrees to pay $1.5 billion US to settle authors' class action over AI training
Anthropic told a federal judge in San Francisco on Friday that it had agreed to pay $1.5 billion to settle a class-action lawsuit from a group of authors who accused the artificial intelligence company of using pirated copies of their books to train its AI chatbot, Claude, without permission.
Anthropic and the plaintiffs asked U.S. District Judge William Alsup in a court filing to approve the settlement, having announced the agreement in August without disclosing its terms or amount.
"If approved, this landmark settlement will be the largest publicly reported copyright recovery in history, larger than any other copyright class-action settlement or any individual copyright case litigated to final judgment," the plaintiffs said in the filing.
The proposed agreement marks the first settlement in a string of lawsuits against technology companies, including OpenAI, Microsoft and Meta Platforms, over their use of copyrighted material to train generative AI systems.
As part of the settlement, Anthropic said it would destroy the copies of books it downloaded from LibGen and PiLiMi (Pirate Library Mirror). Under the agreement, the company could still face infringement claims related to material produced by its AI models.
In a statement, Anthropic said the company "is committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems." The settlement includes no admission of liability.
"This landmark settlement is a vital step in acknowledging that AI companies cannot simply steal authors' creative work to build their AI just because they need books to develop quality LLMs," Authors Guild CEO Mary Rasenberger said in a statement.
"These deep-pocketed companies, worth billions, have stolen from those who earn a median income of barely $20,000 US a year. This settlement sends a clear message that AI companies must pay for the books they use, just as they pay for the other essential components of their LLMs."
Although seven million books were downloaded by Anthropic from piracy sites, according to the Authors Guild, only around 500,000 works are covered by the class action, which means the settlement works out to roughly $3,000 per work.
The writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson filed the class action against Anthropic last year. They argued that the company, which is backed by Amazon and Alphabet, illegally used millions of pirated books to teach its AI assistant Claude to respond to human prompts.
Stolen creative work
The writers' allegations echo dozens of other lawsuits brought by authors, news outlets, visual artists and others who say technology companies stole their work to use in AI training.
The companies have argued that their systems make fair use of copyrighted material to create new, transformative content.
Alsup ruled in June that Anthropic made fair use of the books to train Claude, but found that the company violated the authors' rights by saving more than seven million pirated books to a "central library" that would not necessarily be used for that purpose.
A trial had been set for December to determine how much Anthropic owed for the alleged piracy, with potential damages running to hundreds of billions of dollars.
The central question of fair use is still being debated in other AI copyright cases.
Bestselling Vancouver author J.B. MacKinnon recently launched class actions against Nvidia, Meta, Anthropic and Databricks Inc. in the Supreme Court of British Columbia, alleging that his works and those of other Canadian authors were illegally used to train AI.
Another San Francisco judge hearing a similar lawsuit against Meta ruled shortly after Alsup's decision that using copyrighted work without permission to train AI would be unlawful in "many circumstances."