AI Industry vs Copyright Law: the 2024 battlefield

Jan 23, 2024 | Alumni, Digital, International Law

In 2023, with OpenAI’s ChatGPT and other competitors going mainstream, artificial intelligence (AI) gained momentum and general acceptance, but the year also exposed most people to how AI tools function, and sometimes “hallucinate”.
It was a watershed year in AI policy and regulation: the European Union reached political agreement on its AI Act, the first comprehensive AI law; the United States President issued executive orders and the Senate held hearings; and China focused mainly on targeted restrictions, such as rules for recommender algorithms. Globally, the UK Prime Minister hosted the first AI Safety Summit in November, and in December the UN Secretary-General, António Guterres, welcomed the newly established High-Level Advisory Body on AI in New York.

As Microsoft-backed chatbot ChatGPT and other generative artificial intelligence (GenAI) technologies have become pervasive in 2023, we expect an acceleration with more activity on the policy and regulatory front in 2024. The EU AI Act is set to become law, and the US Congress is converging on an AI bipartisan legislative agenda. 

However, this year, the main challenge to the AI industry might come not from governments or lawmakers but from the courts. Particularly in the United States, the upcoming fights over copyright in AI are more likely to shape the technology’s trajectory than any regulatory or legislative action.

At the heart of the matter lie concerns about using copyrighted material to train AI systems and generate results that may be substantially similar to existing copyrighted works.

With interpretations of copyright law set to pose a substantial threat to the sector in 2024, the coming legal battles are expected to shape the future of AI innovation. They may even alter the industry’s economic models and overall direction.

According to tech companies, the lawsuits could create massive barriers to the expanding AI sector. The plaintiffs, on the other hand, argue that the firms used their work without authorisation and owe them fair compensation.

Legal Challenges and Industry Impact

Under copyright law, AI programmes that generate outputs comparable to existing works could be found to infringe if they had access to those works and produced substantially similar results.

In late December 2023, the New York Times became the first major American news organisation to file a lawsuit against OpenAI and its backer, Microsoft, asking the court to order the destruction of large language models (LLMs), including those behind ChatGPT, and of any training datasets that incorporate the publication’s copyrighted content. The newspaper alleges that the companies’ AI systems engaged in wide-scale copying in violation of copyright law.

This high-profile case illustrates the broader legal challenges faced by AI companies. Authors, creators, and other copyright holders have initiated lawsuits to protect their works from being used without permission or compensation. As recently as 5 January 2024, authors Nicholas Basbanes and Nicholas Gage filed a new complaint against OpenAI and its investor, Microsoft, alleging that their copyrighted works were used without authorisation to train OpenAI’s AI models, including ChatGPT. In the proposed class action, filed in federal court in Manhattan, they charge the companies with copyright infringement for including multiple works by the authors in the datasets used to train OpenAI’s GPT large language models.

This lawsuit is among a series of legal cases filed by multiple writers and organisations, including well-known names like George R.R. Martin and Sarah Silverman, alleging that tech firms used their protected work to train AI systems without permission or payment.

The results of these lawsuits could have significant implications for the growing AI industry, with tech companies openly warning that any adverse verdict could create considerable hurdles and uncertainty.

Ownership and Fair Use

Questions about who owns the outcome generated by AI systems—whether it is the companies and developers that design the systems or the end users who supply the prompts and inputs—are central to the ongoing debate.

The “fair use” doctrine, often cited by the United States Copyright Office (USCO), the United States Patent and Trademark Office (USPTO), and the federal courts, is another critical factor, codified in the 1976 Copyright Act. As a key limitation on the exclusive rights of copyright holders, it allows creators to build upon copyrighted work to produce new, “transformative” work. However, its application to AI-generated content, and to LLMs trained and fine-tuned on massive datasets, has yet to be clarified and tested in the courts.

Policy and Regulation

The USCO has initiated a project to investigate the legal and policy challenges that AI poses to copyright. This involves evaluating the scope of copyright protection in works created with AI tools and the use of copyrighted content to train foundation models and LLM-powered AI systems. The endeavour acknowledges the need for clarification and future regulatory adjustments to address the pressing issues at the intersection of AI and copyright law.

Industry Perspectives

Many stakeholders in the AI industry argue that training generative AI systems, including LLMs and other foundational models, on the large and diverse content available online, most of which is copyrighted, is the only realistic and cost-effective method to build them for the advancement of AI and the benefit of all. According to the Silicon Valley venture capital firm Andreessen Horowitz, extending strict copyright rules to AI models would potentially constitute an existential threat to the current AI industry.

The (bumpy) road ahead

The intersection of AI and copyright law is a complex issue with significant implications for innovation, legal liability, ownership rights, commercial interests, policy and regulation, consumer protection, and the future of the AI industry.

The AI sector in 2024 is at a crossroads with existing US copyright law, with obvious implications for the rest of the world. The legal system’s reaction to these challenges will be critical in striking the correct balance between preserving creators’ rights and promoting AI innovation and progress.

As lawsuits proceed and policymakers start engaging with these issues, the AI industry may face significant pressure to adapt, depending on the legal interpretations and policy decisions emerging from the ongoing processes. Ultimately, these legal fights could determine the winners and losers in the market and decide the immediate fate of the new technology.

By Mouloud Khelif
Consultant – Digital Strategy, Policy, Governance
Alumnus of the Executive Master INP (2021), Executive Certificate in SDG Investing (2022), enrolled in the Executive Master in International Relations (2024)
mouloud.khelif@graduateinstitute.ch


The views and opinions expressed in the articles are those of the authors and do not necessarily reflect the position of The Graduate Institute, Geneva.
