We are repeatedly told that artificial intelligence (AI) systems cannot be considered human inventors or authors. The transatlantic (EU and US) debate is fixated on the output: who "created" it, who "owns" it, and whether it is "original". Meanwhile, the real contest is upstream: who gets to ingest protected content at scale, under what conditions, and with what disclosure.

This is where the EU and the US diverge. The EU's posture is structural and ex ante; it seeks to regulate inputs through text-and-data-mining rules, opt-out mechanisms, and transparency requirements. The US posture is doctrinal and ex post; it guides courts to stretch fair use and other IP doctrines to accommodate industrial-scale training while treating market reconfiguration as a secondary issue. Each approach has its own internal logic. Each also has a blind spot: market power.

In our view, in the context of generative AI (GenAI), the decisive legal question is no longer whether an output resembles a protected work, but whether the training pipeline constitutes a private gatekeeper over culture and knowledge. When a small set of actors can aggregate the world's creative corpus, combine it with computing, and then distribute synthetic substitutes at scale, the familiar IP story about rewarding creators and inventors becomes incomplete. The risk is not only uncompensated extraction. It is the emergence of an "input monopoly" that reshapes markets by controlling data, computing, and distribution.

We will illustrate our argument with a few examples. In Bartz v. Anthropic and Kadrey v. Meta, the courts held that copying books to train an LLM may qualify as fair use, even when the underlying corpora include works sourced from shadow libraries.
Published in: GRUR (Gewerblicher Rechtsschutz und Urheberrecht), Internationaler Teil
Volume 57, Issue 1, pp. 1-4