Territorial Control of Data and Compute in Generative AI: A New Paradigm of Competitive Advantage
The rapid advancement of generative artificial intelligence (AI) is increasingly shaped by control over two critical inputs: high-quality data and the compute infrastructure required to train and update large-scale model weights. This paper argues that these inputs, rather than algorithmic talent or novel architectures alone, have become the decisive strategic assets in generative AI, creating steep structural barriers to entry. We examine who controls these resources and how that control is distributed territorially across countries. Building on literature in industrial organization, competition policy, and international political economy, we identify a gap in existing research: insufficient attention to the territorial concentration of “model-weight-setting” capacity, i.e., the ability to train cutting-edge foundation models. We find that this capacity is overwhelmingly concentrated in a few firms and regions, reinforcing market concentration and limiting most countries’ sovereignty over AI development. While innovations in model architectures and efficiency (illustrated by the DeepSeek case) can reduce compute requirements at the margin, they do not eliminate the scale advantages conferred by privileged access to massive proprietary datasets and nation-scale computing clusters. The paper concludes with implications for competition and regulation, arguing that the territorial control of data and compute resources poses a fundamental structural challenge for both market competition and global equity in AI.