Abstract
Artificial Intelligence (AI) is widely regarded as a transformative technology with the potential to advance prosperity across fields such as healthcare, education, and climate change mitigation. By automating tasks, generating predictive insights, and supporting decision-making, AI offers unparalleled opportunities for economic and social progress. However, the rise of AI also presents profound risks, including large-scale social harms, ethical dilemmas, and the potential erosion of human control over autonomous systems (Bengio et al., 2023). These challenges are particularly acute in the Global South, where issues of political, social, and economic exclusion are magnified (Arun, 2020; Smith & Neupane, 2018). Research further highlights how the growing power of a handful of digital platforms shapes global geopolitics and economics, amplifying risks such as surveillance, behavioural manipulation, and societal inequities (Ezrachi & Stucke, 2022; Gawer & Bonina, 2024; Ithurbide, 2023; Lehdonvirta, 2024; Bonina et al., 2021; Ada Lovelace Institute, 2022). Addressing these risks requires regionally contextualized strategies that prioritize “inclusive solidarity”: collaboration across diverse actors to mitigate harm and equitably share the benefits of AI.
Latin America’s AI governance has evolved in distinct waves (Aguerre, 2024), from initial national AI strategies emphasizing vision and global positioning, to recent shifts toward risk-based regulations influenced by the European AI Act, and most recently, to fostering regional cooperation mechanisms such as the Declaration of Cartagena de Indias (2024). These waves reflect a growing recognition that inclusive governance requires not only adapting global frameworks to local contexts but also strengthening collaboration among governments, firms, academia, and civil society. Understanding these dynamics is essential for addressing the unique challenges of the region while ensuring that AI governance frameworks foster equitable and sustainable innovation.
This paper examines the current landscape of AI regulation in Latin America, with a focus on understanding its main challenges and ways forward to address the regulatory divide and strengthen regional cooperation. Covering developments over the last five years, the study employs a mixed-methods approach, combining novel data sources—including the Global Data Barometer, the Index for Responsible AI, and UNESCO’s AI readiness assessments—with fieldwork conducted in Brazil, Chile, Colombia, and Uruguay. These empirical findings are enriched by insights from policy documents indexed in Overton and interviews with key stakeholders across the region.
Our analysis contributes to the growing discourse on responsible AI by elucidating pathways for Latin American countries to engage with global AI governance. The research highlights opportunities for the region to leverage the Brussels Effect (Bradford, 2020, 2023; Ylönen, 2024) in ways that shift power over data and AI systems towards societal benefit. Moreover, it underscores the necessity of fostering cross-national partnerships and collective regulatory frameworks that amplify Latin America’s voice in shaping global standards. The study offers actionable recommendations for bridging regulatory divides and promoting equitable and sustainable AI practices across the region.
A distinctive feature of this project lies in grasping the significance of AI regulation in Latin America. Its main contribution is recognising the imperative of deploying Responsible AI globally. By understanding the Brussels Effect and its implications, the project aims to delineate pathways for Latin America to work actively towards inclusive solidarities: participating in the shaping of global AI standards, understanding AI's risks, and contributing to AI safety for and within the region. The research will shed light on how regional collaboration and partnerships can be fostered to bridge the regulatory divide and to inform policy.