Who Sets the Rules? AI Policy in an Era of Geopolitical Shift

The recent AI Summit in France underscored a pressing set of questions: who makes the rules for artificial intelligence, and does a rules-based framework even make sense in a world of shifting political and economic landscapes? Two of the world’s most influential powers, the U.S. and Europe, are prioritizing vastly different approaches: the U.S. remains largely market-driven and resistant to overarching regulations, while Europe emphasizes protectionism and regional control. As a result, American technology companies are facing increasing regulatory constraints in Europe. It is also important to recognize the roles of China and India—two of the world’s largest economies and most populous nations. China has been rapidly expanding its AI capabilities and pushing innovation beyond traditional frameworks, backed by state-led investments and vast data resources (China AI Development); the country has secured over $26 billion in AI funding in recent years. India is also emerging as a global AI hub, with over 900 million internet users and a rapidly growing AI startup ecosystem supported by initiatives like Digital India and AI for All (India AI Growth).

The Political Fault Lines of AI Regulation

In his recent address at the Paris AI Summit, U.S. Vice President JD Vance emphasized the U.S. administration’s commitment to maintaining American leadership in AI. He cautioned against excessive regulation, stating that it could “kill a transformative industry just as it’s taking off.” [1] Vance advocated for a pro-growth approach, highlighting the need for AI to remain free from ideological bias and not become an excuse for authoritarian censorship. He also underscored the importance of AI for job creation within the United States.

His stance reflects a broader debate on AI policy, where the U.S. favors a more laissez-faire approach, while Europe inclines towards stricter regulatory frameworks. Meanwhile, China is developing AI systems that defy Western regulatory paradigms altogether. India is attempting to balance regulation with growth but faces challenges in aligning its AI policies with global standards while fostering homegrown AI leadership. These diverging approaches and views underscore the challenges of establishing universal AI policy structures.

Claire Melamed, CEO of the Global Partnership for Sustainable Development Data, recently argued that AI needs “rules of the road” to ensure fair competition, prevent existential risks, and create a level playing field for innovation. While this vision of a collaborative framework is noble, it largely ignores the political realities shaping AI policy today. Different nations and blocs have starkly different motivations: the European Union sees AI regulation as a means of controlling the dominance of American tech giants while protecting European businesses,[2] while the current administration in the United States views regulation as a potential straitjacket on innovation, prioritizing economic growth and national security instead.

A prime example of the ongoing friction between the European and American approaches is the EuroStack initiative, a European Digital Industrial Policy launched in 2024, which seeks to strengthen European autonomy and sovereignty by building regionally controlled digital infrastructures[3]. While this effort aligns with Europe’s long-standing ambition to reduce reliance on U.S. tech companies, it risks further fragmenting the AI ecosystem and excluding perspectives from Latin America, Africa, and Asia, where AI adoption faces unique challenges due to economic constraints, lack of infrastructure, and reliance on external technologies. As AI dominance grows, these regions risk being sidelined in global AI policymaking, further entrenching disparities in technology access and governance.

India, as one of the fastest-growing AI markets, faces a different challenge—ensuring regulatory flexibility while competing with the AI dominance of the U.S., China, and Europe. With a strong IT sector and a thriving AI startup scene, India seeks to assert its independence in AI policy and avoid over-reliance on foreign AI infrastructure. The Indian government has prioritized AI innovation through initiatives like the National AI Strategy, which promotes ethical AI development while allowing flexibility for industry-led advancements. However, India’s lack of a unified regulatory framework for AI has led to uncertainties in data governance, AI ethics, and cross-border AI collaboration.

The problem is not just about setting rules, but also about determining who has the authority to enforce them. If AI policy is left to individual nations, it will result in fragmented regulatory approaches that ultimately serve national, not global, interests; in this scenario, AI could end up being governed more by economic and geopolitical rivalries than by shared commitments to the public good. On the other hand, if an international body similar to the United Nations (UN)[4] or another multilateral institution were to take charge, this would require compliance from nations that have little incentive to submit to external oversight. Given these realities, global AI policy is unlikely to materialize in any meaningful way beyond loose frameworks and diplomatic rhetoric.

Policy and Implementation: Bridging AI Regulation Strategies

How can AI regulations remain open and flexible as technology develops? Much like how each country has its own visa and travel regulations—where visitors must adhere to local laws while abroad—AI governance will inevitably vary across jurisdictions. However, AI policies can benefit from global dialogue without assuming a single model will fit all, in the same way that international conferences provide a platform for nations to exchange best practices, discuss emerging trends, and showcase innovation. For example, forums like the OECD AI Principles and UNESCO’s AI Ethics Guidelines provide spaces for international cooperation without enforcing rigid, uniform AI regulations across all nations.

India is an example of a country that is navigating this balance. While the U.S. and Europe have taken distinct regulatory stances, India has pursued a more adaptive strategy—fostering AI growth while keeping regulations fluid to accommodate evolving technologies. The Indian government’s focus on AI in critical sectors such as healthcare, agriculture, and digital finance highlights how AI policies can be tailored to national priorities rather than imposed through rigid frameworks.

Perhaps a more programmatic approach should focus on fostering mutual understandings between governments, corporations, and research institutions in critical areas such as ethical AI development, data sharing, and security protocols. AI policies must consider both national interests and global cooperation, protecting fundamental rights and security concerns while ensuring that regulations do not stifle innovation. While formal rules are difficult to enforce across borders, mutual understandings can shape behavior through incentives rather than mandates. This is particularly relevant for research collaborations, medical discoveries, and the security applications of AI. If AI is to advance human progress, it should not be shackled by political divisions that prevent life-saving technologies from crossing borders. The way Brexit produced individually negotiated arrangements between the UK and EU nations rather than a single regulatory framework offers a broader lesson: adaptive, responsive policies can allow flexibility while maintaining shared standards.

AI Frameworks as Public Participation, Not Just Policy

Ultimately, the debate over AI regulations should not be confined to governments and corporations. What is becoming obvious across political divides is that conceptions of democracy, governance, and rules are defined by geopolitical and economic realities and differ across the globe. These differences shape how AI policies are perceived and implemented, reinforcing the need for flexible and inclusive governance models. So how do we ensure that AI-focused collaboration serves the specific needs of each sector rather than imposing a one-size-fits-all policy?

It is essential not to assume that technology companies, government bodies, academia, and the public at large share the same perspective. AI is a transformative force that is already affecting every aspect of society and will continue to grow in influence, and if it is truly meant to serve humanity, everyone must have a voice in shaping its trajectory. Participation in AI policy by industry experts, civil society, and the general public—through multi-stakeholder engagements, citizen assemblies, public consultations, and transparency in AI policymaking—should be prioritized alongside regulatory debates.

Rather than fixating on rigid, top-down rule-making, we should be asking: How do we ensure that AI policies are both implementable and adaptable? How do we create systems that enable both technological progress and accountability, whether locally, nationally, or globally? Perhaps most importantly, how do we create a framework model that adapts to AI’s rapid evolution without stifling its potential?

In a world that is increasingly fractured along political and technological lines, perhaps the most important task is to foster an ongoing, dynamic process of negotiation that keeps AI aligned with the broader public interest and creates a policy model that is flexible, participatory, and responsive to the realities of our time.

[1] https://www.aicommission.org/2025/02/read-jd-vances-full-speech-on-ai-and-the-eu/

[2] https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

[3] https://euro-stack.eu/a-pitch-paper/

[4] https://www.un.org/en/ai-and-global-governance