AI is moving power from governments to tech companies


Debates over the governance of artificial intelligence tend to assume that it will be important and transformative across many areas of human endeavor. Yet the question of how its benefits and risks will be distributed — who will win and who will lose — is less commonly articulated.

Techno-utopians enthuse that everyone will win: The pie will be bigger; the rising tide will lift all boats. Concerns about inequality or the environmental impact of AI are batted aside with the promise that AI itself will solve such problems.

Others, including a surprising fraction of those developing AI systems themselves, warn of darker, dystopian futures in which AI turns on humanity, either through misalignment of objectives or through the emergence of a superintelligence that regards its creators the way we might regard lesser creatures such as dogs — or ants. Everyone loses.

Between the extremes are those trying to think through where the gains and losses of AI will fall. In realist circles, it has become common to speak of AI in the language of an arms race, a comfortingly familiar frame that pits the West against a rising China.

An alternative framing adopts a North-South axis, noting the 750 million people without stable electricity and the more than 2 billion unconnected to the internet. For all the worry about misuse of AI, many in developing countries are more concerned about missed uses and being left behind.

Yet the most important divide may not be East-West or North-South but public-private. AI is shifting economic and, increasingly, political power away from governments.

The power wielded by today’s tech giants rivals, in nature and scale, that of the East India Company in the early 19th century, when the company controlled half of global trade and had its own army. Today’s tech behemoths may lack that measure of economic or military power, but their global cultural and political influence is arguably greater. These “silicon sovereigns” set rules, adjudicate disputes, police speech, shape labor markets and elections — functions once associated primarily with states.

Governments have struggled to keep pace. China has shown that a determined state can reassert control, cracking down on major technology firms and restructuring corporate power — though this may amount to replacing private dominance with party oversight. The European Union took a bold step with its AI Act, yet early signs of implementation strain and quiet buyer’s remorse suggest ambivalence about the economic costs. The U.S., by contrast, has been unwilling or unable to regulate at the federal level, even as federal efforts seek to preempt or prohibit more ambitious AI legislation at the state level.

The hesitation is understandable. AI is associated with economic growth, national competitiveness, and military advantage. Politicians worry that aggressive regulation will stifle innovation or drive it elsewhere. Meanwhile, technology companies command significant lobbying resources and enjoy deep integration into the daily lives of voters, even as they deploy tools that enhance surveillance, monetize human attention, and replace human labor.

So, what is to be done? If companies cannot be trusted to self-regulate, if governments are unwilling to legislate, and if international organizations are unable to do more than coordinate — who or what might help mitigate the risks and more evenly distribute the benefits of AI?

The first answer is, of course, us. Users can choose not to support companies that ignore safety or exacerbate inequality. The problem is that individual users have vanishingly little leverage over companies whose business model is premised in part on hiding that lack of agency from consumers. The tragedy of AI governance lies in this inverse relationship between leverage and interest: Users have interest but no leverage; tech companies have leverage but no interest in constraining their own behavior if doing so would limit their profits.

Just as organized labor increased workers’ bargaining power, organized users might gain a greater say in how technology is developed and deployed. Global privacy movements, for example, shifted markets at least modestly. It is conceivable that similar norms might emerge in the AI space, perhaps along the lines of “responsible” AI that is more trustworthy and less prone to hallucinations, or more “open” AI, with greater transparency about how decisions are made and how models are trained.

Another form of transparency involves the costs of AI, notably its environmental impact. Various tech companies — and some countries — have announced that their investments in AI mean abandoning climate targets. More information about those costs, whether through moves to subscription models or through disclosure of the electricity and water consumed by the latest AI systems, might influence user — and therefore corporate — behavior.

Market mechanisms, however, will not be enough. One lesson of the global financial crisis of 2007–08 was that if certain banks were “too big to fail,” then they were too big in the first place. There is a strong argument that tech companies — or tech entrepreneurs — that are too big to regulate are too big, period. There have, of course, been efforts to break up those companies. The U.S. Justice Department is currently suing Google and Apple, while the Federal Trade Commission has an ongoing action against Amazon, having unsuccessfully brought actions against Microsoft and Meta.

The EU has linked size to more elaborate obligations and reporting requirements for “gatekeepers” under the Digital Markets Act and for “very large” online platforms and search engines under the Digital Services Act. Only China, however, has successfully broken up tech companies, in a purge lasting from 2020 to 2023 that wiped trillions of dollars off those companies’ share value and saw Alibaba divided into six new entities. These were costs that Beijing was willing to bear, but at which Washington or Brussels might balk, particularly given President Trump’s new chumminess with the tech elite.

An alternative to divestiture is nationalization, with states moving from regulation to outright control and treating AI infrastructure as public utilities or national assets essential to national security or economic stability. For now, there is no appetite for confrontation with technology companies, a timidity reinforced by fears of holding back innovation or falling behind geopolitical rivals, and by the more mundane concerns of anyone running for political office in a social media age.

International institutions face an even steeper challenge. In the 1950s, nuclear governance emerged against the backdrop of unmistakable devastation and a clear existential threat. AI presents no such singular moment of reckoning. Its harms are diffuse: disinformation, labor displacement, surveillance, market concentration. Without a catalytic crisis, coordination remains elusive.

It is possible that fears are overstated. AI may deliver productivity gains and scientific breakthroughs that justify its risks. But even if catastrophic scenarios never materialize, a quieter transformation is already underway. Sovereignty — understood as the authority to set rules, allocate resources, and shape collective futures — is migrating from public institutions to private actors.

The danger is not that machines will rule humanity. It is that those who control them increasingly shape the conditions under which humanity governs itself. If states prove unwilling or unable to reassert meaningful oversight, and if global institutions remain reactive rather than proactive, the first true AI emergency may not be an existential catastrophe but the steady hollowing out of public authority.

The question is not whether AI will be governed. It is by whom.


