The United Nations has just declared its intent to shape the future of artificial intelligence. In Resolution A/79/L.118, the General Assembly established two shiny new initiatives: an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance.
On paper, this looks like a milestone—“the world comes together to govern AI.” In practice, it’s another example of the UN doing what it does best: launching elaborate processes that generate legitimacy without delivering power. The resolution is a masterclass in symbolism, not substance.

The UN’s IPCC for AI—But Declawed

The resolution’s crown jewel is the Independent International Scientific Panel on AI, a 40-member body tasked with producing annual reports on the risks and opportunities of artificial intelligence. Its design borrows directly from the IPCC, the climate panel that has spent decades distilling climate science into global consensus reports. But here’s the catch: the AI Panel is forbidden from making policy prescriptions. Its mandate is to be “policy-relevant but non-prescriptive.” Translation: it can tell you the house is on fire but cannot recommend calling the fire department. That’s not governance—that’s commentary.

And then comes the real kicker: military AI is explicitly excluded. The deadliest and most destabilizing applications of AI—autonomous weapons, drone swarms, battlefield decision systems—are off the table. The UN has built a nuclear oversight body that refuses to talk about bombs. This exclusion alone guts the credibility of the entire exercise.

The Global Dialogue: A Diplomatic Talk Shop

The second half of the resolution is the Global Dialogue on AI Governance, a two-day annual meeting alternating between New York and Geneva. Governments, corporations, academics, and NGOs will come together to “share best practices,” “exchange views,” and draft “summaries.” It’s the classic UN formula: convene everyone, offend no one, produce glossy reports, and delay actual decisions.

While Brussels is passing the AI Act, Washington is issuing executive orders, and Beijing is embedding AI into its governance and security systems, the UN is promising two days of speeches, panels, and PowerPoint presentations.
This is process fetishism at its most refined—talking about talking, enshrined in diplomatic language, while the real regulatory battles are fought elsewhere.

The North–South Rhetoric, Without Redistribution

To its credit, the resolution acknowledges the growing divide between AI “haves” and “have-nots.” It stresses the need to close digital gaps, build capacity in developing countries, and ensure global representation. But there’s no serious funding mechanism. The entire system relies on voluntary contributions—from governments, tech giants, financial institutions, and philanthropic foundations. In practice, this means Big Tech can bankroll the very process that purports to scrutinize it. That is not multilateralism; it’s regulatory capture dressed up as inclusivity.

The Wrong Venue for AI Governance

It took member states eight months just to agree on the fine print of a single slice of the Global Digital Compact—the AI section. That pace isn’t diplomacy; it’s paralysis. And it’s a flashing red warning light: the UN is the wrong place to govern a technology that reinvents itself in weeks while diplomats argue for years.

The problem isn’t just speed. It’s structure. The UN General Assembly thrives on consensus around universal problems—like climate change, where physics leaves no room for geopolitics. AI is the opposite. It’s not a shared challenge; it’s a strategic weapon. For Washington, AI means “trusted ecosystems” and export controls to blunt Beijing’s rise. For Beijing, it’s “digital sovereignty” and state power through surveillance and industrial dominance. These worldviews don’t meet in the middle—they clash.

Trying to regulate AI at the UN is like trying to build a Formula 1 car by committee at a town hall meeting: the process is slow, the arguments endless, and the end product guaranteed to be obsolete before it ever leaves the garage. Put the U.S. and China in the same UN room, add 190 other states and a phalanx of Big Tech lobbyists, and the outcome is predictable: not rules, not enforcement, not leadership—just lowest-common-denominator statements that disguise a contest for power as consensus.

The American Retreat, the Chinese Opportunity

The United States has largely stepped back from the UN as a serious arena for tech governance. Successive administrations have been skeptical of ceding authority to multilateral bodies, preferring coalitions of like-minded states (the OECD, the G7, or the nascent US–EU Trade and Technology Council). Washington will not allow the General Assembly to dictate rules that bind Silicon Valley.

China, by contrast, thrives in UN processes. It has mastered the art of incremental influence: securing leadership positions in technical agencies, shaping language in resolutions, and embedding its preferred concepts—“digital sovereignty,” “development first,” “non-interference”—into the multilateral bloodstream. Resolution A/79/L.118 is fertile ground for this. By excluding military AI, the resolution sidesteps the area where Beijing faces the most scrutiny. By emphasizing capacity-building and equity, it opens the door for China to present itself as the champion of the Global South, offering AI partnerships through its Belt and Road framework. Meanwhile, the voluntary funding model leaves plenty of room for Chinese-backed foundations and companies to bankroll participation, especially from developing states.

The result? The UN’s AI process risks becoming a stage where China amplifies its narrative of “responsible, state-led AI,” while the United States watches from the sidelines.

Why Symbolism Still Matters

And yet, dismissing the resolution outright would be a mistake. Symbolism has power. By creating a standing Panel and a recurring Dialogue, the UN has institutionalized AI governance in the international system.
Even if the reports are toothless, they will be quoted in policy debates, cited by activists, and featured in headlines. Norms often start as soft, non-binding ideas before hardening into law. The IPCC was derided as ineffectual when it was created in 1988; three decades later, its reports drive climate policy worldwide. Something similar could happen with AI—though the UN’s starting point is far weaker.

The Bottom Line

Resolution A/79/L.118 is visionary in appearance, hollow in substance. It builds a global AI “talking machine”—a panel to pontificate, a dialogue to deliberate—while dodging the most urgent issues: military AI, binding standards, and sustainable funding. The UN wants to prove it is still relevant in the digital age. But the reality is stark: the future of AI governance will not be decided in the General Assembly. It will be decided in Washington, Beijing, Brussels, and in the boardrooms of a handful of companies that control the technology.

If the world is looking for a serious AI regulator, this is not it. If it’s looking for a stage where great power rivalry, corporate lobbying, and global South frustrations collide—this is exactly it.

[Timeline of the resolution’s rollout]