Imagine the Internet as a vast, invisible superhighway stretching across the globe. Data streams hurtle along like cars, trucks, and motorcycles, carrying emails, videos, financial transactions, and medical records with astonishing speed. The lanes are invisible, but they are meticulously maintained by interconnection agreements between Internet Service Providers (ISPs) and content platforms.
For decades, these agreements have operated quietly, like skilled traffic controllers behind the scenes, ensuring that billions of “vehicles” of information reach their destinations smoothly and efficiently. They are not flashy, and they rarely make headlines, but without them, the entire system would seize up. They are the arteries of the Internet, carrying lifeblood to every corner of the digital world.

Now, imagine a bureaucrat stepping onto that highway, clipboard in hand, telling every driver that any disagreement—any minor fender-bender—must stop the traffic and be resolved through a mandated arbitration lane. Every car, every packet of data, now has to slow down, wait in line, and pass through a slow, opaque system to determine what is “fair.” What was once an agile, self-regulating ecosystem, capable of rerouting instantly around congestion, risks turning into a traffic jam stretching for miles.

This is precisely the path Europe is quietly exploring. Through the Digital Networks Act (DNA), the European Commission is considering a framework that would impose mandatory dispute resolution for interconnection agreements. On paper, it reads like a safety measure: if two parties cannot agree, a neutral arbitrator steps in. But in reality, it’s more akin to replacing skilled traffic engineers with bureaucrats who have never driven on the highway themselves, or worse, forcing toll booths into every lane of free-flowing traffic. One misjudgment in that arbitration lane can create ripple effects across the Internet, slowing the flow, raising costs, and undermining the agility that has allowed the Internet to grow into the global powerhouse it is today.

Internet interconnection is the foundation of the global Internet’s functioning and survivability. It allows networks—large and small, public and private—to exchange traffic seamlessly, ensuring that data can flow efficiently from any user to any destination across the world.
Without interconnection agreements between Internet Service Providers (ISPs), content delivery networks (CDNs), and other operators, the Internet would fragment into isolated networks, undermining its defining feature: universal connectivity.

From a user’s point of view, interconnection is what keeps the Internet fast, reliable, and accessible. Every time someone streams a video, makes a video call, or loads a website from across the world, interconnection is what makes that experience smooth and instant. It’s the invisible system that allows different networks to talk to one another—no matter who owns them or where they are located.

When networks interconnect freely and competitively, users enjoy real, tangible benefits. Data travels more efficiently, which means faster load times, fewer buffering delays, and more consistent service quality. Because providers don’t need to rely on a single path to reach their destinations, competition helps drive down costs and encourages companies to keep improving their services. This open exchange also sparks innovation: new apps, content platforms, and cloud services can emerge and scale quickly, knowing they’ll be able to reach users everywhere without artificial barriers.

Strong interconnection also makes the Internet more resilient. If one network experiences a failure or cyberattack, traffic can automatically reroute through other connections, keeping services online and minimizing disruption. This redundancy is vital for maintaining trust—especially as more of daily life, from banking to education to entertainment, depends on uninterrupted connectivity.

However, the introduction of a centralized dispute resolution mechanism for interconnection could threaten this balance. While ostensibly designed to resolve disagreements between network operators, such a mechanism risks introducing regulatory overreach and bureaucratic interference into what has historically been a flexible, market-driven system.
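The rerouting behavior described above can be seen in miniature. The sketch below is a deliberately simplified model in plain Python, with hypothetical network names; real interconnection uses routing protocols such as BGP rather than breadth-first search, but the principle is the same: as long as a redundant link exists, traffic finds another path when one connection fails.

```python
from collections import deque

def shortest_path(links, src, dst):
    """Find the shortest route between two networks over a set of
    undirected interconnection links, using breadth-first search."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no interconnection left: the networks are partitioned

# Hypothetical topology: two ISPs with a direct peering link,
# plus an indirect route through a transit provider.
links = {("ISP-A", "ISP-B"), ("ISP-A", "Transit"), ("Transit", "ISP-B")}

print(shortest_path(links, "ISP-A", "ISP-B"))  # direct peering: ['ISP-A', 'ISP-B']
links.discard(("ISP-A", "ISP-B"))              # the peering link fails...
print(shortest_path(links, "ISP-A", "ISP-B"))  # reroutes: ['ISP-A', 'Transit', 'ISP-B']
```

Remove the last remaining link and the function returns None, which is exactly the fragmentation scenario described above: without interconnection, the networks simply cannot reach each other.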
Imposing top-down arbitration on interconnection arrangements could have several harmful effects, as the sections that follow explain.
The Copyright experience

To understand why this is dangerous, consider the analogy with copyright disputes between big tech platforms and news publishers. In France, Google was fined €250 million for failing to negotiate in good faith under the EU copyright directive. Behind closed doors, publishers and platforms hashed out deals that were then enforced by regulators—a process opaque to the public and often skewed in favor of the larger, more powerful players.

But interconnection is not copyright. For one, transparency is almost nonexistent. Copyright negotiations happen behind closed doors; the public only sees the final verdict. Who decides what is “fair”? Who sets the fees? For interconnection, secrecy would be catastrophic. ISPs and content networks rely on trust, predictability, and public scrutiny to maintain balance and efficiency. Introducing a black-box arbitration system injects uncertainty into every connection.

Second, the power dynamics are fundamentally different. Copyright disputes pit a dominant tech platform against equally dominant publishers. Interconnection, by contrast, is a complex ecosystem of networks, each carrying vast quantities of traffic. Misjudging network capacity or fees does not just cost money—it affects every user’s experience. Arbitrators without deep technical understanding risk skewed outcomes that could inadvertently favor one party over another.

Third, and most importantly, copyright is not interconnection. The two operate in fundamentally different spheres. Copyright disputes are about who owns creative content and how that content can be licensed or monetized. They revolve around access to works—music, films, articles, and software—and how creators are compensated for their use. Interconnection, on the other hand, deals with the infrastructure that makes all digital exchange possible.
It governs how networks exchange data—through peering, routing, and transit agreements—and ensures that information can move seamlessly across the global Internet. Treating these two domains as if they were the same is a category error. Applying copyright logic to interconnection would be like using traffic court rules to manage a city’s water supply: both are essential systems, but they follow entirely different principles. Copyright can tolerate negotiation delays and exclusive deals because it concerns discretionary access to creative works. Interconnection cannot. If data flow between networks is interrupted, the consequences ripple instantly—websites become unreachable, services fail, and users lose connectivity. Interconnection is not about ownership or exclusivity; it’s about continuity and interoperability. Its value lies in openness and efficiency, not in controlling or restricting flow. Confusing it with copyright-style disputes risks undermining the very foundation of the Internet’s reliability and universality.

Think of peering agreements as the express lanes of the Internet. They allow traffic to flow directly between networks, avoiding expensive detours and congestion on third-party networks. These arrangements are negotiated voluntarily and flexibly, and continuously adapted to traffic patterns. Introducing mandatory arbitration is like taking those express lanes and turning them into toll roads where every car must stop and pay a regulator-determined fee before continuing. The natural flow is disrupted, costs rise, and innovation slows.

The risks extend far beyond Europe. Policymakers in Brazil and other emerging markets are watching closely. If this model spreads, the result could be a global network choked by bureaucracy, where every disagreement over interconnection—minor or major—must be filtered through opaque dispute resolution channels.
Local traffic that once moved freely could now be delayed, rerouted, or taxed, with consequences for consumers, businesses, and start-ups.

And then…there are IXPs

At the heart of the Internet lie Internet Exchange Points (IXPs)—the physical and operational hubs where networks meet, exchange data, and keep traffic flowing smoothly. They are the interchanges that make the global Internet local, reducing latency, lowering bandwidth costs, and improving performance for everyone from start-ups to streaming platforms. In essence, IXPs are what turn the Internet from a patchwork of private networks into a shared, resilient ecosystem.

Introducing mandatory dispute resolution mechanisms into this delicate balance risks distorting the very incentives that make IXPs work. These exchanges thrive on voluntary, trust-based collaboration—networks choose to peer because it’s mutually beneficial, efficient, and cost-effective. When interconnection becomes subject to formal arbitration or regulatory intervention, that cooperative spirit gives way to caution. Networks may hesitate to enter or maintain peering agreements, fearing legal entanglements or regulatory overreach. Smaller players—local ISPs, community networks, and start-ups—would be hit hardest, as they often lack the legal and financial resources to navigate such frameworks.

The consequences would extend far beyond administrative friction. IXPs rely on dense participation to deliver benefits at scale: the more networks that connect, the better the performance and resilience for everyone. If participation drops, data that once traveled across town may be forced to detour across continents, increasing latency, costs, and vulnerability. The result is a weaker, less efficient Internet—precisely the opposite of what policymakers intend.
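A back-of-envelope calculation shows why that density matters. For n networks to reach each other through bilateral links alone, they need on the order of n(n-1)/2 separate connections; a single exchange point lets the same n networks interconnect with one port each. The sketch below is purely illustrative (the numbers are not drawn from any real IXP):

```python
def full_mesh_links(n):
    """Bilateral links needed for every pair of n networks to peer directly."""
    return n * (n - 1) // 2

def ixp_links(n):
    """Connections needed when the same n networks all meet at one exchange point."""
    return n

for n in (10, 100, 1000):
    print(f"{n} networks: {full_mesh_links(n)} bilateral links vs {ixp_links(n)} IXP ports")
    # 10 -> 45 vs 10; 100 -> 4950 vs 100; 1000 -> 499500 vs 1000
```

The gap grows quadratically, which is why every participant that withdraws from an exchange makes interconnection disproportionately more expensive for those who remain.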
Europe’s proposal may appear to offer fairness or safety through arbitration, but in practice, it risks creating a toll booth on the digital superhighway—a mechanism to extract fees or impose conditions under the guise of dispute settlement. What has always been a fast, adaptive, and self-regulating system could become bureaucratic and sluggish. Free-flowing data would turn into a controlled flow, and innovation—dependent on low barriers and high connectivity—would slow to a crawl.

Countries that have embraced open, competitive interconnection—such as Brazil, Kenya, and the Netherlands—show what’s at stake. Their robust IXPs have driven local content growth, supported digital entrepreneurship, and strengthened national resilience. Undermining those dynamics through heavy-handed dispute systems risks eroding years of progress.

The world is watching. If Europe moves forward and others follow, the invisible highways of the Internet—once open, fast, and decentralized—may soon be clogged with red tape. And unlike a traffic jam on a city road, this gridlock would have global consequences, threatening not just efficiency but the very character of the Internet as a borderless, interconnected commons.

How Multipolar Power Is Dismantling the Open Internet and Replacing It with Competing Digital Sovereignties

The 80th session of the United Nations General Assembly was not just another parade of world leaders. It was a eulogy — for the open, borderless Internet that once promised to knit humanity into a single digital commons. In New York, the world’s major powers — led by President Donald Trump, President Luiz Inácio Lula da Silva, and Premier Li Qiang — did not merely disagree on who should govern the Internet. They converged, in different languages and logics, on a single premise: the open Internet is politically incompatible with the emerging multipolar world.
That convergence, subtle but unmistakable, marked a geopolitical rupture. The ideal of a seamless global network — interoperable, rights-respecting, and governed through multistakeholder consensus — is giving way to a patchwork of sovereign digital domains. What began as technical divergence has hardened into political design. The new Internet is being built not on open code but on competing commands.

This is not simply the “splinternet” analysts warned about two decades ago. It is something more deliberate: a geopolitical veto on the very idea of a shared digital space. At UNGA80, the world’s leading powers effectively agreed that the Internet would henceforth serve nations, not citizens. And in doing so, they may have set the stage for the most profound reordering of the digital age since the network’s creation.

From the Multistakeholder Dream to the Multipolar Reality

The Internet’s founding governance vision — open, meritocratic, and transnational — was one of the last utopian projects of the late twentieth century. Built during the unipolar moment of U.S. hegemony, it reflected a faith that technical coordination could transcend politics. Engineers, civil society, and corporations all shared seats at the table. States, for a time, were guests in a system whose legitimacy derived from consensus rather than coercion. That fragile equilibrium depended on a single assumption: that the world itself would remain sufficiently unified to sustain a shared governance space. The rise of multipolarity has shattered that foundation.

President Lula da Silva captured this new world plainly: the twenty-first century, he declared, “will be increasingly multipolar” and “must be multilateral.” His words were not diplomatic boilerplate. They were a declaration of independence from the Western liberal order that had dominated Internet governance since the 1990s.
For Lula, the Internet must no longer be a “land of lawlessness.” It must be governed — by states, through treaties, and ultimately through the United Nations’ Global Digital Compact. That reference is crucial. By invoking the Global Digital Compact rather than the World Summit on the Information Society (WSIS) — the process that originally defined the Internet as a multistakeholder domain shared among governments, industry, and civil society — Lula effectively affirmed that the Internet has become a terrain of state sovereignty, not of citizens’ rights. WSIS symbolized the belief that governance could be distributed across sectors; the Global Digital Compact reasserts that it should be consolidated under governments. In other words, what began as a commons to be coordinated is now a territory to be administered.

In principle, such calls for multilateral oversight sound like reforms toward fairness. In practice, they introduce incompatibility into the Internet’s very architecture. A network designed for universal standards and voluntary coordination cannot easily accommodate competing legal sovereignties. The technical commons becomes a regulatory battlefield, while the code that once embodied universality now encodes divergence.

Challenging this vision of reformed multilateralism is President Trump’s resolute rejection of globalism. His championing of absolute sovereignty and his scorn for the UN and its frameworks are an attempt by a fading hegemonic power to resist the shift to multipolarity by prioritizing domestic law above all shared governance. The principle that "every sovereign nation must have the right to control their own borders" translates into a digital isolationism that refuses to align its technical or legal frameworks with either UN bodies or competing poles.
This resistance is further entrenched by the United States’ strategic abdication of its international human rights commitments, a withdrawal that removes one of the core normative anchors that once supported the vision of an open internet. Together, these choices ensure fragmentation, as the world’s largest economy refuses to conform, shattering the possibility of a shared internet legal space.

Meanwhile, Premier Li Qiang’s focus on the Global Development Initiative (GDI) provides the blueprint for the third major digital pole. China’s strategy of strengthening the "UN-centered international system" through state-led cooperation and technology transfer is designed to export its own model of digital governance and infrastructure standards. This approach subordinates the internet’s bottom-up coordination—where engineers and innovators create open, meritocratic protocols—to top-down, state-driven mandates, ensuring its infrastructure supports national economic and geopolitical interests.

Three leaders, three philosophies: populist sovereignty, reformist multilateralism, and technocratic state capitalism. But together they articulate a shared reality — that the open Internet cannot survive a world organized around geopolitical blocs.

The New Concert of Networks

To understand this moment, history offers an uncanny parallel. After the Napoleonic Wars, the great powers of Europe established the Concert of Europe — a balance-of-power arrangement designed to prevent any one state from dominating the continent. For four decades, it maintained a fragile peace through negotiation and restraint. But it also suppressed revolution, stifled liberalism, and prioritized order over freedom.

The digital world is entering its own Concert of Powers. The U.S., China, and the emerging Global South — led by Brazil, India, and others — as well as the European Union, are all constructing spheres of digital influence that mirror the strategic geography of the nineteenth century.
Instead of borders drawn on maps, they are traced through data cables, standards committees, and content regulations. The battlefield is the network itself. Each bloc seeks not only to protect its citizens or industries but to shape the underlying architecture of the Internet in its image. The result is a form of digital mercantilism — the weaponization of interconnection for national advantage. Where the nineteenth century had colonies, the twenty-first has digital dependencies.

The “splinternet” is therefore not a technical failure but a strategic design. Russia’s “Runet,” China’s Great Firewall, the European Union’s data localization and Digital Services Act, the U.S. export controls on semiconductors — these are not isolated phenomena. They are the infrastructure of a multipolar Internet, one that mirrors the “New Cold War” without the ideological binaries.

In this system, interoperability is no longer a given but a privilege. Nations decide which packets cross their borders, which algorithms are permissible, which data may be stored abroad. The universal protocols that once defined the network are being replaced by political protocols — authentication systems for sovereignty itself.

Digital Iron Curtains

The Cold War had its Iron Curtain, dividing East from West by ideology and arms. The twenty-first century’s Digital Iron Curtain is subtler but no less consequential. It is not a single wall but a lattice of legal, technical, and commercial enclosures.

China’s model of “cyber sovereignty” demonstrates how digital borders can be both porous and absolute. Within its Great Firewall, domestic platforms like WeChat and Baidu flourish, protected from foreign competition and “surveillance-free” in name only. Outside it, Chinese technology companies expand globally, embedding Beijing’s technical standards into the infrastructure of developing nations.
The Belt and Road Initiative has become, in part, a Digital Silk Road — exporting fiber, satellites, and governance norms together.

Russia’s approach is more defensive, but no less strategic. The Kremlin views the Internet simultaneously as weapon and vulnerability. It conducts asymmetric operations abroad while hardening its domestic network through deep packet inspection and mandatory data localization. Its goal is not isolation but selective interdependence — the ability to disconnect at will while retaining the benefits of global connectivity. Whether Russia can achieve this goal is questionable; its intentions, though, are unmistakable.

Even democratic powers have embraced their own versions of digital sovereignty. The European Union’s regulatory assertiveness — from the GDPR to AI governance frameworks — reflects an effort to reclaim normative leadership. Yet it also contributes to fragmentation. When every bloc defines “trust” in its own legal terms, the shared space for innovation and cross-border rights erodes.

And the United States? Once the Internet’s chief architect and evangelist, it now finds itself both defender and divider. Historically, the United States has been the Internet’s greatest champion, promoting a global, open, and interoperable network governed by multistakeholder institutions like the Internet Corporation for Assigned Names and Numbers (ICANN). Fragmentation, for Washington, has been seen not just as a technical failure but as a geopolitical threat—undermining US values, economic interests, and strategic primacy. Yet America’s recent behavior has complicated that stance. During President Trump’s first term, American policy veered toward techno-nationalism.
The “Clean Network” initiative, executive orders targeting TikTok and Huawei, efforts to onshore tech supply chains, and the broader effort to exclude Chinese platforms from US infrastructure all signaled a shift toward selective decoupling. And today, efforts to force TikTok into American hands suggest a shift in strategy—but the underlying instinct remains: security first, even at the cost of global interoperability. Additionally, the “America First” doctrine continues to cast doubt on whether the US will prioritize global cooperation over domestic advantage. In other words, the U.S. isn’t pushing for fragmentation, but it’s increasingly accommodating it.

The Multipolar Veto

At UNGA80, these competing doctrines did not collide; they converged. For the first time, the great powers agreed — tacitly but unmistakably — that global interdependence must be subordinated to national control. In practice, this is the geopolitical veto: a collective but fragmented assertion of sovereignty that overrides the Internet’s founding principle of universal interoperability. Each power, by insisting on its own digital norms, renders universal ones impossible. The veto operates not through UN resolutions but through standards bodies, trade restrictions, and infrastructure investment.

Lula’s push for a UN-centered Global Digital Compact represents one form of the veto — governance by intergovernmental consensus that excludes non-state actors. Trump’s rejection of multilateral frameworks represents another — governance by unilateral withdrawal. Li Qiang’s advocacy of state-led development is a third — governance through infrastructural capture. Together, they amount to a systemic denial of the multistakeholder ideal that sustained the Internet’s legitimacy for three decades.
The genius of the multistakeholder model was its insistence on consensus and its formal empowerment of civil society to advocate for universal rights—privacy, free expression, and access—against the combined power of states and corporations. In a multipolar world, that model dies because its necessary condition, a shared global consensus on digital governance, has vanished.

The assertive sovereignty preached by President Trump eliminates the necessary foundation for global rights advocacy. By rejecting "globalism," he effectively rejects the international human rights instruments used by civil society to hold domestic power accountable. In the new environment, human rights become negotiable, subordinate to whichever national law is supreme in that pole.

Similarly, the emergence of competing governance poles, as championed by Lula and Li Qiang, effectively silences the independent civil society voice. When Lula advocates for multilateral governance and Li Qiang promotes state-led development, they are pushing governance toward intergovernmental committees. In these bodies, the independent, often critical, voice of civil society is systematically marginalized, replaced by state-appointed delegates or relegated to advisory roles. The focus shifts from protecting the rights of individuals to serving the strategic priorities of the nation-state or geopolitical bloc.

For civil society, the turn toward multipolarity presents an existential crisis: advocates must now navigate multiple, often contradictory, pole-specific standards for human rights, data privacy, and censorship. The collective, global power of digital rights advocates to organize across borders—a power fundamental to the original open internet—is being destroyed by the lack of a common legal and technical ground.

The tragedy is not only that the open internet is dying, but that it is being killed by design.
Multipolar sovereignty has imposed a veto on the very conditions that once made the network global, interoperable, and free. What remains will still be called “the internet,” but it will be a diminished thing—an archipelago of state-controlled stacks, stitched together by commerce and diplomacy rather than by shared protocols and common rights.

The Economic Fallout: Commerce in the Age of Code Wars

For businesses and innovators, this geopolitical veto translates into structural uncertainty. The borderless marketplace that defined the first three decades of the Internet is disintegrating into incompatible regimes. A global platform now confronts multiple Internets — each with its own laws on data, content, encryption, and AI. To operate in China, a firm must accept censorship; in the EU, strict privacy compliance; in the U.S., security vetting; in India or Brazil, data localization. The result is duplication of infrastructure, rising compliance costs, and the erosion of network effects that once made global scale possible.

This is the “Code War” — competition not for territory, but for the standards, architectures, and supply chains that define digital sovereignty. Semiconductors, cloud infrastructure, and even content moderation have become tools of statecraft. Data itself has become a strategic resource — regulated, taxed, and weaponized.

Firms are forced to act like micro-states: negotiating access, forming alliances, hedging against regulatory and geopolitical shocks. The agile survive by adapting to local conditions; the rigid collapse under compliance burdens. Innovation slows, but does not stop — it mutates, localizes, fragments. The dream of a single digital marketplace gives way to a constellation of semi-autonomous economies, bound together by commerce but divided by code.
The Death — and Possible Rebirth — of the Commons

The tragedy of UNGA80 is that the world did not debate whether to preserve the open Internet; in many ways, it declared its obsolescence. The open network, once the most ambitious experiment in transnational governance, has been judged incompatible with the age of sovereign resurgence.

And yet, history suggests that systems built on veto and balance eventually yield to new forms of integration. The Concert of Europe maintained peace for decades but could not suppress the forces of nationalism and industrial transformation. Its collapse gave rise to a more volatile but also more democratic international order. The digital world may follow a similar trajectory.

The current phase of multipolar fragmentation may, paradoxically, sow the seeds of renewal. Civil society, though marginalized, continues to exist in the interstices — in transnational advocacy networks, open-source communities, and decentralized technologies. The infrastructure of interconnection still resists total enclosure. Data, like water, seeks to flow.

The question is whether the next generation can imagine a new kind of digital commons — one resilient enough to coexist with sovereignty yet open enough to preserve universality. That will require not nostalgia for the unipolar past, but innovation in governance itself: federated rights frameworks, interoperable privacy regimes, technical standards designed for pluralism rather than uniformity.

Conclusion: A Concert Without Harmony

The Internet was once a symbol of humanity’s shared destiny. Today it mirrors its divisions. What UNGA80 made explicit is that the age of global digital unity is over, not through accident but through choice. The United States, China, and the Global South each claim to defend the public good, yet all have subordinated the network’s universality to their own strategic visions. The open Internet has not been defeated by its enemies; it has been vetoed by its creators.
We are living through the birth of a Concert of Networks — a digital order held together by balance, not by belief. It promises stability, but at the cost of openness; security, but at the expense of freedom. Whether that equilibrium can endure, or whether it too will collapse under the weight of its contradictions, will define the next era of the digital century.

The Internet has survived pandemics, cyberwars, and revolutions. Its greatest test now is political. The question is no longer whether the open Internet can survive multipolarity — but whether a multipolar world can survive the Internet it is creating. To this end, the challenge for the next generation is whether they accept this fate as irreversible, or whether they can imagine, and fight for, a new kind of digital commons that transcends the veto. The answer will determine not only the future of the network, but the fate of global rights and freedoms in the 21st century.

If you haven’t been following the quiet bureaucratic war over the future of WSIS, here’s the short version: the UN Office for Digital and Emerging Technologies (ODET) is trying to take control of it.
In its submission to the WSIS+20 Zero Draft — available here — ODET lays out what looks, at first glance, like harmless institutional housekeeping. But look closer, and you’ll see something more ambitious: an attempt to quietly centralize power over digital governance inside the UN system, rewrite the WSIS architecture, and turn a bottom-up, decentralized process into a top-down bureaucracy.

From Coordination to Control

The key move in ODET’s submission is its push to take over the coordination of UNGIS — the UN Group on the Information Society — and make it permanent. It calls for a “permanent secretariat” and an “expanded membership.” On paper, that sounds tidy. In practice, it means ODET would control the staff, the agenda, and the information flows that shape UN digital policy. It’s like putting one department in charge of refereeing an entire ecosystem — and then giving it the power to rewrite the rules.

The proposal also suggests integrating the Global Digital Compact (GDC) into the WSIS architecture, positioning ODET as the bridge (and gatekeeper) between the two. That’s not coordination; that’s consolidation. And it comes wrapped in the kind of bland technocratic language that usually hides major power grabs: “efficiency,” “agility,” “avoiding duplication.” Translation: we’ll make things simpler by putting them all under our control.

Killing the WSIS Spirit

For those who remember what WSIS was meant to support — a multistakeholder, bottom-up, inclusive process — ODET’s vision is a betrayal. The WSIS model was built on decentralized governance — the idea that no single institution, not even the UN, should dictate how the digital world is managed. Instead, it should be an open ecosystem of governments, civil society, the private sector, and the technical community working together, loosely coordinated but not controlled. ODET’s approach flips that logic. It replaces the distributed “network” model of governance with a hierarchical chain of command.
Under its proposal, UNGIS becomes the hub, ODET the hub’s operator, and everyone else — from civil society to regional networks — mere “stakeholders” in an architecture they no longer co-own. That’s not coordination. That’s capture.

Empire Building in the Name of “Coherence”

Let’s be honest: ODET’s submission isn’t about the future of Internet governance. It’s about building an empire inside the UN system.
The WSIS process has always been messy, slow, and imperfect. That’s the point. It was designed to resist capture — to ensure that no single actor could dominate the global conversation on the information society. ODET’s “streamlining” threatens to erase that diversity and replace it with UN bureaucracy.

Why This Matters

This isn’t an internal turf war. It’s about who gets to shape the future of the Internet — and how. If ODET’s plan succeeds, Internet governance will become less open, less accountable, and less flexible. Decisions that once required broad consultation could instead become centralized, quiet, and procedural. Stakeholder participation would be reframed as “input,” not “influence.” The Internet’s future would be negotiated through PowerPoint decks rather than public dialogue. We have already seen a preview of this during the Global Digital Compact process, which ODET kept tightly controlled and far less inclusive than the ongoing WSIS+20 review. And once a UN office claims the center of gravity in digital governance, good luck taking it back. Bureaucratic empires rarely shrink; they expand.

Reclaiming the WSIS Legacy

The Internet governance community — from governments to NGOs to technical experts — must push back clearly and publicly. Here’s what that means:
If we care about keeping Internet governance open, we can’t sleepwalk through this. ODET’s submission might sound like administrative housekeeping — but it’s more like a hostile takeover. WSIS was never supposed to be owned. It was meant to be messy, participatory, and alive. If ODET gets its way, that spirit will die quietly in the footnotes of a UN report.

Once, the Internet promised to dissolve borders. It was humanity’s great experiment in connection — a place where ideas, innovation, and communities could move freely, unburdened by geography or ideology. But that promise is fading. The open web is being fenced off, piece by piece, as governments redraw the digital map in their own image. What began as a global commons is being carved into territories of control.
“Digital sovereignty” has become the new political currency — invoked everywhere from Brussels to Beijing, Washington to Delhi. The term sounds empowering, even patriotic. Every nation has a right to protect its citizens’ data, defend its networks, and guard against foreign interference. But what’s happening in practice isn’t empowerment — it’s enclosure. The language of sovereignty is being used not to secure openness, but to justify its opposite. This is sovereignty turned inward: a shift from protection to possession, from self-determination to self-isolation. Instead of building the capacity to participate confidently in a connected world, states are seeking to wall themselves off from it — locking data within borders, dictating what technologies can be used, and deciding which platforms their citizens can access. The result is an Internet that looks less like a shared commons and more like a digital archipelago — a chain of walled gardens, each ruled by its own laws, standards, and surveillance systems. We are watching the Internet fragment before our eyes. And make no mistake: that is not sovereignty. It is the slow-motion disintegration of the very idea that made the Internet revolutionary in the first place — that connection, not control, is what gives us power.

The Wrong Kind of Sovereignty

China’s model of cyber sovereignty—rooted in state control, censorship, and domestically contained platforms—was once dismissed as a uniquely authoritarian project. It isn’t anymore. What’s striking today is how its core logic has seeped across borders, reappearing in new forms and new rhetoric. The same impulses—control, containment, and consolidation—are now visible in democracies that once championed the open Internet. Every major power has its own version of digital sovereignty, and yet none of them add value to the open, global network that made the Internet transformative. Instead, each one chips away at it.
China’s model is the most coherent—and therefore the most dangerous. It has given the state total command over its domestic digital ecosystem while exporting its technologies, norms, and infrastructure abroad. Through initiatives like the Digital Silk Road, China isn’t just wiring the developing world; it is wiring it in its own image. For smaller economies, especially in the Global South, Chinese-style digital sovereignty is increasingly sold as a model of modernity: efficient, secure, self-contained. But it comes at a cost—trading openness for obedience, innovation for surveillance, and interdependence for dependency. Europe’s approach, meanwhile, is paralyzed by contradiction. Its ambition to build “digital sovereignty” reflects legitimate frustrations: decades of dependency on American tech giants, deep unease over data exploitation, and a desire to restore public control. Yet the European project has become more regulatory than visionary. It protects, but it does not propel. By walling off data, fragmenting digital markets, and privileging “strategic autonomy” over interoperability, Europe risks creating an Internet defined by caution rather than creativity. Its idea of sovereignty is defensive, not generative—a bureaucratic bulwark rather than a digital engine. And as Europe continues to debate its definitions, China has already operationalized its own. Beijing’s clarity gives it power; Brussels’ confusion leaves it perpetually behind. The United States, once the self-appointed steward of the open Internet, now finds itself at war with its own legacy. Caught between Silicon Valley libertarianism and Washington’s new techno-nationalism, America can’t decide what sovereignty means anymore. Is it about protecting innovation from government interference—or protecting government interests from foreign technology? 
The result is a kind of strategic schizophrenia: a country that preaches openness abroad while fragmenting it at home through export controls, data restrictions, and internal political dysfunction. What all these models share is a failure to strengthen the open Internet or the global economies that depend on it. They add layers of bureaucracy and suspicion, but no new connective tissue. They privilege control over collaboration, and in doing so, they erode the very principles that made the digital age an engine of growth and democracy. Worse, these models are being exported. The Global South is now the new frontier for competing visions of sovereignty—each sold as a path to digital independence but delivering the opposite. European-style compliance regimes are too costly and complex for emerging economies to implement, locking them out of global markets. The Chinese model offers instant infrastructure and turnkey governance—but at the price of surveillance and dependence. The American model, with its fractured politics and corporate dominance, offers little coherence at all. In this global race to define digital sovereignty, only one player is winning: China. It has moved beyond rhetoric to implementation, embedding its rules, hardware, and governance logic across continents. Europe is still trying to draft its definitions. America is still arguing with itself. And in the vacuum between them, Beijing is setting the terms of the 21st-century Internet. Sovereignty, once a promise of empowerment, is becoming a vehicle for enclosure. Nations may believe they are asserting control, but in reality, they are narrowing their own horizons—sacrificing the openness, interoperability, and trust that made the digital world a space of possibility. Sovereignty built on isolation doesn’t make the Internet stronger; it makes everyone weaker. 
True digital sovereignty is less about ownership and more about agency: who can shape, influence, and act within the system—and whether the system is structured to make that agency real.

The False Comfort of Borders

There is something instinctively reassuring about borders. They promise safety, predictability, control. In the digital realm, this instinct has been reawakened: the belief that by containing data, restricting technologies, or building “national” platforms, a country can protect itself from the chaos of the open Internet. It’s a seductive illusion — and a profoundly dangerous one. Closed systems have never worked. Not in biology, not in economics, not in technology. A coral reef cut off from the ocean dies; a market sealed from trade stagnates; a network designed to isolate will, inevitably, atrophy. The same laws apply online. The more tightly a system tries to contain itself, the more brittle it becomes. When states attempt to “sovereignize” the Internet by enclosing it within national borders, they are not strengthening it — they are suffocating it. Information, innovation, and opportunity depend on flow: of data, of talent, of ideas. The moment that flow is interrupted, vitality drains away. What remains are echo chambers of control — self-referential, self-affirming, and ultimately self-defeating. History offers countless warnings. The Ming Dynasty’s maritime ban, intended to preserve the empire’s supremacy by shutting out foreign influence, instead strangled China’s own innovation and maritime prowess. The Soviet Union’s scientific isolationism kept information secure — and progress decades behind. Europe’s pre-Enlightenment mercantilism, obsessed with hoarding resources and protecting national markets, stifled creativity until it was swept away by the Industrial Revolution. Each of these systems promised strength through control, and each ended in decline through stagnation. The same pattern now plays out online.
China’s “Great Firewall” is admired by other governments as a triumph of digital control. But it survives only through constant intervention — a fragile equilibrium of filters, algorithms, and propaganda that must be endlessly maintained to preserve a manufactured reality. It produces compliance, not creativity; predictability, not progress. In ecosystems and in economies alike, monocultures are efficient until they collapse. Europe’s bureaucratic closure achieves a different but equally sterile outcome. By surrounding innovation with layers of compliance and national preference, it builds a digital Maginot Line — impressive on paper, irrelevant in practice. Regulation becomes ritual; sovereignty becomes paperwork. And in the United States, techno-nationalism and internal fragmentation have created digital walls of a different kind — invisible, but no less confining. The country that once championed openness now exports anxiety: the fear of dependency, the fear of competition, the fear of losing control. The irony is that every wall built in the name of sovereignty ends up producing dependence of another kind. Isolated systems need constant reinforcement — subsidies, surveillance, censorship, control. They become trapped in feedback loops of their own design. And once isolation becomes policy, escape becomes politically impossible. The lesson, from history and from the network itself, is simple: systems that close themselves off may survive, but they do not evolve. Openness, by contrast, is messy but adaptive. It learns by exposure, not by isolation. Like an immune system, it grows stronger through interaction. Borders may comfort politicians, but they do not protect societies. The only real protection in the digital age is resilience, and resilience depends on openness.

A New Definition of Digital Sovereignty

If closing borders weakens the digital world, then sovereignty must be reimagined not as exclusion, but as engagement.
The task is not to reject sovereignty altogether, but to redefine it for an age in which interdependence is the default condition. True digital sovereignty is not the power to isolate; it is the capacity to participate. It is the confidence to act freely and collaboratively in a networked world without fear of domination, coercion, or dependency. This new vision of sovereignty begins with a simple premise: openness is not the enemy of control — it is its foundation. Systems that are open, transparent, and participatory are not weaker; they are stronger. They are more resilient, more adaptable, and more legitimate. Across history, openness has consistently proven its transformative power. The Renaissance flourished because knowledge, art, and ideas circulated across borders; the post-war Marshall Plan succeeded by linking European economies into cooperative networks; the modern Internet itself scaled because protocols and standards were shared freely. Open systems accelerate innovation, grow markets, and empower individuals, whereas closed systems produce stagnation, inefficiency, and inequality. Open digital sovereignty is relational, not territorial. Its strength does not come from the ability to wall others out, but from the ability to connect responsibly. No state or corporation can govern a system as vast and interdependent as the Internet alone. Power must therefore be exercised through collaboration, not coercion — through rules, architectures, and frameworks that enable shared agency rather than exclusive control. In practice, open sovereignty has distinct design principles. It is interoperable by default: systems, standards, and platforms must communicate across borders rather than fracture along them. Open protocols, portable data formats, and transparent APIs prevent dependence on any single vendor or jurisdiction, ensuring that the flow of information remains global and equitable. It is transparent in governance. 
Decisions about digital infrastructure, data use, and standards must be made openly and be subject to scrutiny. No algorithmic system, no international digital policy, and no corporate governance model should operate in the dark. Visibility enables legitimacy: citizens, technologists, and governments alike must be able to see, influence, and hold accountable the systems that shape their lives. Transparency is not a bureaucratic ideal—it is the safeguard of rights, the bedrock of trust, and the guarantee that digital sovereignty aligns with the international human rights framework. Closed models, by contrast, concentrate power, restrict expression, and weaken privacy protections. Open sovereignty is modular and adaptive, allowing nations to pursue legitimate interests—security, privacy, cultural identity—without severing themselves from the global network. Sovereignty in this model is dynamic stewardship, not static ownership. It evolves as technologies advance, threats change, and social expectations develop. A nation can protect its citizens while remaining an active participant in the global digital ecosystem. It is distributed in enforcement. Rather than a top-down hierarchy, open sovereignty depends on overlapping circles of accountability: regional frameworks aligned on principles, global mechanisms mediating disputes, and local actors ensuring policies are grounded in context. This does not eliminate sovereignty; it decentralizes it. Power is not surrendered, but shared. The architecture of open sovereignty mirrors the architecture of the Internet itself: decentralized, interoperable, and resilient. Its infrastructure favors open-source technologies and shared standards over proprietary lock-in; distributed infrastructure over centralized chokepoints; and data governance models that empower individuals and communities rather than concentrating power in states or corporations. 
In this design, control is not exercised through isolation, but through transparency, collaboration, and trust. Most importantly, open digital sovereignty is human-centered. It locates sovereignty not in bureaucracies, servers, or code, but in the autonomy, creativity, and collective power of people. Individuals are not passive subjects of policy—they are active participants in shaping networks. Policies and technologies must uphold these human capabilities rather than subordinate them to national security or corporate interests. Openness is thus inseparable from human rights: freedom of expression, access to information, privacy, and participatory governance are strengthened when networks remain open, accountable, and inclusive. By contrast, the closed systems currently associated with “digital sovereignty” undermine these rights, concentrating power in ways that limit choice, stifle dissent, and surveil citizens. Open sovereignty is not just a technical framework—it is a political ethic. It replaces the old logic of control with a new logic of connection: a belief that agency in a networked world is sustained not by fear, walls, or coercion, but by cooperation and shared responsibility. Historical evidence is clear: societies that embraced openness grew stronger and more prosperous, while those that closed themselves off stagnated. The Internet’s founding design already gave us the blueprint: a network of networks, resilient because it is shared, creative because it is open. To preserve that spirit, we must reclaim sovereignty not as a tool of separation, but as a practice of collective freedom. Only then can the digital world be both sovereign and shared—governed not by the power to close, but by the courage to connect. Openness is strategic power. It is the key to resilient economies, empowered societies, and rights-respecting governance. If democracies truly wish to preserve digital sovereignty, they must lead by example. 
They must show that engagement, transparency, and interoperability are not vulnerabilities, but sources of strength. In a world where China has already defined sovereignty in terms of control, Europe struggles to articulate a coherent model, and the United States battles internal fragmentation, demonstrating the value of openness is not merely idealistic—it is urgent. The future of the Internet—and the freedom, prosperity, and rights it enables—depends on this choice: will sovereignty be defined by walls, control, and fear, or by connection, cooperation, and collective empowerment? The answer will shape not just networks, but the moral and economic foundations of the digital age.

The Choice Before Us

We are standing at a crossroads. One path leads to a fragmented digital order, defined by fear, suspicion, and walls—virtual fortresses that isolate economies, stifle innovation, and undermine human rights. The other path leads to an open, interoperable Internet where sovereignty is not about exclusion, but participation; not about control, but collaboration; not about possession, but stewardship. We should not be nostalgic for the early days of the Internet—but we should remember its founding insight: connection, not division, is what gives us strength. Societies that embrace openness grow richer, more resilient, and more creative. Markets flourish when ideas and technologies flow freely. Citizens thrive when networks empower rights rather than restrict them. The historical record is unambiguous: isolation and control produce stagnation, while openness fuels progress—from the Renaissance and the spread of scientific knowledge to the explosion of innovation driven by open-source software and shared Internet protocols. Defining digital sovereignty through openness is not a naïve ideal—it is a pragmatic necessity. Policymakers must think beyond the illusion of control. True sovereignty is exercised through interoperability, transparency, and collaboration.
It is adaptive, human-centered, and resilient. It ensures that nations can pursue legitimate interests—security, privacy, and cultural preservation—without fragmenting the global network or undermining rights. The opportunity is clear: governments, industry, and civil society can lead together. They can design standards that are open by default, build platforms that cross borders, and establish governance mechanisms that reinforce trust rather than fear. They can demonstrate that sovereignty is not about walls, but about agency—about the ability to act confidently in a connected world while protecting people, economies, and values. The choice before us is urgent. Will we define sovereignty as division, or as connection? Will we protect our digital future by retreating behind virtual borders, or by shaping it together through openness and collaboration? The answer will determine not only the architecture of the Internet, but the freedoms, rights, and opportunities of generations to come.

Last week, I was in Brazil and had the chance to engage directly with CGI.br (the Brazilian Internet Steering Committee) and the broad community of stakeholders it brings together. The energy, openness, and shared sense of purpose that animate this ecosystem are rare anywhere in the world. Yet, in conversations with policymakers, technologists, and civil society leaders, I also heard a growing concern—that the very model that has made Brazil’s Internet a global example of participatory governance is now under threat. Those discussions prompted this reflection.
A Visionary Beginning

In 1995, Brazil did something truly visionary. At a time when the Internet was little more than a curiosity in most of the world, the Brazilian government created the Comitê Gestor da Internet no Brasil (the Brazilian Internet Steering Committee, CGI.br). This was not a symbolic committee or a token advisory group. It was an audacious experiment in governance: government, private sector, academia, and civil society, each given equal voice, entrusted to guide the growth and stewardship of the Internet in Brazil. This decision was revolutionary. Across the globe, most countries approached the Internet either as a technical tool to be managed by bureaucrats or as a commercial product to be monetized by corporations. Brazil chose a different path: a multistakeholder model, in which the network would be governed collaboratively, with transparency and accountability at its core. This framework laid the foundation for one of the most vibrant and resilient Internet ecosystems outside the United States. In 2005, one of CGI.br’s most consequential decisions was to entrust its operational arm, the Network Information Center (NIC.br), with implementing its policies and managing Brazil’s core Internet infrastructure. Since then, NIC.br has become an essential pillar of Brazil’s digital society. Currently, NIC.br oversees several core services, including:
These are not abstract functions—they are the invisible scaffolding that keeps Brazil’s Internet robust, secure, and accessible. For thirty years, CGI.br has not only maintained technical excellence but has nurtured a culture of openness, trust, and collaboration. Policy decisions are debated and negotiated across sectors. Civil society groups have meaningful input. Academia contributes research, while industry provides operational expertise. The result is an Internet that is not a government monopoly, nor a playground for telecom giants or for big technology companies, but a commons—a shared space built with foresight and care.

The Creeping Takeover

And yet, all of this is now under threat. Over the past two years, Brazil’s telecommunications regulator, Anatel, has made moves that amount to a stealth takeover of the Internet governance space. The pretext is modernization, simplification, and efficiency. But the pattern is unmistakable: Anatel is seeking to collapse the longstanding legal distinction between telecommunications and Internet services, codified in the celebrated Norma 4 of 1995. In April 2025, Anatel formally repealed that separation, equating Internet access with telecommunications services. By July, a bill was introduced to expand Anatel’s authority over Internet infrastructure services—including IXPs, DNS, and cloud providers, most of which are currently stewarded by CGI.br. By September, the government issued a decree that purported merely to clarify responsibilities between the two bodies. The trajectory is clear: a regulator designed to oversee telcos is attempting to assert authority over the Internet itself. The justifications are always couched in technocratic language: regulatory simplification, fighting piracy, strengthening cybersecurity. Yet anyone familiar with Anatel’s recent actions sees the pattern for what it is: a political power grab.
In a world where traditional telecom regulation is losing economic weight and prestige, Anatel is chasing relevance—without regard for the decades of multistakeholder governance that have made Brazil a model for the world.

A Quiet Revolution That Worked

To understand what’s at stake, it’s worth recalling what CGI.br and NIC.br have achieved. When a Brazilian entrepreneur registers a domain name ending in .br, it’s Registro.br—one of NIC.br’s four key arms—that ensures the system works smoothly and securely. Today, Brazil’s .br is one of the most stable and trusted country-code domains in the world, with over five million active registrations—more than any other in Latin America. Its operations are fully transparent and self-financed, avoiding the political capture that plagues many national domain systems. Then there’s IX.br, which runs one of the largest Internet exchange point (IXP) systems on the planet. In São Paulo alone, IX.br handles over 20 terabits per second of traffic, making it one of the world’s top five exchange hubs. That invisible infrastructure keeps Brazilian traffic local—reducing latency, cutting costs, and keeping data within national borders without resorting to censorship or surveillance. In 2022, when a submarine cable failure disrupted international routes, IX.br’s distributed architecture kept domestic services online. Few citizens noticed—but engineers around the world did. Meanwhile, CERT.br (Brazil’s Computer Emergency Response Team) has quietly become a global model for cybersecurity coordination. It was among the first in the Global South to institutionalize a trusted, neutral mechanism for handling cyber incidents. During the massive ransomware waves of 2017–2018, CERT.br’s collaborative model—linking telecom operators, banks, and civil society—kept Brazil’s networks resilient. Because it was not an arm of the government, companies and NGOs shared information openly, knowing it wouldn’t be weaponized for political purposes.
Trust, not coercion, proved to be the real cybersecurity asset. And then there’s CETIC.br, the research center that produces some of the most comprehensive, publicly accessible data on Internet use and digital inclusion anywhere in the world. Every year, it publishes detailed national surveys on topics ranging from broadband access in rural schools to digital literacy and e-commerce. Policymakers rely on it. So do international organizations: UNESCO, the OECD, and the UN’s Internet Governance Forum (IGF) all cite CETIC.br’s datasets as models for evidence-based digital policy. Each of these institutions is technical in function but political in spirit. Together, they embody a philosophy that has defined Brazil’s approach to the Internet: that no single actor—especially not the state—should monopolize the rules of the digital game.

Anatel’s Power Grab

That principle is now under siege. In April 2025, Anatel revoked Norma 4, a foundational 1995 regulation that had clearly separated Internet services from traditional telecommunications. The change, little noticed by the public, effectively collapsed that distinction—giving Anatel a legal pretext to treat Internet infrastructure as just another branch of telecom. The agency’s ambitions have since expanded. Anatel has floated plans to extend its oversight to Internet exchange points (IXPs), the domain name system (DNS), and even cloud service providers—areas that have long fallen under CGI.br’s purview. Its argument is couched in bureaucratic terms: efficiency, modernization, coherence. But beneath the surface lies a shift in power from the many to the few. Anatel’s logic is steeped in the command-and-control mindset of the telecom era. It sees networks as assets to be licensed, taxed, and policed. But the Internet does not obey the same physics. Its vitality lies in decentralization, interoperability, and permissionless innovation.
Handing its stewardship to a top-down regulator would be like asking a customs officer to curate culture or an accountant to run a rainforest. There are warning signs already. The so-called “network fees” proposal, which would have charged large content providers for delivering traffic to end-users, was a barely disguised attempt to turn the Internet into a toll road. Earlier in 2025, Anatel signed a cooperation agreement with Ancine to fight audiovisual piracy, giving Ancine the authority to order ISP blocks. More recently, it began drafting regulations for data centers—without public consultation, and with language so vague it could extend to cloud and hosting providers. Each of these actions chips away at the participatory ethos CGI.br has built. And once that ethos is gone, it won’t come back.

Brazil’s Global Standing

What makes this moment especially tragic is that Brazil has long been a global beacon of inclusive Internet governance. At the 2014 NETmundial conference in São Paulo—a response to Edward Snowden’s revelations about U.S. surveillance—Brazil hosted the world’s first truly global multistakeholder summit. Governments, companies, activists, and engineers drafted the NETmundial Principles, a landmark declaration affirming that Internet governance must be open, participatory, and rights-based. The event cemented Brazil’s reputation as a bridge between the Global North and South in digital policy. Those principles have since shaped discussions at the United Nations, the OECD, and the IGF. If Brazil lets CGI.br be sidelined now, it sends a dangerous message: that even the most successful experiments in digital democracy can be undone by bureaucratic ambition. It would hand authoritarian states a convenient talking point—proof that multistakeholderism is too fragile, too messy, too idealistic for the real world.
What Brasília Must Do

The Brazilian government faces a simple but profound choice: defend the model that has made it a global digital leader, or watch that legacy be erased. Defending CGI.br does not mean freezing it in time. The Internet has changed, and governance must evolve. But evolution must not mean erasure. Brasília can take three concrete steps:
These are not merely administrative choices. They are declarations of values—of what kind of Internet Brazil believes in.

What’s at Stake

Most Brazilians never think about IX.br when Netflix loads instantly or about CERT.br when a phishing campaign is thwarted. They don’t see CETIC.br’s data quietly informing school connectivity policies or Registro.br’s systems keeping e-commerce safe. That invisibility is CGI.br’s greatest triumph—and its greatest vulnerability. If Anatel’s encroachment succeeds, decision-making will shift from open debate to bureaucratic decree. The Internet will be redefined not as a commons but as a utility—regulated for compliance, not for creativity. Civil society will lose its seat; academia will lose its voice; innovation will slow. The world is watching closely. In China, “cyber sovereignty” has become an instrument of censorship and control. In Russia, the “sovereign Internet” law seeks to allow the Kremlin to cut off the country from the global web at will. Even in democracies, regulators are tightening their grip in the name of safety or efficiency. Against that backdrop, Brazil has stood for something different: the conviction that democracy can scale, that pluralism can be engineered into the very architecture of the Internet. Losing that would not just be a national tragedy—it would be a global one.

Conclusion

Thirty years ago, Brazil imagined a different future for the Internet—and made it real. CGI.br became a beacon of what shared stewardship could look like in practice: efficient, participatory, resilient. Today, that beacon flickers under the shadow of bureaucratic ambition. The question for Brazil is not whether it can modernize Internet governance. It’s whether it can do so without forgetting what made it special in the first place. In a world increasingly defined by digital authoritarianism, defending CGI.br is not nostalgia—it’s necessity. Brazil’s democracy, and the open Internet itself, may well depend on it.
Lately, all I hear is talk about data governance, as if the act of discussing it will automatically create clarity, rather than forcing us to confront the hard choices about how data flows, who controls it, and what it enables. One thing is clear: everyone talks about it, but almost no one is actually trying to define what it really means. In workshops, boardrooms, and conferences, the term is thrown around as if everyone agrees on its meaning — but they don’t. Instead, it has been elevated into a political issue, a high-level debate about sovereignty, regulation, and control. This politicisation inevitably clouds discussions and risks subverting some of the most important concepts about data itself.
And yet, we cannot afford to ignore the stakes. Data is not merely an operational input or a corporate asset; it is the very lifeblood of the Internet. Every interaction online—every search, click, like, or transaction—creates a trail of information that fuels platforms, drives innovation, and sustains ecosystems. Without continuous flows of data, the Internet as we know it would grind to a halt: social networks could not connect billions of users, e-commerce platforms could not optimize supply chains, and content recommendation engines could not personalize experiences. In short, the Internet is alive only because data circulates through it. Artificial intelligence exemplifies this dependence most clearly. AI does not exist in isolation; it is a reflection of the data it consumes. A language model, a recommendation engine, or a predictive maintenance system is only as intelligent as the underlying datasets that train it. Sparse, biased, or low-quality data produces poor outcomes, while abundant, diverse, and well-curated data unlocks potential. Consider large language models like GPT or image generation systems like DALL·E: their performance, nuance, and usefulness scale directly with the volume, diversity, and quality of the data ingested. These models rely on massive datasets containing text, images, and other structured or unstructured information to learn patterns, correlations, and semantic relationships. Every insight, every automation, every predictive signal is inseparable from the underlying data that fuels it. For example, in natural language processing, a model’s ability to generate coherent and contextually appropriate responses depends not just on the quantity of text it sees, but on the richness of linguistic structures, idioms, and domain-specific knowledge embedded in that data. Similarly, image generation models learn to capture style, composition, and context by analyzing millions of examples. 
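The claim that a model simply mirrors its training data can be made concrete with a deliberately trivial sketch (an invented illustration, nowhere near a real language model): a bigram predictor whose only "knowledge" is the co-occurrence counts in whatever text it is fed. The corpora and function names below are made up for the example.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str):
    """Count, for each word, which words follow it in the corpus."""
    model = defaultdict(Counter)
    tokens = corpus.lower().split()
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word: str):
    """Return the most frequent successor of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Two tiny, deliberately different "training sets" (invented examples).
finance_model = train_bigrams("data drives markets and data drives trading")
health_model = train_bigrams("data drives diagnosis and data drives research")

print(predict_next(finance_model, "drives"))  # prediction differs by corpus
print(predict_next(health_model, "drives"))
```

The same code trained on different text produces different predictions, and a word absent from the training data yields nothing at all — a toy version of the point above: the model is the data, operationalized.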
Sparse, biased, or low-quality datasets produce outputs that are inaccurate, skewed, or unreliable. AI is not merely “software”; it is data operationalized. The algorithms themselves define how patterns are extracted, but without high-quality data, even the most sophisticated architecture fails. Preprocessing, normalization, and annotation further shape the model’s capabilities, ensuring that the raw data is structured and labeled in a way that the model can learn from. Data determines everything from generalization and robustness to fairness and ethical alignment. In essence, the model is a reflection of the data it consumes — without it, AI is inert code; with it, AI becomes actionable intelligence capable of transforming industries.

Data is also the foundation of modern economic value, but it has become highly commodified, turning personal behaviors, interactions, and content into tradable assets. Platforms like Google, Amazon, and TikTok generate billions not simply because of their technology, but because of the continuous extraction and monetization of this data. Every recommendation, ad placement, or search ranking relies on patterns discovered in historical data — patterns derived from real people’s lives. Governments’ drive to govern data is understandable, given its immense economic and strategic value, but there is a risk that this focus on control reduces complex social and ethical questions to questions of ownership and access. Treating data merely as an economic asset can obscure the broader consequences: who benefits, who is surveilled, and whether restricting or centralising data flows may hinder innovation and the public good.

At the same time, the social and ethical stakes are enormous. AI systems trained on biased, incomplete, or unrepresentative datasets can unintentionally amplify inequality, reinforce stereotypes, or spread misinformation.
For instance, facial recognition models have historically misidentified people of color at higher rates due to underrepresentation in training data, while predictive policing algorithms have disproportionately targeted marginalised communities by relying on biased historical crime records. Limiting access to data too aggressively, however, can be equally harmful. In healthcare, restrictive interpretations of privacy laws may prevent researchers from accessing enough patient data to train models capable of detecting rare diseases or predicting epidemics. In climate modeling, the inability to integrate comprehensive environmental datasets can reduce the accuracy of predictions critical for policy and disaster response. Even in industrial applications, overly constrained datasets can stifle innovation in AI-driven logistics, manufacturing, and energy efficiency. Proper governance must therefore strike a delicate balance. It is not enough to protect privacy or enforce sovereignty — governance must ensure that data remains accessible in controlled, ethical ways that maintain public trust while fueling innovation. This involves technical safeguards such as differential privacy, federated learning, and secure multi-party computation, alongside clear legal frameworks and transparent policies. Without this balance, society risks both ethical harm from biased or misused AI and stagnation in areas where data-driven solutions could deliver enormous social value.

Beyond the “New Oil” Myth

For years, the cliché “data is the new oil” has dominated discussions. At first glance, it seemed to explain why data was so valuable. But the comparison is outdated and misleading. Oil is finite, consumed when used, and traded as a commodity. Data is infinite, copied and recombined endlessly, and its value depends on context, quality, and consent. Clinging to the oil metaphor reinforces a false premise: that data is simply a resource to be mined.
In reality, it is relational, networked, and generative. Misunderstanding this leads to governance debates that miss the point: it’s not only about restricting or controlling data, but about understanding the systems it powers — the Internet itself and the AI systems increasingly shaping economies, societies, and daily life. The geopolitical dimension of data governance is easy to see. In India, for example, a 2018 directive required all payments data to be stored within the country. On paper, this protects privacy. In practice, it strengthens domestic control and shifts economic power. Similarly, China’s Cybersecurity Law and Data Security Law enforce local storage and strict oversight, creating barriers for foreign companies operating in the market. Tesla, for instance, had to build local data centers in China to store car data domestically — a vivid reminder that data has become a national strategic asset. The European Union takes a different approach, emphasizing privacy and individual rights. GDPR is already a model for privacy regulation globally, and initiatives like Gaia-X aim to create a more sovereign cloud ecosystem. But even here, uncertainty abounds: will this lead to transparency and interoperability, or will it fragment the global digital economy further? For companies operating across borders, compliance is not a checklist; it’s a moving target. A practice acceptable in Berlin may be illegal in Bangalore. Rules shift, interpretations change, and geopolitical pressures amplify the ambiguity.

The Vagueness Problem

Even without politics, data governance suffers from a lack of clarity. Terms like “personal data” or “sensitive data” vary across regions, creating practical challenges. GDPR defines personal data expansively, while U.S. states have different thresholds. Some Asian jurisdictions exclude work-related emails. Sensitive data might include health records in one region, geolocation in another, or political opinions somewhere else entirely.
This vagueness is more than an academic problem. Apple’s delayed rollout of a child-protection scanning feature in 2021 highlighted the tension between privacy, regulatory interpretation, and public opinion. The company had intended to scan photos locally for illegal content, but privacy advocates argued the method blurred lines between personal and sensitive data. Apple paused the rollout — a clear illustration that vague definitions create uncertainty and slow progress.

Emerging Technologies, Emerging Unknowns

AI and other emerging technologies add complexity. Large language models, image generators, and other AI systems are data-hungry. Every model depends on vast datasets to learn, adapt, and create value. Yet, ownership, consent, and copyright remain unsettled. Stability AI and OpenAI have faced lawsuits over training data; the courts are still catching up. Without data, AI cannot exist, but unrestricted data extraction risks ethical and legal violations. Quantum computing looms on the horizon, threatening encryption standards that underpin data security today. Blockchain introduces a paradox: immutability versus privacy laws like Europe’s “right to be forgotten.” In each case, technology moves faster than regulation, and the gap creates uncertainty for those attempting to govern data responsibly.

It would be tempting to imagine a global data governance framework, but competing priorities make this unlikely. The U.S. emphasizes innovation and market growth. The EU emphasizes privacy and human rights. China emphasizes state control. India seeks a balance between growth and sovereignty. These differing philosophies make consensus improbable. Companies are improvising. TikTok, for instance, launched “Project Texas” to store U.S. user data domestically, hoping to satisfy regulators while maintaining global operations. Such solutions are temporary and reactive, highlighting the absence of a universally agreed framework for governing data responsibly.
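One of the technical safeguards mentioned earlier, differential privacy, can be sketched in a few lines. This is a minimal, illustrative implementation of a single differentially private counting query — the `dp_count` helper and the `ages` data are invented for the example, and real deployments must also track a privacy budget across many queries.

```python
import random

def dp_count(records, predicate, epsilon: float, rng: random.Random) -> float:
    """Differentially private counting query (illustrative sketch).

    A counting query has sensitivity 1 (one person's record changes the
    count by at most 1), so adding Laplace noise with scale 1/epsilon
    gives epsilon-differential privacy for this single query.
    """
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon
    # A Laplace(scale) sample is the difference of two exponentials.
    noise = rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
    return true_count + noise

# Hypothetical dataset: ages of eight survey respondents.
ages = [34, 29, 41, 52, 47, 38, 61, 25]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5, rng=random.Random(7))
print(round(noisy, 2))  # the true count is 4; the published value is perturbed
```

The design intuition is simple: because any one person's record can change the count by at most one, noise calibrated to 1/ε is enough to mask any individual's presence while keeping aggregate answers useful — exactly the "controlled, ethical access" trade-off described above.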
Living with the Grey

Here’s the reality I keep coming back to: we cannot wait for perfect clarity. Data governance will remain ambiguous as law, technology, and politics evolve at different speeds. Organizations that thrive will embrace adaptability, embed privacy and ethics into system design, and treat governance as a living practice rather than a compliance checkbox. At the same time, we need to move beyond abstract discussions. Though we cannot aim for an absolute definition of data governance, we can start by setting clear parameters and asking the right questions: What kinds of data use are acceptable? How do we balance access, innovation, and privacy? How can we prevent bias, exploitation, and fragmentation without stifling technological evolution? Our goal should not be a single, universally agreed definition — that is impossible — but a practical framework that addresses the legitimate concerns of data extraction and misuse while allowing systems, including AI and the Internet itself, to continue evolving safely and fairly. We must also remember the bigger picture: data is the engine that powers the Internet and AI. Limiting it without understanding the consequences can harm innovation, connectivity, and society. Allowing unrestricted extraction without oversight risks exploitation and loss of trust. The balance is delicate, and achieving it requires more than political posturing; it requires careful thinking, experimentation, and humility. Data governance is no longer a back-office concern. It is a front-page, global challenge, touching politics, technology, law, and ethics. It is both vital and vague. The unknowns are multiplying: AI depends on data to function, quantum computing threatens security, and governments assert sovereignty over digital assets. Yet in this uncertainty lies opportunity.
Organizations that accept complexity, act transparently, and embed ethics into governance practices will not just survive — they will help shape the future of the digital world. I’ve heard countless conversations about data governance, but until we stop talking past each other and start defining the parameters, asking the right questions, and building practical frameworks, we risk letting political agendas and vague laws subvert the very systems and innovations we aim to protect. Data governance is about power, trust, and the future of the Internet and AI. It’s too important to leave undefined.

On September 23, 2025, Europe’s major telecommunications companies gathered in Brussels to make their case: AI is coming, data traffic is exploding, and Europe’s digital sovereignty is at risk unless policymakers let telcos charge more, consolidate further, and secure a bigger role in Europe’s AI future.
The pitch sounds familiar. Every few years, new technology becomes the reason telcos argue for higher fees or regulatory favors. This time, it’s AI. The problem? Consumers and Europe’s digital market risk paying the price, while the benefits remain unclear. Let’s break down what telcos said—and what it really means.

1. “Connectivity is a strategic asset”

What telcos say: Europe’s competitiveness and security depend on strong connectivity.

Reality check: Yes, networks matter. But framing connectivity as a “strategic asset” shifts the conversation toward urgency and national security, which makes it politically harder for regulators to reject demands for new fees or special treatment. It’s a way of saying, “Europe can’t succeed in AI without us—so give us more money.”

Consumer impact: A truly competitive digital market should expand networks efficiently without turning telcos into tollbooth operators for everyone else online.

2. “AI needs Europe’s networks”

What telcos say: AI traffic is growing fast, and AI platforms should help pay for the infrastructure they use.

Reality check: Traffic growth is real. But telcos are framing it as a shared European ambition rather than a commercial negotiation. The real goal? New revenue streams—especially from hyperscalers and high-traffic platforms—without necessarily proving that consumers or innovation will benefit.

Consumer impact: Higher costs imposed on AI platforms could eventually trickle down to users or slow innovation, especially if fees become arbitrary rather than tied to clear cost-sharing principles.

3. “AI traffic will grow 50% per year”

What telcos say: Rising traffic means Europe needs bigger networks, faster.

Reality check: True. But using traffic statistics to justify higher fees mixes public policy goals with private revenue strategies. Just because AI traffic grows doesn’t mean handing telcos new rights to charge others is the best—or only—solution.
Consumer impact: Without careful policy design, consumers could face fewer choices and higher costs if network fees discourage competition or innovation.

4. “We’re democratizing AI”

What telcos say: We’re giving broad access to AI solutions for everyone.

Reality check: Most AI traffic will come from cloud platforms and enterprise users, not directly from consumers on telco-owned services. Highlighting “AI for everyone” sounds good but distracts from the main issue: who pays for the underlying infrastructure.

Consumer impact: Real democratization comes from open markets and fair rules, not from one industry lobbying for financial privileges under the banner of accessibility.

5. “We need AI for secure networks”

What telcos say: Investing in AI helps us secure networks, which benefits everyone.

Reality check: Security investments are real, but linking them to demands for higher fees frames them as a public obligation rather than part of normal business operations. It’s another way of turning essential upgrades into bargaining chips.

Consumer impact: Europe needs secure networks—but funding them through competitive, transparent markets is better than handing telcos regulatory shortcuts.

6. “Europe needs more investment for prosperity”

What telcos say: Without more funding, Europe will fall behind in AI and digital infrastructure.

Reality check: This creates a sense of crisis—invest or be left behind—which pressures policymakers to accept telcos’ terms without fully examining whether those terms actually deliver value to consumers or the digital economy.

Consumer impact: Long-term digital prosperity comes from competition and innovation, not from concentrating power in a few legacy providers.

7. “We need European scale”

What telcos say: Allowing bigger mergers will help us compete globally.

Reality check: Consolidation can reduce costs—but it also reduces competition.
Framing it as a strategic necessity sidesteps the risk of creating national or regional monopolies with little incentive to innovate or keep prices low.

Consumer impact: Europe’s digital sovereignty shouldn’t mean fewer choices or higher prices for consumers. It should mean fair competition under EU rules for everyone—domestic or foreign, big or small.

The Bigger Picture: Sovereignty vs. Monopoly

Telcos are right about one thing: AI will reshape Europe’s digital economy, and networks matter. But Europe doesn’t need a digital future where a few legacy companies hold all the cards. True digital sovereignty means open competition, fair regulation, and policies that serve consumers—not just incumbents or foreign tech giants. The EU should design rules that ensure everyone pays their fair share without turning the internet into a patchwork of national champions and hidden fees. Because in the end, the goal isn’t to protect telcos or big tech. It’s to build a digital market that works for Europeans.

Lately, Brussels has been murmuring about intervening in the interconnection market, with DG CNECT signalling interest in enhanced cooperation and a particular focus on traffic optimisation. The Commission’s teams are raising the possibility that disputes over interconnection (especially involving CAP networks / CDNs) might need regulatory oversight—even if, technically, national regulators (NRAs) claim limited formal power, since CAP networks are “not public” in regulatory definitions. At the same time, there’s increasing talk of BEREC being a “neutral body” to help arbitrate such disputes. European policymakers in Brussels repeatedly stress three values: the need for regulation, network resilience, and recognising connectivity’s role in competitiveness and the Single Market.
On the surface, these are unassailable ideas. Who could oppose regulation of markets that are critical for digital life? Who could reject resilience, competitiveness, network performance? But the devil is in the details, and in the incentives. What is being discussed now isn’t just oversight of “bad actors” or fixing genuine bottlenecks. Instead, the tilt seems toward reshaping how interconnection, traffic flows, CAP/CDNs, and ultimately who pays whom, will work in Europe. And that tilt carries real risks: regulatory capture by big telcos, higher costs, innovation chill, degraded user experience.

Evidence on What’s Already True: IP-Interconnection Market Functioning

Before getting into the risks, it’s important to note that multiple recent reports find the interconnection market in Europe is already largely working well. Some key findings:
Why Intervention in Traffic Optimisation and Interconnection is a Bad Idea

Given the above, here’s what policymakers should beware of: intervention in interconnection (especially traffic optimisation mandates, forced compensation from CDNs, or regulated peering) doesn’t fix problems so much as shift power. It hands incumbents – the big telcos – precisely the tools they need to erect barriers, extract rents, and slow down edge innovation. Some of the specific risks:
Why Targeting CDNs / CAPs is Especially Myopic

Regulators seem to assume that CAP networks / CDNs are somehow outside the public interest or regulatory oversight, because they are “private,” “non-public networks,” or because regulators say NRAs don’t have authority over them. But treating them as external to regulation is misleading in two ways:
What the Data and Stakeholders Say
Given these risks, here is a warning for Brussels: if regulation in this space is handled poorly—or even moderately clumsily—the outcome will be worse than the status quo. It will mean capture by telcos, poorer choices for citizens, higher costs for CAPs and content providers, slower innovation, less investment in new kinds of digital infrastructure. Below are some red lines Brussels should avoid crossing:
Conclusion: Europe’s Internet Needs Flexibility, Not Micromanagement

DG CNECT’s interest in traffic optimisation, interconnection cooperation, and possibly bolstering regulatory oversight may come from a sincere desire: to ensure networks are resilient, that all member-states enjoy fast, fair connectivity, and that the Single Market functions without digital fragmentation. These are laudable goals. But policy happens in the details. Intervening in interconnection in the name of cooperation or optimisation—especially when CAP/CDN networks are targeted—is not a neutral act. It shifts bargaining power toward those who already control infrastructure; it increases the risk of regulatory capture; it makes innovation harder, riskier, and costlier. It threatens to turn Europe’s internet into a safer but staler environment, where incumbents benefit at the expense of agility and edge innovation. If Brussels wishes to secure Europe’s competitive edge, it should focus on:
The World Summit on the Information Society (WSIS) has always been a peculiar hybrid: part development agenda, part Internet governance framework, part UN coordination exercise. Now, twenty years after Geneva (2003) and Tunis (2005), the WSIS+20 Review “Zero Draft” has landed. It aims to set the future direction for the Information Society in a digital landscape that looks nothing like it did in the early 2000s.
At first glance, the Zero Draft is a tidy piece of UN drafting: it reaffirms, recalls, welcomes, and decides in all the expected places. But beneath the diplomatic prose, there are real shifts—some good, some less so. To understand where this draft moves the needle (and where it doesn’t), it helps to compare it with the WSIS foundations—the Geneva Action Lines, the Tunis Agenda, the 2015 WSIS+10 Review—and the newest kid on the block, the Global Digital Compact (GDC). Importantly, the Zero Draft also does something that Geneva and Tunis only touched on: it puts human rights and the Universal Declaration of Human Rights (UDHR) front and center. In today’s climate of digital authoritarianism, culture wars, and democratic backsliding, that’s no small thing.

From Geneva to the Zero Draft: Continuity With Upgrades

Back in 2003, the Geneva Declaration of Principles promised a “people-centred, inclusive, development-oriented Information Society.” The Action Lines (C1–C11) set out everything from ICT infrastructure and e-government to cultural diversity and media. Two years later in Tunis, the agenda added a sharper political edge: it defined Internet governance, launched the Internet Governance Forum (IGF), and made clear that the Internet should remain open, global, and interoperable. The Zero Draft stays faithful to this DNA. It reaffirms the people-centred vision, but this time with an explicit anchoring in human rights and the UDHR. In an era where digital technologies are being used to censor dissent, monitor citizens, and spread disinformation, the clear linkage between WSIS implementation and universal human rights norms is a necessary bulwark. It also repeats Tunis’ Internet governance definition and makes the anti-fragmentation ethos even more explicit, rejecting “state-controlled or fragmented Internet architectures.” Given today’s splinternet pressures—from data localization to sovereign DNS schemes—that bright-line principle matters.
But the Draft doesn’t just copy-paste the past. It adds upgrades Geneva and Tunis couldn’t deliver:
Comparing to WSIS+10 (2015): From Renewal to Permanence

In 2015, the UN General Assembly reviewed WSIS after a decade. The big outcome then was to renew the IGF’s mandate for another 10 years. There were also calls for better coordination, stronger linkages with the Sustainable Development Goals (SDGs), and improved measurement of progress. The Zero Draft moves decisively beyond renewal. It locks in IGF permanence and ties its outputs to formal reporting streams—CSTD, ECOSOC, and the new GDC review cycle. Action Lines, too, are upgraded from “encouraged indicators” to mandatory roadmaps with metrics. This is more institutional consolidation than the UN usually manages. Still, one thing hasn’t changed: enhanced cooperation remains fuzzy. Tunis tasked the system with finding ways for governments to cooperate more effectively on public policy issues. A decade and a half later, we’re still “recalling” working groups and “reaffirming” the principle without clarifying who convenes, on what authority, or to what end. The Zero Draft punts again.

WSIS and the Global Digital Compact: Who Leads, Who Follows?

The Global Digital Compact (GDC), agreed in 2024 as part of the Pact for the Future, is billed as the UN’s new umbrella framework for digital governance. But two decades on, WSIS remains the only process with both a developmental mandate and a governance track record. The Zero Draft makes that clear: it is WSIS—not the GDC—that anchors the vision of a people-centred, rights-based Information Society. Far from being made redundant, WSIS provides the substance that the GDC needs. The Action Lines already cover connectivity, skills, e-government, media, and cultural diversity. The Tunis Agenda gave us the IGF. The Geneva and Tunis principles enshrined openness, inclusivity, and universality. The GDC, by contrast, is still a framework in search of content. In that sense, the Compact should be building on WSIS foundations—not the other way around.
The Zero Draft reflects this logic by linking the Action Lines with the SDGs and the GDC, embedding WSIS reviews into ECOSOC and CSTD processes, and feeding outcomes into the 2027 high-level GDC review. That’s a sensible sequencing: WSIS generates the substance, the GDC aggregates it. Still, there are real risks.
If the UN wants coherence rather than competition, the direction is obvious: WSIS provides the tested architecture; the GDC should follow its lead.

What’s Good in the Zero Draft
What’s Weak or Missing
The Issues to Watch

As negotiations unfold, here are the red-flag areas that need tightening:
The Geopolitical Angle

Of course, all of this won’t play out in a vacuum. The WSIS+20 review will unfold in a geopolitical context that is far more polarized than in 2003, 2005 or 2015.
In short, WSIS+20 could become another stage where digital geopolitics play out: the U.S. guarding against multilateral mission creep, China presenting itself as development partner-in-chief, and Europe pushing values without resources. Against that backdrop, the Zero Draft’s commitment to human rights, the UDHR, and IGF permanence should be commended—not just as bureaucratic achievements, but as political wins in a difficult moment. And credit where it’s due: in this fractured geopolitical climate, the co-facilitators deserve recognition for producing a coherent zero draft at all—one that not only listens to community inputs (especially on IGF permanence) but also has the courage to stand on human rights when many states would rather look away.

The United Nations has just declared its intent to shape the future of artificial intelligence. In Resolution A/79/L.118, the General Assembly established two shiny new initiatives: an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance.
On paper, this looks like a milestone—“the world comes together to govern AI.” In practice, it’s another example of the UN doing what it does best: launching elaborate processes that generate legitimacy without delivering power. The resolution is a masterclass in symbolism, not substance.

The UN’s IPCC for AI—But Declawed

The resolution’s crown jewel is the Independent International Scientific Panel on AI, a 40-member body tasked with producing annual reports on the risks and opportunities of artificial intelligence. Its design borrows directly from the IPCC, the climate panel that has spent decades distilling climate science into global consensus reports. But here’s the catch: the AI Panel is forbidden from making policy prescriptions. Its mandate is to be “policy-relevant but non-prescriptive.” Translation: it can tell you the house is on fire but cannot recommend calling the fire department. That’s not governance—that’s commentary. And then comes the real kicker: military AI is explicitly excluded. The deadliest and most destabilizing applications of AI—autonomous weapons, drone swarms, battlefield decision systems—are off the table. The UN has built a nuclear oversight body that refuses to talk about bombs. This exclusion alone guts the credibility of the entire exercise.

The Global Dialogue: A Diplomatic Talk Shop

The second half of the resolution is the Global Dialogue on AI Governance, a two-day annual meeting alternating between New York and Geneva. Governments, corporations, academics, and NGOs will come together to “share best practices,” “exchange views,” and draft “summaries.” It’s the classic UN formula: convene everyone, offend no one, produce glossy reports, and delay actual decisions. While Brussels is passing the AI Act, Washington is issuing executive orders, and Beijing is embedding AI into its governance and security systems, the UN is promising two days of speeches, panels, and PowerPoint presentations.
This is process fetishism at its most refined—talking about talking, enshrined in diplomatic language, while the real regulatory battles are fought elsewhere.

The North–South Rhetoric, Without Redistribution

To its credit, the resolution acknowledges the growing divide between AI “haves” and “have-nots.” It stresses the need to close digital gaps, build capacity in developing countries, and ensure global representation. But there’s no serious funding mechanism. The entire system relies on voluntary contributions—from governments, tech giants, financial institutions, and philanthropic foundations. In practice, this means Big Tech can bankroll the very process that purports to scrutinize it. That is not multilateralism; it’s regulatory capture dressed up as inclusivity.

The Wrong Venue for AI Governance

It took member states eight months just to agree on the fine print of a single slice of the Global Digital Compact—the AI section. That pace isn’t diplomacy; it’s paralysis. And it’s a flashing red warning light: the UN is the wrong place to govern a technology that reinvents itself in weeks while diplomats argue for years. The problem isn’t just speed. It’s structure. The UN General Assembly thrives on consensus around universal problems—like climate change, where physics leaves no room for geopolitics. AI is the opposite. It’s not a shared challenge; it’s a strategic weapon. For Washington, AI means “trusted ecosystems” and export controls to blunt Beijing’s rise. For Beijing, it’s “digital sovereignty” and state power through surveillance and industrial dominance. These worldviews don’t meet in the middle—they clash. Trying to regulate AI at the UN is like trying to build a Formula 1 car by committee at a town hall meeting: the process is slow, the arguments endless, and the end product guaranteed to be obsolete before it ever leaves the garage. Put the U.S.
and China in the same UN room, add 190 other states and a phalanx of Big Tech lobbyists, and the outcome is predictable: not rules, not enforcement, not leadership—just lowest-common-denominator statements that disguise a contest for power as consensus.

The American Retreat, the Chinese Opportunity

The United States has largely stepped back from the UN as a serious arena for tech governance. Successive administrations have been skeptical of ceding authority to multilateral bodies, preferring coalitions of like-minded states (the OECD, the G7, or the nascent US–EU Trade and Technology Council). Washington will not allow the General Assembly to dictate rules that bind Silicon Valley.

China, by contrast, thrives in UN processes. It has mastered the art of incremental influence: securing leadership positions in technical agencies, shaping language in resolutions, and embedding its preferred concepts—"digital sovereignty," "development first," "non-interference"—into the multilateral bloodstream. Resolution A/79/L.118 is fertile ground for this. By excluding military AI, the resolution sidesteps the area where Beijing faces the most scrutiny. By emphasizing capacity-building and equity, it opens the door for China to present itself as the champion of the Global South, offering AI partnerships through its Belt and Road framework. Meanwhile, the voluntary funding model leaves plenty of room for Chinese-backed foundations and companies to bankroll participation, especially from developing states. The result? The UN's AI process risks becoming a stage where China amplifies its narrative of "responsible, state-led AI," while the United States watches from the sidelines.

Why Symbolism Still Matters

And yet, dismissing the resolution outright would be a mistake. Symbolism has power. By creating a standing Panel and a recurring Dialogue, the UN has institutionalized AI governance in the international system.
Even if the reports are toothless, they will be quoted in policy debates, cited by activists, and featured in headlines. Norms often start as soft, non-binding ideas before hardening into law. The IPCC was derided as toothless when it was created in 1988. Three decades later, its reports drive climate policy worldwide. Something similar could happen with AI—though the UN's starting point is far weaker.

The Bottom Line

Resolution A/79/L.118 is visionary in appearance, hollow in substance. It builds a global AI "talking machine"—a panel to pontificate, a dialogue to deliberate—while dodging the most urgent issues: military AI, binding standards, and sustainable funding. The UN wants to prove it is still relevant in the digital age. But the reality is stark: the future of AI governance will not be decided in the General Assembly. It will be decided in Washington, Beijing, Brussels, and in the boardrooms of a handful of companies that control the technology. If the world is looking for a serious AI regulator, this is not it. If it's looking for a stage where great power rivalry, corporate lobbying, and global South frustrations collide—this is exactly it.

[Figure: the timeline of rollout for the resolution]