Digital sovereignty has become one of the defining policy debates of the European decade. It sits at the intersection of geopolitics, industrial policy, cybersecurity, and constitutional values. Yet despite its prominence, the term remains contested. For some, sovereignty means control over infrastructure and data — preferably within national or European borders. For others, it means the ability to act independently in a world of technological interdependence. The distinction is not merely semantic. It shapes how Europe regulates platforms, designs cloud policy, and positions itself between the United States and China.
Cloud computing lies at the heart of this debate. The global market is heavily concentrated in the hands of a few hyperscalers, and their dominance is not merely commercial; it is infrastructural. They provide the backbone for public administrations, hospitals, financial institutions, universities, and increasingly for artificial intelligence systems. Their vertically integrated ecosystems — compute, storage, identity, AI tooling, proprietary databases — generate immense efficiencies. They also create structural dependencies. Once deeply embedded in one environment, switching becomes technically complex and economically costly. Vendor lock-in remains a central feature of the cloud economy.

The European Union’s response to this concentration has taken multiple forms. Through legislative initiatives such as the Data Governance Act and the Data Act, and through cybersecurity certification schemes, the European Commission has attempted to strengthen data portability, clarify governance rules, and reduce asymmetries of power. At the same time, the ambition of “technological sovereignty” has inspired industrial initiatives such as GAIA-X — an attempt to create a federated, European-aligned cloud ecosystem grounded in shared standards and values.

Yet GAIA-X illustrates both the necessity and the difficulty of sovereignty projects rooted in infrastructure-building alone. Launched with considerable political symbolism, it promised a European alternative to hyperscaler dominance. Over time, however, it struggled with governance complexity, diverging member interests, unclear market incentives, and the paradox of including the very non-European providers it initially sought to counterbalance. The initiative did not fail for lack of ambition; it faltered because constructing parallel infrastructure in a highly capital-intensive and globally integrated market proved extraordinarily difficult.
Sovereignty by replication — building European “equivalents” to American or Chinese platforms — encountered structural limits. At the same time, pure market concentration remains deeply problematic. Network effects, economies of scale, and the gravitational pull of proprietary ecosystems risk entrenching a small number of providers as systemic gatekeepers. This concentration raises competition concerns, but also constitutional ones. When digital infrastructure becomes indispensable for public administration, the dependency of democratic institutions on a handful of foreign-headquartered corporations acquires geopolitical significance. The debate is therefore not reducible to antitrust policy; it concerns the architecture of democratic self-government in a digital age.

Between enclosure and resignation lies a third path: interoperability. Interoperability reframes sovereignty not as isolation or duplication, but as capacity within interdependence. It emphasizes the ability to move, to connect, to federate, and to exit. In practical terms, interoperability means shared protocols, open standards, portability requirements, and interconnection mechanisms that lower switching costs and prevent structural lock-in. It transforms sovereignty from a static notion of territorial control into a dynamic ability to reconfigure dependencies.

There are strong historical precedents for this logic. Consider the standardized shipping container. Before containerization, global trade was slow, fragmented, and vulnerable to theft and delay. The breakthrough was not a new empire or a supranational port authority, but a shared technical specification. Once container dimensions were standardized through international agreement, any compliant ship, crane, truck, or railway could interoperate with any other. Trade volumes expanded dramatically. Yet no state surrendered customs authority, maritime jurisdiction, or regulatory power.
Sovereignty did not shrink; it became more operational because infrastructure could connect across borders without political merger.

Or take something as seemingly mundane as measurement. Before the spread of the metric system, Europe was a patchwork of local units — feet, ells, stones, and cubits that varied by region. Industrial scaling across borders was nearly impossible. The adoption of the metric system did not abolish national governments. It did not dissolve borders. It created a shared language of quantification. Engineering, science, and trade flourished precisely because sovereign states agreed on interoperable standards while retaining full political independence. Measurement became infrastructure for autonomy, not its negation.

The same pattern appears in identity systems. After the First World War, chaotic and inconsistent travel documentation hampered cross-border mobility. Through League of Nations negotiations, passport formats and visa practices were standardized. States retained complete discretion over who could enter their territory. But interoperable documents allowed those sovereign decisions to be recognized and processed by others. Borders did not disappear; they became administratively manageable at scale.

Even scientific knowledge offers a parallel. Before the eighteenth century, plants and animals were described differently across languages and empires. The Linnaean system of binomial nomenclature created a universal taxonomy. A species identified in Sweden could be discussed in Spain or Brazil under the same classification. No global government enforced this order. Yet by standardizing naming conventions, sovereign states and scientific institutions could coordinate research and exchange findings across continents. Shared standards amplified intellectual capacity without centralising political authority. One could extend the analogy further.
The Gregorian calendar gradually replaced a patchwork of incompatible dating systems across Europe and beyond. States adopted a common temporal framework for civil purposes while maintaining religious and political autonomy. Commerce, diplomacy, and navigation benefited from synchronized timekeeping. Again, sovereignty persisted — but it functioned within a shared temporal infrastructure.

Across these examples — shipping containers, measurement systems, passports, taxonomies, calendars — a consistent lesson emerges. Interoperability does not dissolve sovereignty. It enables it to operate within networks. Shared standards reduce friction, lower coordination costs, and create exit options without requiring political unification.

The same principle applies to digital infrastructure. When cloud systems are sealed silos, sovereignty becomes fragile: dependency hardens, mobility declines, and strategic options narrow. But when systems are designed to interoperate — through shared protocols, portability standards, and governed interconnection — sovereign actors retain the ability to choose, to shift, and to coordinate. Just as containerization did not create a world state, and the metric system did not abolish national law, interoperable cloud architectures need not erase jurisdictional authority.

Historically, the most resilient political orders were not those that isolated themselves from networks, but those that shaped the standards under which networks operated. Interoperability has repeatedly functioned as a quiet but decisive instrument of sovereign capacity. In the digital age, it may once again prove to be the difference between dependency and democratic control.

The internet itself offers a more recent example. Its foundational protocols — TCP/IP, SMTP, HTTP — were designed as interoperable standards rather than proprietary enclosures. No single actor “owned” email or the web.
This openness allowed innovation, competition, and global scaling while enabling states to regulate services built atop the network. The democratic potential of the internet was not guaranteed, but its interoperable architecture prevented early monopolisation at the protocol layer.

Cloud computing evolved differently. Hyperscalers built vertically integrated environments optimized for performance and security, but not for portability. The economic logic is understandable: differentiation and integration drive competitive advantage. Yet this architecture complicates sovereign choice. When data gravity, proprietary AI services, and bespoke integrations accumulate within one provider, exit becomes economically irrational even if legally permissible.

Recent technical developments suggest a subtle adjustment. When a provider such as Amazon Web Services introduces managed multicloud connectivity — beginning with interconnection to Google Cloud — it does not eliminate concentration. Nor does it dissolve lock-in. But it reduces friction in cross-cloud architectures. It makes distributing workloads across providers somewhat more feasible. Interoperability becomes part of the product offering rather than an adversarial afterthought.

This shift is modest but politically significant. It reflects pressure from enterprise customers, regulators, and public-sector procurement policies that increasingly demand resilience and optionality. In Europe, sovereignty debates have amplified this demand. Public institutions now ask not only where data resides, but how easily it can be moved. Exit options become a measure of autonomy.

The value of interoperability becomes clearer when contrasted with enclosure-based sovereignty models. Building national or regional “fortress clouds” may promise control, but it risks fragmentation, reduced economies of scale, and diminished innovation capacity. It can also entrench domestic monopolies.
Democratic sovereignty requires more than jurisdictional boundaries; it requires pluralism, contestability, and resilience. Systems that cannot interoperate are brittle. They trap users as effectively as foreign platforms do.

At the same time, interoperability is not a panacea. Without governance, it can entrench dominant actors by allowing them to define standards. Without competition enforcement, portability rights may exist only on paper. Without cybersecurity coordination, interconnection can increase attack surfaces. Democratic interoperability therefore demands institutional scaffolding: certification schemes, enforceable data portability rights, competition oversight, and transparent standards-setting processes.

Here the European Union holds structural leverage. As a large regulatory market, it can condition access on interoperability requirements. It can mandate technical documentation, standardized APIs, and data export formats. It can align procurement rules with portability criteria. In doing so, it shifts the sovereignty debate from ownership to rules of engagement. The critical question is not whether Europe should build its own hyperscalers — a goal that has thus far proven elusive — but whether it can shape the interoperability environment in which all providers operate.

A democratic theory of digital sovereignty therefore rests on three pillars. First, concentration must be constrained through competition policy and transparency. Second, attempts at infrastructural replication must be assessed realistically, avoiding symbolic projects that lack economic viability — a lesson underscored by the struggles of GAIA-X. Third, and most importantly, interoperability must be treated as a constitutional principle of digital governance.

Historically, states became stronger not by isolating themselves from networks, but by shaping them. Railways, telegraphs, postal systems, financial clearinghouses — each required common standards to function across borders.
Sovereignty survived because it was embedded in interoperable architectures governed by shared rules. The same logic applies to the cloud. Digital sovereignty in a democratic context should not be defined by the thickness of digital walls, but by the credibility of exit options, the enforceability of standards, and the resilience of interconnected systems. Interoperability provides the technical foundation for these qualities. It lowers dependency without demanding isolation. It enables cooperation without surrendering regulation. It turns interdependence into governable structure.

Cloud markets remain concentrated, and hyperscalers retain immense structural power. European industrial initiatives have struggled to counterbalance that reality directly. Yet the path forward may not lie in constructing parallel fortresses, but in mandating bridges. Interoperability is not a concession to globalization; it is the mechanism through which democratic societies can shape it.

At the 2026 Munich Security Conference, the most explosive revelations weren't found in the predictable headlines regarding tank battalions or NATO budgets, but in the strategic subtext of Ursula von der Leyen’s remarks. For decades, the transatlantic conversation treated technology as a secondary concern—an economic engine to be fueled or a marketplace to be regulated. In Munich, that era officially ended. The digital stack has been reclassified as the central axis of national sovereignty, sitting alongside defense industrial capacity and energy resilience as a fundamental pillar of geopolitical survival. This represents a seismic shift in European thinking: the transition from viewing technology through the lens of consumer rights and privacy to treating it through the cold-eyed calculation of "strategic autonomy."
The realization that hung over the conference was as stark as it was uncomfortable: the infrastructure powering Europe’s economy, its public administration, its defense logistics, and even its democratic discourse is overwhelmingly dependent on non-European providers—primarily American hyperscalers and platforms. While this interdependence is manageable in moments of total political alignment, it transforms into a critical vulnerability during periods of friction. By placing artificial intelligence, cloud infrastructure, and semiconductors on the same strategic level as ammunition production, Europe has signaled that digital dependency is no longer viewed as a benign byproduct of globalization. It is now seen as a structural weakness. This "securitization of the code" suggests that the next decade of European policy will not only be driven by a desire for market efficiency, but also by a desperate pursuit of resilience against foreign political pressures, regardless of whether those pressures originate in Beijing or Washington.

Technology as Hard Power

When von der Leyen spoke of strengthening Europe’s security architecture, digital technologies were treated not as enabling tools but as instruments of power in their own right. Artificial intelligence, cloud infrastructure, semiconductors, cyber capabilities, and dual-use systems were placed on the same strategic plane as ammunition production and defence supply chains. This matters because it marks a break with Europe’s past posture. For years, digital policy was framed primarily through rights and markets: competition law, privacy, consumer protection. Today, technology has become security policy. It is strategic autonomy. It is geopolitical leverage. This shift reflects a cold assessment of reality. Europe’s dependence on external digital infrastructures constrains its room for manoeuvre. It creates asymmetries of power that are increasingly difficult to reconcile with the idea of political sovereignty.
The “securitization of code” now underway is not driven by ideology, but by a growing awareness that infrastructure can be weaponised — intentionally or not. Munich made one thing clear: Europe no longer accepts digital dependence as the default condition of the alliance.

A Structural, Not Tactical, Divide

This brings into focus the deeper transatlantic fracture. The disagreement over technology is not about individual regulations, fines, or trade irritants. It is about competing philosophies of digital order. Europe’s approach is rooted in the rule of law, institutional oversight, competition safeguards, and the embedding of democratic constraints within technological systems. It reflects a belief shaped by history: that unchecked concentrations of power — whether state or corporate — ultimately corrode democratic governance.

The dominant U.S. approach, by contrast, treats scale, speed, and market dominance as strategic assets. American technology firms are increasingly seen not just as private actors, but as extensions of national power in a global competition — particularly with China. Regulation that constrains them is often framed as weakening Western strength. This divergence is no longer theoretical. It is reinforced by the visible convergence of corporate and political power in Washington, where major technology firms are increasingly aligned with national political projects. From a European perspective, this raises an uncomfortable question: where does corporate influence end and state power begin? That question hovered over Munich — largely unspoken, but unmistakable.

This Should Be the Wake-Up Call for Washington

Washington’s growing unease with Europe’s technology sovereignty agenda reflects a fundamental misreading of what is at stake. From the U.S.
vantage point, European efforts to build independent cloud capacity, strengthen regulatory frameworks, and invest in domestic digital infrastructure can appear protectionist — even divisive — at a moment when unity against China is emphasized. From Brussels’ perspective, the same moves are about resilience.

This is not an “our way or no way” moment. Europe should not seek to replace one form of dependency with isolation, nor to dismantle the transatlantic market. But there is another way — and that alternative must be acknowledged. If alignment cannot be achieved, Europe will increasingly derisk. And derisking, over time, does not remain a technical adjustment. It hardens into structural separation.

The irony is that this dynamic mirrors defence. For years, the United States pressed Europe to invest more seriously in its own security. Strategic shocks eventually forced Europe to act. Defence autonomy, once controversial, became accepted — even encouraged — as a pillar of alliance strength. Technology now occupies the same role. If U.S. pressure served as the necessary wake-up call for European defense, then Europe’s burgeoning technology agenda must serve as a reciprocal wake-up call for the United States.

Washington can no longer afford to champion a digital doctrine rooted in the fallacy that the market is a self-correcting monolith capable of fixing its own structural flaws. The foundational era of the internet has already demonstrated that a purely laissez-faire approach frequently fails to produce outcomes that sustain democratic health. The primary culprit of this market failure is the extreme concentration of power within Big Tech. However, recognizing this reality is not a call to exile the private sector from the digital sphere, nor is it a claim that heavy-handed, top-down regulation is a universal panacea. As with most complex geopolitical challenges, the solution exists in the middle ground.
States must architect predictable, proportionate legal frameworks that incentivize the market to drive innovation and economic growth without compromising the public interest. Achieving this balance is not a solo mission; close, transparent collaboration between allies is the only viable path forward to ensure the digital future remains both competitive and democratic.

What the Silence in Munich Revealed

What stood out in Munich was not confrontation, but restraint. There were no explicit accusations. No public airing of disputes over cloud dominance, AI governance, data flows, or competition enforcement. Even comparisons with last year’s more confrontational rhetoric remained understated. But silence can be strategic — and revealing.

What was missing was a shared roadmap for digital governance. No clear articulation of how divergent regulatory philosophies will be reconciled. No joint framework for safeguarding democratic norms in AI deployment. No agreement on preventing critical digital infrastructures from becoming instruments of geopolitical pressure. This absence suggests that technology has not yet been fully integrated into the alliance’s strategic imagination in the way defence has been. And that gap is dangerous.

The Slippery Line Between Democratic and Authoritarian Tech

At the core of Europe’s posture lies a deeper concern, articulated carefully but felt acutely. Technology can reinforce democratic governance — or quietly hollow it out. The difference lies in transparency, accountability, legal oversight, and limits on concentrated power. When digital infrastructures become opaque, centralized, and politically entangled, the line between democratic and authoritarian tendencies becomes dangerously thin. The United States once stood at the forefront of resisting that slide. It championed an open, interoperable Internet. It defended multi-stakeholder governance, shared protocols, and decentralization as strategic advantages.
It embedded the rule of law into the digital order it helped create. That legacy is now at risk of erosion — not by external adversaries alone, but by a shift toward technological nationalism and unchecked concentration of power.

There Is Still Time — But Not Much

The transatlantic relationship is not beyond repair. It has survived disputes over trade, intelligence, defence spending, and energy. What makes this moment different is that technology underpins all of those domains simultaneously. Salvaging the relationship requires recognition, not denial. Europe must continue to articulate that technological sovereignty is about resilience, openness, and interoperability — not decoupling. The United States must recognise that democratic governance of technology strengthens, rather than weakens, strategic competition.

The shared infrastructure, shared digital vision, shared norms, and shared understanding of digital governance that once anchored the alliance are now up for negotiation. They can still be preserved — but only if both sides engage seriously. Munich revealed that the alliance is entering a new phase. Defence unity remains essential. But the next frontier of transatlantic cohesion is technological. What went unspoken in Munich is perhaps the most important truth: the future of the alliance may depend less on how many troops are deployed, and more on who governs the code that shapes democratic life.

In recent years, “data governance” has moved from a technical policy niche to the center of global political debate. Governments, corporations, civil society groups, and international organizations are now locked in conversations about how data should be collected, processed, shared, and regulated. Yet this debate is no longer just about data management. It is fundamentally about artificial intelligence.
As advanced AI systems—particularly generative models—depend on vast datasets to function, questions about data access, control, and cross-border flows have become proxies for deeper struggles over AI governance, economic competitiveness, and geopolitical influence. Countries in the Global South have become increasingly vocal in these discussions, aware that the rules being shaped today will determine whether they are merely sources of raw data or active participants in the AI economy.
Data governance itself is not new; privacy laws, cybersecurity frameworks, and digital trade rules have existed for decades. What is new is its elevation to a strategic priority at the highest levels of government. Over the past year, a UN working group on data governance has been deliberating on principles for global cooperation, reflecting how central the issue has become to multilateral diplomacy and to ongoing processes around digital cooperation and AI governance. The shift signals a recognition that data governance is no longer a narrow regulatory concern—it is now a critical pillar of economic policy, development strategy, and global power in the age of AI.

From Data Governance to AI Governance

Historically, data governance referred to frameworks ensuring data quality, privacy, security, and ethical use. It was often associated with compliance regimes such as the European Union’s General Data Protection Regulation (GDPR) or sectoral data-sharing standards. These frameworks focused on protecting personal information, enabling trusted data exchanges, and clarifying institutional responsibilities.

Today, however, the stakes are higher. AI systems are trained on vast quantities of data, and their performance, bias, safety, and economic value are deeply tied to who controls that data and under what conditions it can be accessed. Data is no longer simply something to be protected or managed—it is the foundational resource of AI-driven economies. In this context, governing data increasingly means governing AI. The availability and diversity of training data determine who can build competitive AI systems and whose languages, cultures, and realities are represented in them. The regulation of cross-border data flows shapes where AI infrastructure can operate and which firms can scale globally. Intellectual property rules influence who captures value from AI-generated outputs, while competition law affects whether data advantages entrench dominant platforms.
As a result, debates about data governance are no longer confined to privacy regulators; they now sit at the intersection of industrial policy, trade negotiations, national security strategies, and development agendas. Control over data translates into influence over innovation capacity and geopolitical leverage in the AI era.

The surge of attention reflects three converging trends. First, the commercial explosion of generative AI has demonstrated both the transformative potential and the concentration risks of large-scale models. Second, mounting concerns about algorithmic bias, misinformation, labor displacement, and systemic risk have exposed the societal consequences of poorly governed data ecosystems. Third, intensifying geopolitical rivalry—particularly among the United States, China, and the European Union—has elevated AI and data policy into instruments of strategic competition. In this environment, data governance can no longer be treated as a technical compliance exercise. It has become a strategic imperative: a core element of economic resilience, democratic accountability, and global power. Governments now recognize that decisions about data access, standards, and flows will shape not only innovation trajectories but also the distribution of benefits and risks in the AI age.

Why the Global South Has So Much at Stake

For countries in the Global South, AI governance is not an abstract regulatory issue. It is a question of development, sovereignty, and inclusion. Many of these nations are rich in data—through large, youthful populations and rapidly digitizing economies—but poor in computational infrastructure and capital. If global AI governance rules are set without their meaningful participation, they risk becoming mere suppliers of raw data to foreign technology giants, replicating extractive patterns reminiscent of colonial resource economies. First, there is the economic dimension.
AI is projected to contribute trillions of dollars to the global economy. If value creation is concentrated in a handful of countries that control data infrastructure, cloud computing, and foundational models, the development gap between North and South may widen. Countries in Africa, Latin America, South Asia, and Southeast Asia therefore seek a meaningful share of that value creation rather than a role as mere suppliers of raw data.

Second, there is the cultural and linguistic dimension. AI systems trained predominantly on Western datasets often perform poorly in underrepresented languages and contexts. This creates digital exclusion. Ensuring diverse, representative datasets is not merely a technical matter but a matter of cultural preservation and democratic participation. Countries in the Global South want governance structures that prevent their societies from being misrepresented—or entirely absent—in the AI systems that increasingly mediate information and services.

Third, there is the issue of regulatory sovereignty. Many developing countries fear being forced to adopt standards designed elsewhere—whether American market-driven models, European rights-based approaches, or Chinese state-centric frameworks. They seek a voice in shaping norms that balance innovation, equity, and human rights in ways aligned with their own social and economic priorities.

Complexity and Misunderstanding

Despite its urgency, data governance remains deeply complex and frequently misunderstood. One of the most persistent misconceptions is the tendency to equate data governance with data localisation—the requirement that data be stored or processed within national borders. While localisation is often presented as a straightforward assertion of sovereignty, it is at best a narrow policy instrument and at worst a distraction from the deeper structural challenges of governing data and AI in an interconnected world. Data governance is inherently multi-layered.
It spans privacy protection, cybersecurity, competition policy, intellectual property, algorithmic accountability, cross-border data transfers, and trade obligations. AI governance introduces further dimensions: model transparency, safety testing, risk classification, auditing, liability for harm, and systemic risk management. These domains intersect in complicated and sometimes contradictory ways. For example, stringent privacy protections may restrict the availability of data for AI training; open data initiatives may clash with intellectual property regimes; competition policy may be needed to prevent data advantages from entrenching dominant firms. Reducing this complexity to a territorial question of “where data sits” fundamentally misdiagnoses the problem.

Data localisation is often framed as a tool for enhancing sovereignty, national security, or economic development. In reality, it tends to promote closed systems rather than collaborative ecosystems. By privileging territorial control over interoperability, localisation fragments the global digital environment into silos. It runs counter to the spirit of openness, shared standards, and cross-border innovation that has historically driven the growth of the internet and the digital economy. AI development, in particular, depends on diverse, high-quality datasets and distributed research collaboration. Artificially confining data within national borders risks narrowing datasets, reducing model performance, and isolating domestic researchers and firms from global networks.

Moreover, localisation frequently offers only a short-term political signal rather than a durable solution. Storing data domestically does not automatically ensure meaningful control over it. Foreign technology companies can still access, analyze, and monetize locally stored data through contractual arrangements, cloud partnerships, or subsidiary structures.
Without strong competition policy, regulatory capacity, and technical infrastructure, localisation alone does little to rebalance power in digital markets. In the long run, the economic costs can be significant. Localisation requirements can raise compliance and infrastructure costs for startups and small firms, limiting their ability to scale internationally. They can deter foreign investment, complicate cross-border service provision, and invite retaliatory trade measures. For developing economies seeking to integrate into global digital value chains, such fragmentation can reduce competitiveness and innovation potential. Citizens may ultimately bear the cost through higher prices, reduced access to digital services, and slower technological progress.

Equating data governance with localisation also obscures the broader structural challenge: how to ensure that countries retain meaningful agency over data generated within their borders while remaining connected to the global digital economy. True sovereignty in the AI era is not about isolation; it is about capacity—regulatory, technical, and institutional. Effective governance requires nuanced and forward-looking solutions: interoperable regulatory standards that enable trusted data flows; data trusts and cooperative governance models that embed accountability; strong competition enforcement to prevent data monopolies; and equitable data-sharing frameworks that support development and innovation. Data localisation may appear decisive, but it ultimately entrenches fragmentation and inefficiency. A sustainable approach to data and AI governance must move beyond territorial reflexes toward cooperative, interoperable systems that balance openness with accountability.

Trade at the Center of the Debate

Today’s debate over data and AI governance cannot be separated from the turbulent global trade landscape.
Trade is no longer a neutral backdrop to digital policy; it is the arena in which many of these battles are being fought. Rising tariffs, export controls, sanctions, and digital trade disputes have reshaped the environment in which rules on data flows and AI are negotiated. From semiconductor export restrictions imposed by the United States on China, to retaliatory tariffs in broader technology disputes, to disagreements at the World Trade Organization over e-commerce rules, digital governance has become entangled with economic statecraft. Modern trade agreements increasingly include binding provisions on digital trade: guarantees for cross-border data flows, limits on data localisation requirements, protections for source code, and constraints on customs duties on electronic transmissions. These rules are not abstract—they shape the regulatory autonomy of states. For example, debates within the WTO’s Joint Statement Initiative on E-commerce have centred on whether countries can require local data storage or restrict transfers for public policy purposes. Meanwhile, disputes over digital services taxes—such as those introduced by several European countries and contested by the United States with threats of retaliatory tariffs—demonstrate how digital economy governance quickly escalates into broader trade conflict. Even outside strictly digital sectors, the imposition of tariffs on technology products and the use of export controls on advanced chips underscore how AI supply chains are deeply embedded in trade geopolitics. For countries in the Global South, this environment creates acute strategic dilemmas. On one hand, committing to open data flows and strong digital trade disciplines may attract investment and integration into global value chains. 
On the other, locking in such commitments through trade agreements may reduce policy space precisely when governments are trying to build domestic AI industries, develop digital infrastructure, or address data-driven harms. The tension between openness and sovereignty is no longer theoretical—it is unfolding in real time, under conditions of trade fragmentation and geopolitical rivalry. Developed economies often promote the principle of the “free flow of data with trust,” arguing that seamless data transfers are essential for innovation and economic growth. Yet recent trade conflicts reveal how asymmetrical the system can be. In practice, large multinational technology firms headquartered in a few advanced economies dominate cloud infrastructure, AI model development, and platform ecosystems. Open data flows without complementary competition policy or industrial support measures can enable value extraction from developing markets, with data collected locally but monetized abroad. When trade rules entrench these patterns, they risk constraining digital industrialization strategies in emerging economies. At the same time, retreating into protectionism carries its own risks. Sweeping localisation mandates or digital trade restrictions can increase costs, discourage cross-border collaboration, and invite retaliatory tariffs or exclusion from trade agreements. The broader trend toward tariff escalation and supply chain “de-risking” shows how quickly fragmentation can spread, harming smaller economies that depend on global integration. The challenge, therefore, is not to abandon trade frameworks but to rethink them. Digital trade disciplines must better reflect development realities, incorporating safeguards for legitimate public policy objectives, flexibility for emerging regulatory models, and commitments to capacity-building and technology transfer. 
In a world where tariffs, export controls, and digital trade rules are increasingly intertwined, AI governance is inseparable from trade governance. The question is no longer whether trade will shape the future of data and AI—but whose interests those trade rules will ultimately serve.

Toward a Balanced and Inclusive Approach

Addressing the complexities of AI governance requires a multi-layered strategy.

First, global governance forums must become more inclusive. Institutions such as the United Nations, the G20, and regional bodies should ensure meaningful participation from developing countries, not merely as rule-takers but as co-authors of norms. This includes technical and financial assistance to strengthen regulatory capacity and negotiating power.

Second, governance frameworks should move beyond binary debates about openness versus restriction. A principles-based approach—centred on transparency, accountability, fairness, and interoperability—can allow diverse regulatory models to coexist while maintaining global cooperation. Mechanisms for regulatory equivalence, rather than uniformity, may enable cross-border data flows without sacrificing domestic priorities.

Third, trade policy must be aligned with development goals. Countries should negotiate digital trade provisions that preserve policy space for public-interest regulation, including competition oversight and AI risk management. Provisions supporting digital infrastructure investment and local innovation ecosystems are essential to prevent further concentration of AI capabilities.

Finally, capacity-building is critical. Without domestic expertise in AI, cybersecurity, and digital law, even the most carefully negotiated governance frameworks will fail. International cooperation should therefore prioritise knowledge-sharing, open research collaborations, and equitable access to computing resources. 
Conclusion

The sudden prominence of data governance reflects a deeper transformation: the realisation that data is the lifeblood of AI, and AI is reshaping economies and societies. For countries in the Global South, the stakes are particularly high. The rules crafted today will determine whether they become passive data providers or active architects of the digital future. Data governance is complex because it sits at the intersection of technology, law, economics, and geopolitics. It is often misunderstood when reduced to simplistic debates about localisation. And it is inseparable from trade, where the distribution of value and power is negotiated in binding agreements. The path forward lies not in isolation or in uncritical openness, but in inclusive, development-oriented governance that balances innovation with equity. As AI continues to evolve, the global conversation about data governance must evolve with it—ensuring that the future of intelligence is shaped not by a few, but by many.

European capitals are right to be blunt: the idea that Europe can simply “delete” U.S. technology from its digital ecosystem is neither realistic nor desirable. As reported in Politico, policymakers increasingly acknowledge that European economies, public administrations and security infrastructures are deeply entangled with American cloud services, software stacks, semiconductors and platforms. These dependencies are not accidental, nor are they the product of European naïveté alone; they are the outcome of decades of innovation cycles, scale effects, and geopolitical alignment within the transatlantic space.
Yet realism should not be mistaken for resignation. Europe is neither numb to these dependencies nor condemned to permanent passivity. The real failure so far has not been dependency itself, but the absence of a coherent strategy for understanding, governing and gradually reshaping it. If Europe is serious about security—whether in relation to Russia, unstable neighbours, or internal resilience—it must move beyond slogans and defensive reflexes and begin doing the harder work of strategic clarity.

Step One: Understanding What Dependency Really Means

The first task is intellectual, not technological. Europe still lacks a granular, shared understanding of what its digital dependencies actually are. “Dependence on U.S. tech” is often treated as a monolith, when in reality it spans very different layers: cloud infrastructure, operating systems, development tools, cybersecurity services, AI models, data governance frameworks, and even tacit dependencies such as skills pipelines and venture capital norms. Some dependencies are shallow and substitutable; others are deep, systemic and reinforced by network effects. Some are commercially inconvenient but strategically tolerable; others have direct implications for national security, intelligence autonomy and crisis response. Treating all of them as equally problematic leads either to paralysis or to performative policy. From a security perspective, this distinction matters enormously. In a crisis scenario—say, heightened tensions with Russia or instability in the Eastern Mediterranean—Europe’s reliance on foreign-controlled digital infrastructure could become a strategic vulnerability. This is not only about espionage or data access. It is about continuity of service, legal jurisdiction, update control, and the ability to adapt systems rapidly under stress. Security today is not just about tanks and borders; it is about whether digital systems can be trusted to function predictably in moments of geopolitical friction. 
Mapping these dependencies—where they exist, how deeply embedded they are, and how feasible it is to override them—should be treated as a core security exercise, not a technocratic afterthought. This mapping should also be shared across member states, because asymmetric dependencies create internal EU fragilities. A union is only as resilient as its weakest digital link.

Step Two: From Defensive Posture to Strategic Direction

The second step is decisional. Europe has spent the past decade reacting—regulating platforms, blocking mergers, erecting defensive legal frameworks—without articulating a clear sense of where it actually wants to go. The term “digital sovereignty” has become a catch-all that signals concern without conveying intent. Sovereignty over what, exactly? For whom? And to what end? Increasingly, digital sovereignty is being associated with open source, and this is a welcome development. Open source reduces lock-in, increases transparency, and allows for collective scrutiny—important virtues in a security-conscious environment. However, open source is not a panacea. Code that is open but poorly governed, underfunded or fragmented can be just as fragile as proprietary alternatives. Moreover, open source alone does not solve questions of scale, liability, or long-term maintenance. A more mature strategy would treat open source as one pillar among several. Europe should also actively promote open standards and protocols, insisting on interoperability as a default condition in both public procurement and regulation. Interoperability is not just a competition tool; it is a security mechanism. Systems that can be swapped, recombined and reconfigured are harder to coerce and easier to defend. Decentralised architectures deserve particular attention. In a geopolitical environment marked by hybrid threats, centralisation is a liability. 
Decentralised systems—whether in data storage, identity management or communications—reduce single points of failure and make large-scale disruption more difficult. For countries with “tricky neighbours,” such as Greece or the Baltic states, this kind of architectural resilience is not theoretical; it is existential.

Security Beyond Russia: The Full Spectrum

While Russia understandably dominates European security thinking, a broader lens is needed. Europe’s digital vulnerabilities intersect with migration pressures, energy dependencies, supply-chain disruptions, and internal political polarisation. In the Eastern Mediterranean, for example, digital infrastructure has become entangled with energy exploration, maritime surveillance and military posturing. In such contexts, dependence on external digital systems can constrain diplomatic and military flexibility. Cybersecurity itself is no longer a niche domain. Ransomware attacks on hospitals, interference with electoral infrastructure, and manipulation of information spaces all sit at the intersection of digital dependency and societal security. Europe’s response cannot rely solely on regulation and incident response; it must include structural choices about how systems are built and governed.

Creating the Right Incentives—and the Right Governance

None of this will happen without investment, and investment will not flow into an environment perceived as hostile, unpredictable or ideologically confused. Europe needs a framework that is predictable, proportionate, rights-based and consistent—not just across policy areas, but across time. Constant regulatory churn may feel active, but it discourages long-term commitment. In addition to funding and regulation, Europe should experiment with new incentives: security-weighted public procurement, long-term public-private partnerships for critical digital infrastructure, and “resilience premiums” that reward architectures designed for interoperability and decentralisation. 
These tools would signal that Europe values not just innovation, but durable and secure innovation. Finally, governance matters. Europe’s current digital governance is fragmented across institutions, policy silos and national competencies. A credible strategy would require a standing governance structure that brings together security agencies, digital regulators, industrial policy actors and foreign policy expertise. This body should not micro-manage technology, but it should set priorities, coordinate dependency assessments, and stress-test Europe’s digital ecosystem against plausible geopolitical scenarios. Crucially, governance must also include mechanisms for learning and adaptation. Digital security is not static. A system that is resilient today may be brittle tomorrow. Europe’s strength should lie in its ability to adjust collectively, rather than to cling to fixed models.

Conclusion

European capitals are correct: cutting off U.S. technology is neither realistic nor necessary. But accepting dependency without strategy is equally untenable. In a world of renewed geopolitical tension and hybrid threats, Europe’s digital choices are security choices. The path forward is not technological autarky, but strategic intentionality—grounded in a clear understanding of dependencies, a coherent vision of where Europe wants to go, and governance structures capable of turning that vision into reality.

Across Europe, concern about the online environment is intensifying. Digital platforms are increasingly associated with the spread of disinformation, political polarisation, and social harms—particularly for children. In a context marked by geopolitical uncertainty, eroding trust in institutions, and fragmented public debate, it is understandable that governments feel compelled to respond. The issue is no longer whether some form of regulation is needed, but how such measures are conceived, justified, and put into practice.
In recent years, attention has often centred on the power of large technology companies, whose scale and influence over public discourse are unprecedented. While these concerns are well founded, there is a parallel risk that regulation becomes a vehicle for broader state intervention in the digital public sphere. Under the language of democracy, security, or national autonomy, efforts to address genuine problems online may slide into attempts to exert greater control over speech and information flows. The challenge, therefore, lies in developing regulatory frameworks that meaningfully address online harms without undermining the open and pluralistic character of democratic debate. Spain’s Prime Minister Pedro Sánchez has become one of the most prominent advocates of a hard-line approach. His proposals—including ending anonymity on social media, holding platform executives criminally liable for illegal or “hateful” content, criminalising certain forms of algorithmic amplification, and adopting a “zero-tolerance” enforcement posture—are framed as necessary responses to a digital environment he has described as a “failed state.” The stated objective is to protect democracy and society, particularly the most vulnerable. These concerns should not be dismissed. Children do face real harms online: exposure to abuse, predatory behaviour, harassment, self-harm content, and manipulative design practices that exploit their attention. Marginalised communities are disproportionately targeted by online hate and coordinated harassment. Platforms have often been slow, inconsistent, or opaque in responding. Governments are right to demand higher standards of care, transparency, and responsibility. At the same time, regulation that focuses primarily on punitive control risks overshooting its target. When policies emphasise criminal liability, prosecutorial investigations, and broad content restrictions, their effects rarely stop with powerful tech executives. 
They cascade down to millions of ordinary users—people discussing immigration, foreign policy, public health, religion, or identity. These are precisely the topics that become most sensitive during periods of political uncertainty and social change. In such contexts, the line between combating harm and constraining legitimate democratic disagreement can become dangerously thin. Ending anonymity, for example, may reduce some forms of abuse, but it also removes a vital layer of protection for whistleblowers, political dissidents, journalists, survivors of violence, LGBTQ+ youth, and members of ethnic or religious minorities. For many, anonymity is not about evading responsibility; it is about participating at all. Any policy that treats anonymity primarily as a problem risks silencing voices that democracy most needs to hear. Similarly, holding executives personally criminally liable for content decisions may create powerful incentives—but not necessarily the right ones. Faced with the risk of prosecution, platforms are likely to default to over-removal, automated filtering, and risk-averse moderation. This risk is not merely theoretical; Europe has seen it before. This is not an argument against regulation, but a reminder that it must be designed carefully. When Germany introduced the Network Enforcement Act (NetzDG), which imposed significant fines for failing to remove illegal content within short timeframes, platforms responded by erring on the side of caution. Numerous lawful posts—including satire, political commentary, and journalistic content—were removed or blocked because platforms prioritised legal risk reduction over contextual judgment. Similar dynamics emerged following the introduction of Article 17 of the EU Copyright Directive, where automated upload filters led to the removal or blocking of lawful material such as memes, parodies, and educational content. 
These risks were sufficiently significant that the Court of Justice of the European Union intervened to clarify and limit the scope of such filtering obligations, emphasising that any implementation must respect fundamental rights, including freedom of expression and information. These examples illustrate how heightened liability and unclear standards can incentivise over-removal, automated filtering, and risk-averse moderation. While such approaches may reduce visible controversy, they also suppress lawful speech and disproportionately affect minority voices and political dissent. The resulting chilling effect is difficult to quantify, but its impact on democratic participation and public debate is real and enduring. Europe’s historical experience makes this tension particularly salient. Countries like Spain know intimately what it means to live under systems where speech is tightly controlled in the name of order, unity, or national interest. That legacy has shaped Europe’s strong commitment to fundamental rights, proportionality, and the understanding that democracy depends not only on security, but on pluralism and open debate. Yet this concern is not only historical. Over the coming years, several European countries—including Hungary, France, Germany, Italy, Spain, Poland, and Greece—will hold elections that may significantly reshape their political landscapes, as voters weigh competing visions of governance, identity, and democratic norms. Regulatory powers designed today under centrist or liberal administrations may look very different if exercised by future governments with a more exclusionary or authoritarian approach to dissent. Measures introduced in the name of protecting democracy can, under changed political circumstances, become tools for narrowing it. 
This reality underscores the need to design digital regulation with long-term resilience in mind—grounded in rights, safeguards, and institutional restraint, rather than trust in the intentions of any one government. This is why the current framing of “digital sovereignty” deserves careful scrutiny. Once understood primarily as a strategy for technological resilience and strategic autonomy, it is increasingly politicised as a justification for assertive state intervention in online discourse. At times, digital sovereignty is presented as inherently at odds with open, transnational communication—despite the fact that democracy itself has always relied on cross-border flows of ideas, information, and innovation. This tension between state control and open communication is most visible in the debate over child safety, where “digital sovereignty” is frequently invoked as a shield for restrictive policies. But children deserve more than symbolic protection. They need safer digital environments, but they also need to be empowered—to learn, explore, create, and participate. This requires age-appropriate design, meaningful transparency, digital literacy, robust reporting mechanisms, and enforceable duties of care. It does not require turning the internet into a heavily policed space where speech is filtered primarily through fear of punishment. Crucially, children also grow into citizens. Protecting them should not come at the cost of hollowing out the democratic culture they will inherit. An online environment stripped of contestation, anonymity, and diversity of expression may be calmer—but it will also be poorer, less resilient, and less capable of absorbing social conflict without repression. The same applies to marginalised communities. Regulation that prioritises order over rights often ends up reinforcing existing power imbalances. Groups that already face discrimination offline are frequently the first to feel the effects of broad speech controls online. 
A rights-based approach must therefore be central, not incidental, to digital governance. None of this implies that Europe should be passive or naïve. Platforms must be held to account—but accountability should be precise, transparent, and proportionate. It should focus on systems and incentives rather than individual speech acts; on due process rather than zero-tolerance rhetoric; on empowerment rather than control. This is not the moment for Europe to “show its teeth” by asserting authority over digital discourse in ways that blur the line between regulation and repression. It is the moment to show confidence: confidence that democratic societies can address harm without sacrificing fundamental freedoms, that children can be protected without being over-shielded, and that innovation and rights can coexist. The challenge of the digital age is not simply taming Big Tech. It is learning how to govern a pluralistic, networked public sphere without turning fear into a substitute for judgment. Europe’s strength has always been its commitment to balance. That commitment is needed now more than ever.

Europe is undergoing a visible shift in mood. What was once an abstract debate about “strategic autonomy” has turned into a series of concrete political decisions, particularly in the technology sector. Concerns about Europe’s dependence on U.S.-based platforms such as Google, Meta, X, and Microsoft have long been present, but they are now translating into action. France plans to ban public officials from using American videoconferencing tools, replacing them with Visio, a service hosted on infrastructure provided by a French company. German lawmakers are debating alternatives to U.S. data analytics software, while members of the European Parliament are urging a broader move away from American software and hardware. 
Even in traditionally pro-Atlantic countries such as the Netherlands, political pressure is growing to shield sensitive digital infrastructure from foreign control. These developments reflect a growing recognition that digital technologies are no longer neutral utilities but strategic assets. As EU tech commissioner Henna Virkkunen recently noted, Europe has realised that dependence on “one country or one company” for critical technologies creates structural vulnerabilities. The impulse to reduce exposure—to de-risk and, where necessary, decouple from foreign technologies—is therefore understandable and, in some cases, justified. Yet Europe is repeating a familiar error. Rather than undertaking the difficult work of clarifying what digital sovereignty truly entails for its strategic goals, it is relying on piecemeal, defensive fixes. Platforms are replaced, procurement rules tightened, and data flows restricted—all without a coherent framework connecting these moves to a broader vision of power, resilience, and global influence. The outcome is a patchwork that may appear assertive on the surface but is strategically hollow. This weakness is compounded by a second, more subtle error: the assumption that digital sovereignty should primarily be about “building a stronger European tech ecosystem.” While this mantra is politically appealing, it is also myopic. Framed in isolation, it risks turning digital sovereignty into a project of technological self-containment—one that prioritises European origin over interoperability, and autonomy over collaboration. Digital sovereignty should not mean building technology only for Europe. That approach risks siloing the continent, alienating partners, and reducing Europe’s relevance in shaping global digital systems. Europe may be bruised by recent developments in transatlantic relations, particularly the growing unpredictability of its longest-standing and most trusted partner. But disappointment is not a strategy. 
Retreating into a purely inward-facing digital ecosystem would be a strategic overcorrection. History offers a clear warning. In the 1970s and 1980s, Europe repeatedly attempted to achieve technological sovereignty by nurturing national or regional champions in computing and telecommunications. These initiatives were driven by legitimate concerns: dependence on U.S. firms, fear of strategic vulnerability, and a desire to retain industrial value within Europe. Yet many of these projects ultimately failed—not because Europe lacked engineering talent or political ambition, but because autonomy was pursued at the expense of interoperability, openness, and alignment with global standards. One of the clearest examples is Europe’s fragmented approach to computing in the 1970s. France’s Plan Calcul, launched in 1966, sought to create a sovereign national computer industry capable of competing with IBM. Substantial public investment went into companies such as Bull, with the explicit aim of insulating France from U.S. technological dominance. While technically sophisticated, these systems were largely incompatible with the rapidly emerging global software and hardware ecosystems dominated by U.S. firms. As computing shifted toward standardised architectures and software portability, European systems struggled to adapt. IBM’s open, scalable ecosystem—combined with its ability to set de facto global standards—ultimately prevailed. Europe did not secure sovereignty; it entrenched dependence. A similar pattern emerged in telecommunications switching systems. Throughout the 1970s and 1980s, European countries developed proprietary digital switching technologies—such as France’s E10, Germany’s EWSD, and the UK’s System X—often with heavy state backing and limited concern for cross-border compatibility. These systems worked well domestically, but their lack of interoperability hindered integration and export. 
While Europe eventually succeeded with GSM—a rare case where coordination, openness, and standard-setting aligned—the earlier fragmentation delayed progress and weakened Europe’s position just as global telecommunications markets were taking shape. The videotex experience offers another cautionary tale. France’s Minitel is often remembered as a technological success, and in many ways it was: widely adopted, user-friendly, and years ahead of its time. But Minitel was also a closed, nationally specific system, built around proprietary standards and tightly controlled by the state. When the open, interoperable architecture of the Internet emerged, Minitel could not adapt. Its success at home became a liability abroad. France lost the opportunity to shape the global digital information ecosystem because it had optimised for domestic control rather than international scalability. Perhaps the most consequential example is Europe’s early approach to networking standards. In the 1980s, European governments and institutions strongly backed the OSI (Open Systems Interconnection) model, viewing it as a sovereign, standards-based alternative to the U.S.-developed TCP/IP protocol suite. OSI was theoretically elegant and institutionally endorsed, but it was slow to implement and disconnected from real-world deployment. TCP/IP, by contrast, spread through academic and commercial collaboration, evolving through use rather than central planning. By the time Europe acknowledged TCP/IP’s dominance, the global Internet’s architecture—and governance—had already been shaped elsewhere. Europe became a rule-taker in a system it might once have led. The common thread across these cases is not failure, but misalignment. European policymakers equated sovereignty with control, and control with closed systems. In doing so, they underestimated the strategic power of openness, interoperability, and early standard-setting. 
Meanwhile, more collaborative ecosystems—particularly in the United States—allowed markets, developers, and users to coalesce around shared technologies that scaled globally. Those systems became the infrastructure of the digital age, embedding the political, economic, and governance assumptions of their creators. The lesson for today is stark. Digital sovereignty built around exclusion, substitution, or inward-facing industrial policy may deliver short-term reassurance, but it risks long-term irrelevance. In a networked world, influence flows to those who design the systems others rely on. Europe’s historical experience shows that sovereignty without interoperability leads not to independence, but to dependency—only delayed and often more costly. The lesson is not that Europe should avoid building capacity, but that capacity without openness leads to marginalisation. In a deeply interconnected digital world, sovereignty cannot be achieved through isolation. It must be built through the ability to choose dependencies, shape standards, and collaborate from a position of strength. This is where Europe’s current approach falls short. By focusing narrowly on replacing foreign tools with European ones, policymakers risk conflating sovereignty with substitution. Replacing Microsoft Teams with a European videoconferencing platform may address immediate political or legal concerns, but it does little to answer the larger question: how does Europe intend to design digital systems that are resilient, interoperable, and influential beyond its borders? The security implications of this gap are significant. Digital infrastructure underpins critical state functions, from public administration and energy networks to defence logistics and intelligence cooperation. If each member state defines “trusted” or “sovereign” technology differently, Europe risks fragmenting its own digital foundations. 
Interoperability suffers, cross-border services weaken, and coordinated responses to cyber incidents or hybrid threats become harder to execute.

Cloud infrastructure illustrates this danger clearly. A proliferation of national or “sovereign” cloud initiatives, absent a common European and international framework, may reduce dependence on specific foreign providers but at the cost of scale, efficiency, and resilience. In crisis scenarios, fragmented systems could slow information sharing and complicate collective defence. Sovereignty that undermines coordination ultimately weakens security.

The same logic applies to artificial intelligence. Regulation can set guardrails, but it cannot substitute for strategic leadership. If Europe focuses primarily on constraining AI within its borders while failing to shape global AI architectures, standards, and governance models, it risks becoming a regulatory island. Influence flows not from restriction alone, but from offering systems that others can adopt, trust, and integrate.

This is where Europe must rethink its approach. Digital sovereignty should be about de-risking critical dependencies while actively building an ecosystem that is open by design, interoperable by default, and attractive to partners. Europe does not need a technology stack that works only for Europeans. It needs one that allows Europe to be self-sufficient where it matters, while leading in collaboration where it counts.

The answer to the current strain in transatlantic relations is not alienation. Nor is it technological autarky. If Europe still believes in cooperation and in shaping global norms rather than retreating from them, then its digital strategy must reflect that belief. That means investing in shared infrastructure, open standards, and governance models that enable cooperation with like-minded countries—and even structured engagement with those that are not.
Until Europe confronts this challenge directly, it will continue to manage digital sovereignty through defensive, piecemeal decisions. It will swap platforms without setting strategy, build walls where bridges are needed, and mistake insulation for influence. Teams may be out, but unless Europe moves beyond substitution and isolation, digital sovereignty will remain more slogan than strategy.

The Digital Networks Act (DNA) represents a significant evolution in EU telecommunications policy, moving from a framework of Directives to a directly applicable Regulation. While this shift is intended to harmonise the internal market and strengthen oversight, several aspects of the Act raise concerns. This post focuses only on the DNA’s potential effects on net neutrality, interconnection, CDNs, consumer protection, and dispute resolution.
1. Net Neutrality and Specialized Services

The DNA incorporates and updates elements of the Open Internet Regulation (OIR), the EU’s foundational net neutrality framework.

2. Interconnection and the Sustainability Clause

The DNA maintains the principle of commercial negotiation for interconnection but introduces language that may create leverage for large network operators.

3. Content Delivery Networks (CDNs) and Content Providers

CDNs and content and application providers (CAPs) are increasingly treated as part of the extended connectivity ecosystem, bringing them closer to the regulatory perimeter.

4. Voluntary Dispute Resolution

The DNA introduces a voluntary conciliation process for resolving disputes over technical and commercial arrangements.

5. Potential Implications for Consumer Rights

The DNA introduces several provisions that may have indirect effects on end-users:
The Digital Networks Act significantly reshapes the EU’s internet regulatory framework by moving from Directives to a directly applicable Regulation. While it aims to support technological innovation, harmonise standards, and improve connectivity, several provisions introduce risks:
Careful implementation, strong regulatory guidance, and ongoing monitoring will be essential to ensure that the DNA’s intended benefits do not inadvertently undermine competition, innovation, or consumer interests.

The European Commission is right to insist that critical infrastructure must be secure, resilient, and protected from undue foreign influence. In an era defined by geopolitical rivalry, cyber operations below the threshold of war, and the weaponisation of economic dependencies, it would be negligent not to scrutinise who builds and maintains the systems that keep European societies running.
The Commission’s Cybersecurity Act proposal to exclude “high-risk” foreign suppliers from critical sectors reflects a long-overdue recognition that cybersecurity is not merely a technical problem but a structural and political one. Infrastructure embeds power. Supply chains encode dependency. And digital systems, once deployed, shape the limits of sovereignty far more than abstract declarations ever could.

Yet while the premise is sound, the policy as currently articulated leaves too many fundamental questions unanswered. If Europe is serious about securing its infrastructure, it must confront not only who should be excluded, but how, at what cost, and with what consequences for Europe’s place in a deeply interconnected digital world.

The Problem of Embedded Dependency

One of the most striking gaps in the Commission’s proposal is its limited engagement with legacy infrastructure. European networks — particularly in telecommunications — are not greenfield projects. They are the product of decades of procurement decisions, commercial incentives, and regulatory fragmentation. Equipment from vendors now deemed “high risk” is already embedded deep within core and access networks across multiple member states.

Once hardware is embedded, dependency becomes structural. It is not easily reversed without redesigning systems, retraining staff, renegotiating maintenance contracts, and accepting periods of operational risk. This is not an abstract concern: operators have repeatedly warned that forced and accelerated replacement carries significant financial and technical costs, potentially affecting service quality and investment capacity.

The Commission’s proposal gestures toward phased implementation, but it does not yet grapple with the political economy of de-risking. Who pays for the transition? Will the EU provide financial support, or will the burden fall disproportionately on operators — and ultimately consumers — in certain member states?
Without a credible funding strategy, the policy risks entrenching inequalities between markets rather than strengthening collective resilience.

Who Decides What “High Risk” Means?

Equally unresolved is the question of classification. The proposed framework would allow the Commission, or a group of member states, to initiate a risk assessment that could lead to supplier exclusion. But the criteria remain deliberately broad: national security concerns, foreign interference, market concentration, and geopolitical context.

This ambiguity is politically convenient, but strategically dangerous. Risk is not static. Nor is it confined to adversaries of the moment. If supplier risk is fundamentally tied to state power and political leverage — as the Commission implicitly acknowledges — then today’s trusted partner could become tomorrow’s vulnerability.

The uncomfortable but necessary question is this: would Europe ever apply this logic consistently beyond its current focus on China? For decades, the United States has been treated as categorically “low risk,” even as European data, cloud infrastructure, and software ecosystems have become deeply dependent on American companies and subject to U.S. law. Recent geopolitical tensions — including explicit threats tied to territory, trade, or security commitments — illustrate that alliance does not eliminate asymmetry.

A credible risk-based framework must therefore be principled rather than selective. If exclusions are perceived as politically motivated rather than analytically grounded, Europe will struggle to defend them legally, diplomatically, and normatively.

Security Without Strategic Isolation?

There is also a deeper tension at the heart of the proposal: the trade-off between security and openness. Europe’s digital economy does not exist in isolation. Innovation, resilience, and cybersecurity itself depend on global cooperation, shared standards, and cross-border supply chains.
A policy that increasingly equates foreign origin with unacceptable risk could drift into strategic isolation — or at least strategic fragmentation. Excluding suppliers may reduce certain categories of risk, but it can also reduce competition, slow deployment, and lock Europe into a narrower technological trajectory. In sectors where Europe lacks strong domestic alternatives, exclusion without parallel investment becomes a defensive gesture rather than a strategic one.

If digital sovereignty is the goal, it cannot be achieved through restriction alone. It requires sustained investment in European capabilities, research ecosystems, and market scale — none of which can be conjured through regulatory exclusion.

The Legal and Normative Dimension

There is also a normative dimension that Europe cannot afford to ignore. The EU has long positioned itself as a defender of rule-based governance, proportionality, and non-discrimination in digital policy. Supplier exclusion regimes that lack transparency or objective criteria invite legal challenge — not only from affected companies, but from trading partners and international institutions.

If Europe wishes to set a global precedent for responsible infrastructure security, it must show that its decisions are evidence-based, proportionate, and legally robust. Otherwise, it risks legitimising similar measures elsewhere that are far less restrained — including by authoritarian states eager to justify protectionism or technological nationalism under the banner of “security.”

A Necessary Policy — Incomplete as Strategy

The Commission is right to act. Doing nothing is no longer an option. The idea that Europe can indefinitely rely on globally distributed, politically neutral supply chains is an illusion that the past decade has thoroughly dismantled. But securing critical infrastructure is not merely a question of exclusion.
It is a question of managing dependency, financing transition, defining risk honestly, and preserving cooperation where it remains essential. Without addressing these dimensions, the policy risks becoming symbolically powerful but strategically thin. Europe does not need less openness; it needs structured, conditional openness grounded in realism rather than nostalgia. If this initiative is to succeed, it must evolve from a defensive posture into a coherent strategy — one that acknowledges power, cost, and consequence as inseparable from security.

In the first part of this series, I looked at the DNA’s overall structure and direction, and flagged some of the broader concerns it raises. After spending more time with the text, however, additional issues start to emerge.
In this second instalment, I want to focus on two such issues: interconnection and the risk that the DNA could end up pulling Content Delivery Networks (CDNs) into telecom-style regulation. Once you start examining these questions, a third one inevitably follows: what all of this could mean for Internet Exchange Points (IXPs) — an area where Europe is, somewhat paradoxically, a global success story. These are not technical footnotes. They go to the heart of how the Internet functions, how the digital single market operates, and whether the EU’s regulatory choices genuinely support competitiveness, innovation, and growth.

1. Interconnection: the Internet’s Quiet Foundation

Interconnection rarely features in political debates, yet it is one of the main reasons the Internet works at all. At its most basic level, interconnection is how independently operated networks exchange traffic with one another. It is what turns thousands of autonomous networks into a single, global Internet.

Historically, interconnection has been governed by commercial agreements, not by regulators. Networks peer or purchase transit based on efficiency, performance, and cost. This decentralised, market-led model has allowed the Internet to scale, remain resilient, and support innovation without central coordination.

For Europe, interconnection is fundamental. It enables cross-border digital services, lowers costs, supports competition between networks, and allows new entrants to reach users without needing permission from incumbents. In practical terms, it is one of the core enablers of the digital single market.

2. How the DNA Risks Re-regulating Interconnection

While the DNA preserves commercial interconnection in formal terms, it materially lowers the threshold for regulatory involvement, creating a pathway — rather than a mandate — for the re-regulation of interconnection over time. The DNA does not explicitly announce a shift toward regulating interconnection.
But regulatory change rarely comes that way. Instead, the proposal expands the role of national regulatory authorities in ensuring “end-to-end connectivity,” promoting “ecosystem cooperation,” and resolving disputes between actors involved in connectivity and traffic exchange. This language should ring a bell. It closely resembles arguments that large telecom operators have made for years to justify regulatory intervention in interconnection — including claims that upstream actors should be compelled to contribute financially to network costs. The risk is that interconnection gradually moves from being a commercial arrangement to a regulated relationship. Once regulators are empowered to intervene beyond narrowly defined cases of market failure, incentives shift:
From a competitiveness standpoint, this can be problematic. The Draghi report on European competitiveness makes clear that Europe already struggles with fragmentation, regulatory friction, and barriers to scale. Re-regulating interconnection would add friction where none is needed and undermine one of the Internet’s most successful governance models.

3. What This Means for Internet Exchange Points (IXPs)

Any serious discussion of interconnection must also consider Internet Exchange Points. IXPs are one of the Internet’s quiet success stories, and Europe hosts two of the largest and most important in the world: DE-CIX in Frankfurt and AMS-IX in the Netherlands.

IXPs are neutral infrastructures that allow networks to exchange traffic efficiently, locally, and at low cost. They reduce latency, improve resilience, and enhance competition. They are also a major reason why Europe has been such an attractive place for CDNs, cloud providers, and global networks to deploy infrastructure.

Europe’s leadership in IXPs did not happen by accident. It is the product of decades of regulatory restraint and legal certainty. IXPs thrive because peering at their facilities is voluntary, decentralised, and commercially negotiated. Changing the interconnection regime — even indirectly — puts this model at risk. If interconnection becomes subject to routine regulatory intervention:
One of the striking omissions in the draft DNA is the lack of recognition of IXPs as strategic digital infrastructure. At a time when the EU speaks about resilience, sovereignty, and competitiveness, failing to protect the conditions that made DE-CIX and AMS-IX global hubs is a serious blind spot.

4. CDNs: Not Named, but Very Much in the Crosshairs

A second issue that becomes apparent on closer reading of the DNA is the position of Content Delivery Networks. Formally, the proposal insists that it does not regulate content or cloud services. CDNs are not explicitly named as regulated entities. But regulatory scope is often shaped less by what is named and more by definitions, authorisation regimes, and dispute mechanisms.

The DNA repeatedly emphasises the convergence of telecom networks, cloud, and edge computing into a broader “digital networks” ecosystem. It also reshapes the general authorisation framework and strengthens dispute resolution mechanisms between “undertakings” involved in connectivity and interconnection. This creates a real risk that CDNs — particularly those with infrastructure deployed inside Member States — could be treated as falling within telecom-style regulation. Not because the DNA clearly mandates it, but because it provides regulators with the conceptual and legal tools to argue that it should.

5. Why This Matters for Europe’s Digital Future

Pulling interconnection and CDNs into telecom-style regulation would have predictable consequences:
This sits uneasily with Europe’s ambition to attract talent, lead in AI and cloud, and strengthen its strategic autonomy. An open, well-functioning Internet is not a side issue; it is a prerequisite for innovation, economic growth, and security.

6. What Needs to Change

If the DNA is to support Europe’s digital ambitions rather than undermine them, several adjustments are essential:
Closing Note

This piece is part of an ongoing series examining the draft Digital Networks Act as it moves through the EU legislative process. Given the scope and complexity of the proposal, no single article can capture all of its implications. Subsequent instalments will look more closely at the role of national regulators, the risk of market fragmentation, and whether the DNA genuinely aligns with Europe’s competitiveness and security objectives.

The DNA is ambitious. But ambition alone is not enough. Interconnection, CDNs, and IXPs are not peripheral concerns — they are central to how the Internet works and to whether Europe can compete in the next phase of the digital economy. Getting these issues wrong would not just be a regulatory mistake; it would be a strategic one.

Executive Summary
The leaked draft of the proposed Digital Networks Act (DNA) marks a significant shift in EU electronic communications policy. While it formally preserves core principles like net neutrality, competition, and consumer protection, it weakens their practical enforceability. By reframing interconnection, traffic management, and ecosystem relations as primarily commercial matters, the draft reduces the role of public law safeguards and regulatory oversight.

The most profound change is in the way the DNA treats interconnection. Technically, the Internet’s scalability, neutrality, and resilience rely on an interconnection model based on settlement-free peering and competitively priced transit, where capacity expansions respond to predictable traffic growth rather than bilateral negotiations. Congestion is treated as a network failure to be remedied through timely upgrades of ports, backhaul, and routing—not as an economic imbalance. This model ensures that quality of service is determined by network design rather than commercial affiliation, preserves the end-to-end principle, and allows new services to reach users without prior negotiation.

BEREC has repeatedly found that EU interconnection markets function effectively, with traffic asymmetries reflecting normal demand patterns, and that there is no evidence of systemic congestion or market failure warranting traffic-based contributions or regulatory restructuring.

The draft DNA departs from this evidence-based framework by treating interconnection congestion as a commercial dispute resolved mainly through bilateral negotiation or alternative dispute resolution. This enables operators to delay or condition capacity upgrades to strengthen bargaining positions, effectively monetising quality of service.
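To make the contrast concrete, the upgrade-on-growth practice described above can be sketched as a small capacity-planning routine: a port upgrade is ordered whenever forecast peak utilisation will cross a planning threshold within the procurement lead time. The 70% threshold, the 5% monthly growth rate, and the three-month lead time below are illustrative assumptions for this sketch only, not figures taken from the DNA or from BEREC.

```python
# Sketch of threshold-driven interconnection capacity planning (illustrative
# values throughout): upgrades respond mechanically to forecast traffic
# growth, rather than being withheld as a bargaining chip.

def months_until_upgrade(peak_util: float, monthly_growth: float,
                         threshold: float = 0.70) -> int:
    """Months until forecast peak utilisation crosses the planning threshold."""
    months = 0
    while peak_util < threshold:
        peak_util *= 1 + monthly_growth  # compound traffic growth
        months += 1
    return months

def plan_upgrade(peak_util: float, monthly_growth: float,
                 lead_time_months: int = 3) -> str:
    """Order a port upgrade early enough for it to land before congestion."""
    m = months_until_upgrade(peak_util, monthly_growth)
    if m <= lead_time_months:
        return "order upgrade now"
    return f"re-check in {m - lead_time_months} month(s)"

# A port at 50% peak utilisation, with traffic growing 5% per month:
print(plan_upgrade(0.50, 0.05))  # → re-check in 4 month(s)
```

The point of the sketch is that under the settlement-free model the upgrade decision is predictable and driven by measured demand; the draft DNA's bilateral-negotiation framing removes exactly that predictability.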
While nominally maintaining access-level net neutrality, the approach permits de facto discrimination at the interconnection layer—functionally equivalent to paid prioritisation but outside the safeguards of the EU’s Open Internet Regulation.

Politically, the DNA shifts from a rights-based, competition-oriented governance model toward an infrastructure-centric regime that favours incumbent telco operators and codifies their narrative on traffic imbalance and investment incentives. This risks fragmenting the Digital Single Market, raising barriers for smaller and European service providers, weakening regulatory oversight, and contradicting BEREC’s warnings that monetising interconnection undermines openness, competition, and long-term sustainability.

In summary, the draft DNA departs from empirical evidence and regulatory guidance, aligns with incumbent telco lobbying positions, and underestimates the tangible risks to the open Internet, competition, innovation, and consumer welfare. If adopted without substantial revision, it could distort the European Internet ecosystem, accelerate market concentration, weaken practical net neutrality, and threaten the Digital Single Market.

1. Open Internet and Net Neutrality: Formal Continuity, Substantive Regression

1.1 The DNA’s approach

The DNA formally integrates the Open Internet Regulation (EU) 2015/2120 into a broader regulatory framework, explicitly stating that it does not alter the “fundamental principles” of equal and non-discriminatory treatment of traffic. However, the draft simultaneously:
1.2 Comparison with BEREC conclusions

BEREC has consistently concluded that:
The DNA diverges sharply from these findings by implicitly treating traffic asymmetry and interconnection disputes as systemic problems requiring regulatory reframing.

1.3 Policy risk

By relocating critical neutrality issues to the interconnection layer and framing them as commercial disputes, the DNA creates a credible pathway for paid prioritisation and de facto discrimination without formally breaching access-level neutrality rules. This undermines the effectiveness of net neutrality while preserving its appearance.

2. “Fair Share” and Network Contributions: De Facto Adoption Without Accountability

2.1 Embedded assumptions

Although the DNA avoids explicit references to “fair share” or mandatory network contributions from content and application providers (CAPs), it embeds the underlying logic by:
2.2 BEREC’s position

BEREC has repeatedly found that:
The DNA disregards these conclusions without presenting new empirical evidence.

2.3 Policy risk

The result is a stealth policy shift that enables outcomes previously rejected by EU institutions, without the transparency, safeguards, or democratic scrutiny that an explicit “fair share” proposal would require.

3. Interconnection and Access: From Open Architecture to Negotiated Bottlenecks

3.1 Structural change

The DNA reframes interconnection as:
This marks a departure from the long-standing European approach that treated interconnection as a foundational element of end-to-end connectivity and market openness.

3.2 BEREC comparison

BEREC has emphasised that:
3.3 Policy risk

The draft DNA risks dismantling the invisible regulatory scaffolding that keeps interconnection competitive, replacing it with negotiated bottlenecks that entrench power, fragment the Single Market, and are extremely difficult to unwind.

4. Alternative Dispute Resolution (ADR): Privatising Regulatory Outcomes

4.1 The role of ADR in the DNA

The draft DNA strongly promotes ADR for both consumer disputes and interconnection conflicts between undertakings. While ADR can be useful for individual disputes, its expanded role raises concerns when applied to systemic market issues.

4.2 Structural imbalance

ADR mechanisms:
4.3 Policy risk

By substituting regulatory oversight with negotiated outcomes, the DNA weakens the ability of NRAs and BEREC to address structural competition and neutrality issues, effectively privatising regulatory enforcement.

5. Competition and Market Structure: Incumbent Bias and Consolidation

5.1 Implicit policy choices

The DNA repeatedly prioritises:
Competition is treated as instrumental rather than as a core policy objective.

5.2 Impact on smaller operators and innovators

Smaller ISPs, alternative networks, and new market entrants:
This outcome is inconsistent with EU competition policy and digital innovation goals.

5.3 Policy risk

The DNA hardcodes an incumbent-centric market structure by subordinating competition to scale and “investment certainty,” accelerating consolidation, weakening smaller operators and innovators, and ultimately undermining the EU’s own competition and digital innovation objectives.

6. Consumer Protection: Indirect Harm, Limited Remedies

6.1 The missing consumer perspective

While the DNA includes extensive transparency obligations, it fails to address:
6.2 BEREC alignment

BEREC has consistently stressed that consumer harm in connectivity markets is often indirect and systemic, requiring proactive regulatory safeguards rather than reliance on transparency and switching alone.

6.3 Policy risk

The DNA normalises a shift from citizen-centred network governance to infrastructure-centred bargaining, redefining connectivity as a private commercial outcome rather than a public interest service—thereby eroding the EU’s ability to detect, attribute, and correct systemic consumer harm before it becomes structurally embedded.

7. Institutional Balance and Regulatory Capture Concerns

7.1 Shift in governance

The DNA:
7.2 Alignment with incumbent lobbying

The narrative structure and policy assumptions of the DNA closely mirror positions advanced by large telecom operators and industry associations, particularly regarding traffic imbalance, investment incentives, and ecosystem “fairness.” This raises legitimate concerns about regulatory capture and the marginalisation of evidence-based regulatory conclusions.

7.3 Policy risk

The DNA recalibrates EU digital governance away from independent, evidence-based regulation toward a negotiation-driven regime shaped by incumbent preferences, creating a perception—and eventual reality—of regulatory capture that weakens institutional credibility, exposes the EU to legal challenge, and undermines trust in the Union’s capacity to govern critical digital infrastructure in the public interest.

8. Conclusions and Policy Imperatives

8.1 Strategic assessment

The draft Digital Networks Act cannot credibly be framed as a technical simplification or neutral regulatory adjustment. It constitutes a strategic redirection of EU digital policy, moving away from a rights-based, competition-driven model and towards an infrastructure-centric regime in which access, interconnection, and market outcomes are increasingly shaped by private negotiation rather than public rule-setting.

This shift has implications that extend well beyond the telecommunications sector. By prioritising scale, bargaining power, and incumbent sustainability over contestability and openness, the DNA risks weakening the structural conditions that have historically enabled European digital innovation, cross-border growth, and service diversity. In effect, it trades a dynamic, entry-friendly ecosystem for a more static model optimised around large, vertically integrated actors.
8.2 Competitiveness, growth, and the open Internet

Europe’s digital competitiveness has never depended on global platform dominance or network scale alone, but on open connectivity, low barriers to entry, and predictable regulatory safeguards that allow new services, applications, and business models to emerge. The DNA’s reorientation threatens these foundations in three ways:
Rather than strengthening Europe’s digital position, the DNA risks locking in low-growth equilibrium dynamics, where rents are redistributed within the connectivity layer while innovation and value creation migrate elsewhere.

8.3 Policy imperatives

To safeguard long-term competitiveness, growth, and the integrity of the open Internet, the following corrections are essential:
8.4 Closing warning

Absent these corrections, the DNA risks reshaping Europe’s Internet from an open, innovation-enabling infrastructure into a managed network of negotiated access, with adverse consequences for competition, growth, and digital sovereignty. Once embedded, such a shift would be structurally difficult to reverse, diminishing the EU’s capacity to sustain a healthy digital ecosystem and weakening its ability to compete in an increasingly innovation-driven global economy.

Disclaimer: This post is part of an ongoing series analysing the draft Digital Networks Act (DNA). Given the size and complexity of the proposal, each instalment focuses on a specific set of issues that deserve closer scrutiny as the legislative process unfolds.