Europe is undergoing a visible shift in mood. What was once an abstract debate about “strategic autonomy” has turned into a series of concrete political decisions, particularly in the technology sector. Concerns about Europe’s dependence on U.S.-based platforms such as Google, Meta, X, and Microsoft have long been present, but they are now translating into action. France plans to ban public officials from using American videoconferencing tools, replacing them with Visio, a service hosted on infrastructure provided by a French company. German lawmakers are debating alternatives to U.S. data analytics software, while members of the European Parliament are urging a broader move away from American software and hardware. Even in traditionally pro-Atlantic countries such as the Netherlands, political pressure is growing to shield sensitive digital infrastructure from foreign control.

These developments reflect a growing recognition that digital technologies are no longer neutral utilities but strategic assets. As EU tech commissioner Henna Virkkunen recently noted, Europe has realised that dependence on “one country or one company” for critical technologies creates structural vulnerabilities. The impulse to reduce exposure—to de-risk and, where necessary, decouple from foreign technologies—is therefore understandable and, in some cases, justified.

Yet Europe is repeating a familiar error. Rather than undertaking the difficult work of clarifying what digital sovereignty truly entails for its strategic goals, it is relying on piecemeal, defensive fixes. Platforms are replaced, procurement rules tightened, and data flows restricted—all without a coherent framework connecting these moves to a broader vision of power, resilience, and global influence. The outcome is a patchwork that may appear assertive on the surface but is strategically hollow.
This weakness is compounded by a second, more subtle error: the assumption that digital sovereignty should primarily be about “building a stronger European tech ecosystem.” While this mantra is politically appealing, it is also myopic. Framed in isolation, it risks turning digital sovereignty into a project of technological self-containment—one that prioritises European origin over interoperability, and autonomy over collaboration. Digital sovereignty should not mean building technology only for Europe. That approach risks siloing the continent, alienating partners, and reducing Europe’s relevance in shaping global digital systems. Europe may be bruised by recent developments in transatlantic relations, particularly the growing unpredictability of its longest-standing and most trusted partner. But disappointment is not a strategy. Retreating into a purely inward-facing digital ecosystem would be a strategic overcorrection.

History offers a clear warning. In the 1970s and 1980s, Europe repeatedly attempted to achieve technological sovereignty by nurturing national or regional champions in computing and telecommunications. These initiatives were driven by legitimate concerns: dependence on U.S. firms, fear of strategic vulnerability, and a desire to retain industrial value within Europe. Yet many of these projects ultimately failed—not because Europe lacked engineering talent or political ambition, but because autonomy was pursued at the expense of interoperability, openness, and alignment with global standards.

One of the clearest examples is Europe’s fragmented approach to computing in the 1970s. France’s Plan Calcul, launched in 1966, sought to create a sovereign national computer industry capable of competing with IBM. Substantial public investment went into companies such as Bull, with the explicit aim of insulating France from U.S. technological dominance.
While technically sophisticated, these systems were largely incompatible with the rapidly emerging global software and hardware ecosystems dominated by U.S. firms. As computing shifted toward standardised architectures and software portability, European systems struggled to adapt. IBM’s open, scalable ecosystem—combined with its ability to set de facto global standards—ultimately prevailed. Europe did not secure sovereignty; it entrenched dependence.

A similar pattern emerged in telecommunications switching systems. Throughout the 1970s and 1980s, European countries developed proprietary digital switching technologies—such as France’s E10, Germany’s EWSD, and the UK’s System X—often with heavy state backing and limited concern for cross-border compatibility. These systems worked well domestically, but their lack of interoperability hindered integration and export. While Europe eventually succeeded with GSM—a rare case where coordination, openness, and standard-setting aligned—the earlier fragmentation delayed progress and weakened Europe’s position just as global telecommunications markets were taking shape.

The videotex experience offers another cautionary tale. France’s Minitel is often remembered as a technological success, and in many ways it was: widely adopted, user-friendly, and years ahead of its time. But Minitel was also a closed, nationally specific system, built around proprietary standards and tightly controlled by the state. When the open, interoperable architecture of the Internet emerged, Minitel could not adapt. Its success at home became a liability abroad. France lost the opportunity to shape the global digital information ecosystem because it had optimised for domestic control rather than international scalability.

Perhaps the most consequential example is Europe’s early approach to networking standards.
In the 1980s, European governments and institutions strongly backed the OSI (Open Systems Interconnection) model, viewing it as a sovereign, standards-based alternative to the U.S.-developed TCP/IP protocol suite. OSI was theoretically elegant and institutionally endorsed, but it was slow to implement and disconnected from real-world deployment. TCP/IP, by contrast, spread through academic and commercial collaboration, evolving through use rather than central planning. By the time Europe acknowledged TCP/IP’s dominance, the global Internet’s architecture—and governance—had already been shaped elsewhere. Europe became a rule-taker in a system it might once have led.

The common thread across these cases is not failure, but misalignment. European policymakers equated sovereignty with control, and control with closed systems. In doing so, they underestimated the strategic power of openness, interoperability, and early standard-setting. Meanwhile, more collaborative ecosystems—particularly in the United States—allowed markets, developers, and users to coalesce around shared technologies that scaled globally. Those systems became the infrastructure of the digital age, embedding the political, economic, and governance assumptions of their creators.

The lesson for today is stark. Digital sovereignty built around exclusion, substitution, or inward-facing industrial policy may deliver short-term reassurance, but it risks long-term irrelevance. In a networked world, influence flows to those who design the systems others rely on. Europe’s historical experience shows that sovereignty without interoperability leads not to independence, but to dependency—only delayed and often more costly. The lesson is not that Europe should avoid building capacity, but that capacity without openness leads to marginalisation. In a deeply interconnected digital world, sovereignty cannot be achieved through isolation.
It must be built through the ability to choose dependencies, shape standards, and collaborate from a position of strength.

This is where Europe’s current approach falls short. By focusing narrowly on replacing foreign tools with European ones, policymakers risk conflating sovereignty with substitution. Replacing Microsoft Teams with a European videoconferencing platform may address immediate political or legal concerns, but it does little to answer the larger question: how does Europe intend to design digital systems that are resilient, interoperable, and influential beyond its borders?

The security implications of this gap are significant. Digital infrastructure underpins critical state functions, from public administration and energy networks to defence logistics and intelligence cooperation. If each member state defines “trusted” or “sovereign” technology differently, Europe risks fragmenting its own digital foundations. Interoperability suffers, cross-border services weaken, and coordinated responses to cyber incidents or hybrid threats become harder to execute.

Cloud infrastructure illustrates this danger clearly. A proliferation of national or “sovereign” cloud initiatives, absent a common European and international framework, may reduce dependence on specific foreign providers but at the cost of scale, efficiency, and resilience. In crisis scenarios, fragmented systems could slow information sharing and complicate collective defence. Sovereignty that undermines coordination ultimately weakens security.

The same logic applies to artificial intelligence. Regulation can set guardrails, but it cannot substitute for strategic leadership. If Europe focuses primarily on constraining AI within its borders while failing to shape global AI architectures, standards, and governance models, it risks becoming a regulatory island. Influence flows not from restriction alone, but from offering systems that others can adopt, trust, and integrate.
This is where Europe must rethink its approach. Digital sovereignty should be about de-risking critical dependencies while actively building an ecosystem that is open by design, interoperable by default, and attractive to partners. Europe does not need a technology stack that works only for Europeans. It needs one that allows Europe to be self-sufficient where it matters, while leading in collaboration where it counts.

The answer to the current strain in transatlantic relations is not alienation. Nor is it technological autarky. If Europe still believes in cooperation and in shaping global norms rather than retreating from them, then its digital strategy must reflect that belief. That means investing in shared infrastructure, open standards, and governance models that enable cooperation with like-minded countries—and even structured engagement with those that are not.

Until Europe confronts this challenge directly, it will continue to manage digital sovereignty through defensive, piecemeal decisions. It will swap platforms without setting strategy, build walls where bridges are needed, and mistake insulation for influence. Teams may be out, but unless Europe moves beyond substitution and isolation, digital sovereignty will remain more slogan than strategy.