One Ring to Rule Them All: The United Nations Secretary General lays out his vision for a UN-centric Internet governance.
A couple of months ago, I wrote an article about the Global Digital Compact (GDC), the United Nations' new governance initiative for the Internet, and its goals. In that article, I wondered whether the intention of the GDC was “to create a centralized system, where the UN sits at the top.” After reading the Secretary General’s new policy brief, I am convinced that this is the case.
The first thing worth mentioning is that, in the past twenty years of Internet governance discussions, this is the first time, at least to the best of my recollection, that a UN Secretary General has produced such a detailed policy document on the governance of the Internet. (Former UN Secretary General Kofi Annan had shown interest in the Internet, but that was after his tenure as the head of the global intergovernmental body.) So, this policy document, irrespective of where or how it ends up, is quite a big deal considering its source.
For the most part, the document hits the right tones and uses the right language – terms like multistakeholder, open Internet, human rights and collaboration appear throughout and make the case for a new process that will have these features at its core. And, if it were 2005 and Internet governance were still in a nascent stage, this policy brief would make a lot of sense.
The reality, however, is that after all these years of experience in Internet governance, this document offers nothing new to the Internet governance discourse. In the Internet governance microcosmos, the entire idea of “digital cooperation”, upon which the whole GDC process is premised, is as old as the hills and has already been tested. After 2005, the agenda of the World Summit on the Information Society (WSIS) called for enhanced cooperation to help governments reach consensus on international public policy issues; after a series of failed attempts and a rough ride in the UN system, the whole idea of enhanced cooperation was put to bed. Now, it seems to be back, rebranded as “digital cooperation”.
Just like the Secretary General’s “Common Agenda Report”, the policy brief identifies the same seven core issues that the UN should focus on, including connectivity, AI, human rights, Internet governance and data governance, amongst others. But the vision of the Secretary General comes full circle in the section of the document entitled “Implementation, follow up and review”. There, the proposal recommends the creation of “an annual Digital Cooperation Forum” tasked with supporting the implementation of the GDC. The Digital Cooperation Forum will effectively be a new process based on an old idea. And what about the Internet Governance Forum (IGF), you may ask? It will continue to exist but, by the sound of it, it will have a supporting role. “Existing cooperation mechanisms, especially the Internet Governance Forum and WSIS […] would play an important role in supporting implementation, providing issue and sector knowledge, guidance and practical expertise to facilitate dialogue and action on agreed objectives”.
You can almost sense that, should the SG’s proposal become a reality, the IGF will be displaced, an outcome that would be the result of a proposal that lacked consultation with the wider Internet community. It would shift the IGF into a role of implementing a vision that has not been set bottom-up; a vision that appears more bespoke to government needs than to those of the Internet or the multistakeholder model; a vision that will be “initiated and led by Member States with the full participation of all stakeholders”.
I have said in the past that the IGF is not perfect and that it has its own problems and limitations. Still, it is a space that has gained the trust and legitimacy of a diverse set of stakeholders – governments, civil society, businesses, academia, international intergovernmental organizations, NGOs, the technical community – you name it; it is a process that already has buy-in from the majority of the global Internet community. Why demote it, then, instead of trying to fix it?
The answer seems to lie in centralization. If I am reading this right, the role of the Digital Cooperation Forum will be to centralize different multistakeholder processes under its umbrella, ranging from the Christchurch Call to ICANN and the IETF (frankly, I really am looking forward to the day the UN attempts to ‘demand’ things from an organization like the IETF.) “The Digital Cooperation Forum would accommodate existing forums and initiatives in a hub and spoke arrangement and help identify gaps where multistakeholder action is required. Existing forums and initiatives, many of which are listed in Annex 1, would support the translation of Compact objectives into practical action, within their respective areas of expertise. The Digital Cooperation Forum would help promote communication and alignment among them and focus collaboration around the priority areas set out in the Compact. Internet governance objectives and actions, for example, would continue to be supported by the Internet Governance Forum (IGF) and relevant multistakeholder bodies (ICANN and IETF).“
Centralization and top-down processes are tricky when it comes to the Internet as they tend to slow things down and create unnecessary bureaucracies and chokepoints. Being decentralized itself, the Internet tends to respond better to looser and more agile governance arrangements that make room for fast-paced action. The UN system is the antithesis of this. It is slow, cumbersome and bureaucratic. It does not have a good track record of developing processes that have the flexibility to respond to the Internet’s evolving pace. Yet, the proposal really does insist on this solution. “[W]e have yet to put in place a global framework where states and non-state actors participate fully in shaping our shared digital space […]. The UN [..] is the only global entity that can convene and facilitate the collaboration needed”, the document reads.
That is only partly true, though. Indeed, the UN is the only global entity that we have; but convening and facilitating collaboration does not necessarily need to happen within the UN system. We have plenty of global multistakeholder initiatives that do not require UN oversight or intervention. The Christchurch Call is one such initiative, supported by 58 governments and 14 online service providers, with civil society participating in an advisory capacity; the network keeps growing. The Freedom Online Coalition (FOC) is another governmental initiative in which civil society and the private sector participate through an advisory network and working groups. Neither of these processes is, of course, a panacea, but the claim that we have no global frameworks in place limits how we should understand Internet governance and global collaboration.
The more one reads the document, the more it becomes clear that the Secretary General is invested in identifying ways to address some core Internet-related issues, including Internet fragmentation, AI and content moderation. There are legitimate reasons for wanting to do so – governments are already producing regulation in most of these areas, the global Internet is indeed under the strain of fragmentation and AI is currently generating more questions than answers. The question, though, is whether the UN is ultimately the right place for these conversations.
The current state of the world does not seem to fit the Secretary General’s narrative about collaboration. In other words, the idea that governments will all gather and work in good faith for the better of the Internet is not realistic. How is it possible to believe that “human rights” can become “the foundation of an open, safe, secure digital future” when China and other countries with questionable human rights records are participating? How can the UN guarantee the application of human rights when human rights are, in fact, at risk in other processes it hosts? How can the UN assure us that multistakeholder participation will be upheld in a new process with new rules and new required resources?
As the UN ventures to create this whole new process, with New York as the center of operations, participation will become a crucial issue. Civil society, in particular, has historically struggled to participate in Internet governance fora – for the IGF, it often has to do with geography; in the case of the GDC, it will be both geography and, possibly, new rules of procedure. Will the GDC and its proposed Digital Cooperation Forum allow for the same levels of participation as the IGF does? Will this new forum rotate around the world, like the IGF, or will it be held exclusively at the New York headquarters? If the latter, what about visa issues? Let’s not forget that it was only a few years ago that people from certain countries were banned from entering the US. Is it a smart move to centralize Internet governance discussions in New York? After so many years of experience, these questions should not exist; yet here we are, asking the same questions we were asking back in 2005.
In the end, the GDC feels top-down, somewhat disconnected from what the Internet needs or how the Internet community operates. If I were reading the SG’s policy document without knowing anything about what has transpired in the past twenty years, I would think that this is potentially the way forward. But a lot has happened in the past twenty years and, if there is one thing we have all learned, it is that forced collaboration leads to tension and weakened structures.
The GDC may be an opportunity to address some things we have missed in the past twenty years; or it may not. If we really want to find out, however, the conversation must have a different starting point: reaffirming, upholding and strengthening existing multistakeholder processes and rules of procedure, rather than creating new ones.
The despair over “fair share”.
The other day, I came across a quote from Wiktoria Wójcik, the cofounder of InStreamly, a Polish startup: “I don't know what will happen. I feel like I have no power over it. I have no real say in this discussion. And the biggest issue is that it's so surprising because nobody's even talking about it”. She was referring to the "fair share" conversation in Europe and it made me cringe.
I have been following this debate in Europe since it started a year ago, and my views have been clear. But the despair in Wiktoria’s statement hit me. No European citizen should feel this helpless. The European institutions were created to listen to the people of Europe; their entire existence is based on that very responsibility.
Yet, here we are. Wiktoria is a member of a group of Polish startup companies; they are the latest group to voice their concerns about the European Commission’s proposal on network fees. (The group is partly funded by Google and Microsoft, according to the transparency register.) Whoever funds them, the fact is that Europe needs a robust startup ecosystem to become globally competitive and relevant; Europe should be fostering a predictable environment for startups, not one that causes despair. Regulation is necessary, but regulation for the sake of regulation breeds unpredictability.
The European Commission’s apathy towards such concerns has been nothing short of frustrating. This whole thing is nothing more than the result of effective lobbying. (At this point, you will rightly point out that everybody lobbies; big tech does it; big telcos do it; everyone is doing it.) But let’s make something clear about this specific situation: if there is one issue where lobbying can really screw up the global Internet, this is it!
I fully appreciate that, when we hear such grandiose statements about the Internet, most of us think they are exaggerations. Frankly, most of them are; but not this one. If you allow access networks even more power than they currently have, if you recognize a termination monopoly right over the way content is delivered to users, if you unnecessarily intervene in the functioning arrangements for how networks interconnect, then you are effectively changing the topology of the Internet. You are essentially saying that there is a small number of networks that are more important than all the other networks. You are granting these networks the keys to the Internet and telling them that, from now on, they get to decide on everything from innovation to user experience. This is not the Internet. This is the telecoms network reimagined.
The question, then, is what the Commission will do. Next Friday the public comment period ends and the comments will be in. The Commission has decided to outsource their analysis and synthesis to an external consultancy firm, though it remains unknown which firm has been awarded the tender. Soon, though, we will find out where European stakeholders stand on this idea of a “fair share”.
Equally interesting will be the European Commission’s next move after the responses are in. If the consensus is that this whole policy is unnecessary, the Commission will have a hard time explaining its unedifying support for the EU’s big telcos. If the comments somehow end up being in favor of the proposal, the Commission will have the extremely hard task of managing the considerable resistance from European stakeholders. In either case, because of the way the Commission has treated this issue over the past year, it has backed itself into a corner.
The same seems to be true of the EU’s big telcos, given how half-baked their proposal is. For the past year, telcos have provided no information about how they envision this financial relationship being established – for how long, under what criteria, with what accountability provisions, checks and balances, etc. The proposal was limited to just: we are only interested in direct payments. If things don’t turn out the way they hope, it will require a lot of trust-building with a wide range of stakeholders. This will not be easy.
The bottom line is that we are all losers no matter how things turn out. This whole process has created and fostered an antagonistic environment, while the Internet, and its infrastructure, are all about collaboration. Now that I think about it, despair might be more fitting to describe this whole process.
We have not seen this movie before: fair share is not a remake, but it is the sequel!
The other day, a friend of mine pointed me to a post written by the Director of EU Affairs for Telefonica. The post is entitled “You have not seen this movie before: fair share is not a remake”. The title is quite catchy, yet predictable and not surprising. Telefonica is one of the big telecommunications actors in Europe pushing hard for the fair share debate. Frankly, I was about to push this to the bottom of my “to-read” pile, but the subtitle really caught my attention.
“The fair share contribution of large content platforms to network sustainability could be the catalyst for the transformation of telecoms operators in Europe from telcos to techcos towards achieving Europe's most critical goals.”
I find it very interesting that the transformation of the telecommunications industry towards a more technology-oriented, Internet-adapted sector is dependent on funding produced by the technology companies. I also find it rather fascinating that, after decades of trying to push telecommunication companies towards accepting that the Internet is here to stay, we are now in a place where telcos want to become “techcos”. (By the way, “techcos” sounds to me like a snackable treat but, hey, I should not be the one to pass judgement on inventing names – I am pretty terrible at it.)
So, I started reading it. And, here’s what I realized: it is true that the fair share debate is not a remake; but it is definitely a sequel. And, just like any sequel, it is tired and not very innovative.
Before going into the details of the post, one point deserves attention: for the past year, European telcos have been inside their own echo chamber, reusing, rehashing and repeating the same arguments. In the post, any arguments presented are referenced back to arguments made by Telefonica. There is no attempt to point to any outside sources that could justify any of the claims being made. Personally, this rubs me the wrong way, especially considering that the arguments about why fair share is bad policy for Europe are coming from so many different sources. You can find them here and here and here and here and here and here and here and here…
So, let’s see what the article says:
“A range of stakeholders try indeed to portray fair share as a debate held and closed ten years ago in Dubai during the ITU WCIT Summit. Needless to bring it back because nothing in the fundamentals of how the Internet works has changed during the last decade and we need to preserve it this way to keep it as the generative engine for openness, innovation, and prosperity it has always been.
Good enough as an argument except for the reality that Internet development during the last decade has taken us nowhere near to 2012 with an incredibly unbalanced digital ecosystem. A few numbers of almighty tech giants recap most of the value, riding on unprecedented levels of concentration, self-preferencing practices and business models based on the massive collection of data from end users for profiling and advertising purposes.”
No matter how much telcos try to spin it, the fact is that the debate is fundamentally the same. Back in 2013, EU telcos suggested “that the revised ITRs should acknowledge the challenges of the new Internet economy and the principles that fair compensation is received for carried traffic and operators’ revenues should not be disconnected from the investment needs caused by rapid Internet traffic growth.” Fair share and fair compensation are practically the same thing.
At the same time, I agree with the author about the concentration and consolidation that the market has been experiencing. There is no doubt that a few players are dominant in the services they provide and that user data is often being abused and exploited. What I disagree with is the idea that fair share contributions will fix this concentration and abuse of power (and I have not seen any convincing argument from telcos suggesting otherwise). The European Union has passed two landmark pieces of legislation to deal with these issues: the General Data Protection Regulation (GDPR) and the Digital Markets Act are both designed to deal with the abuse of user data and issues of competition. For sure, they are not perfect and they cannot address all the issues. But they are getting there, because there are people who are testing their limits and are willing to work with European institutions to make them better. So, how exactly would paying off big telcos help advance either of these two statutes? If the problem is concentration and abuse of power, why are telcos focusing on an irrelevant policy objective instead of working with other stakeholders to strengthen the existing statutes?
More importantly, as I have suggested elsewhere, a lot has changed in the Internet in the past decade, but also nothing has changed. What has changed is the set of actors participating and an ecosystem that is growing increasingly complex. This complexity has meant that more innovation and investment had to take place to ensure that the Internet continues to be resilient, affordable and efficient. Content Delivery Networks (CDNs), data centers and cloud infrastructure are only a few examples of the innovation and investment that have taken place due to the changing nature of the Internet ecosystem.
What has not changed, however, are the underlying principles, norms and ethos of the Internet. As Vint Cerf has accurately written: “The Internet was designed to allow computers of all kinds to exchange traffic with each other over an unbounded collection of inter-connected networks. The direction of the traffic was irrelevant. The point was to allow traffic to flow in either direction so the network had to support both. Not all networks of the Internet are commercial services. Many are community based, non-profit or government operated. For those that are for-profit, their typical business model is to charge a flat fee for connection to the Internet regardless of the direction of traffic flow. The cost for implementing the service is a function of capacity, not the amount of traffic sent. The connection fee is commonly higher for higher speed capacity. There is no charge for the amount of traffic sent or received, only for the capacity, because the cost is the same whether the capacity is used or not.” This idea is as relevant for the Internet today as it was in 2013 and even before that.
“All while telecom operators in particular in Europe struggle against very dire financial and regulatory conditions in a decade long downward trend on revenues, return on capital expenditure and stock market value that threatens their future viability.”
This is not the first time I have heard telcos say that their financial situation is 'dire'. I am really not sure what telcos mean by 'dire', or how dire 'dire' actually is. There are others who may have more accurate data, but here's what Telefonica was stating in November 2022:
"Telefónica increases revenues across all markets and earns 706 million euros in the first quarter". And here are some other interesting data that point to billions of euros in revenue. 10 billion euros for only the third quarter of 2022 does not sound dire to me. As I have mentioned before, if the idea is to make EU telcos as big as big tech then this is a different conversation all together. But, in the end, it is just a bad policy objective.
To me, though, the whole essence of the post is condensed in the next few sentences.
“Fair share comes anew also in a very different policy landscape. With a European Commission that is moving from naive into geopolitical including clear aspirations on strategic autonomy and recognizing the need to develop its own capabilities in the digital space, where the gap with the US and China is profound and growing.
In this context fair share appears as a fresh multipurpose tool that could help to address a number of the most critical policy goals of the EU.”
Frankly, I am pretty tired of this fearmongering and of this ill-defined, abstract notion of digital sovereignty premised on the idea that Europe can go it alone. This is hyperbole because, in fact, Europe cannot go it alone. No one can and, if this is the policy direction that Europe wants to take, then we all need to be prepared for a fragmented European Internet.
I never considered Europe naïve and I am surprised to read that telcos did, up until now. For the past decades, Europe has been strategic in ensuring that non-European companies entered the single market and complied with its rules. It has managed to create the conditions for European users to get connected, and to do so in an affordable manner. It has also paved the path for all the regulatory activity we have seen in the past few years. Calling Europe naïve negates all these efforts.
It is true that the geopolitical environment is very different today. It is true that countries have the tendency to look inwards and to retreat into protectionism as an easy solution. But, in the long term, none of this is going to ensure a globally competitive Europe; none of this is going to ensure a European Union where users enjoy affordable and reliable access to the Internet. The European Commission knows that; EU telcos know that.
One final point: I am surprised to see that none of the significant consumer protection issues are addressed in this blog post. They are not addressed as part of the European Commission’s recent questionnaire either. And it is disappointing. I am a European consumer; I am a European user. And, frankly, I am starting to get seriously concerned about what sort of Internet experience I will have in Europe should the fair share plan proceed.
After almost a year of speculation and discussions based, primarily, on rumours, the long-awaited questionnaire on the future of connectivity in Europe got leaked yesterday. Bloomberg reported it first, confirming that the controversial ‘fair share’ proposal had made it into the final draft. A leaked version of the questionnaire also made it into my inbox. Here’s a quick takeaway as I continue to absorb the minutiae of this whole initiative.
The European Commission’s Questionnaire – at least in its current form – is divided into four sections. (Until the European Commission officially launches its consultation, we should operate on the assumption that the questions and format might change).
Rather predictably, the European Commission insists on having a conversation about “fair share”. In the Questionnaire, the European Commission acknowledges that there is a great amount of division between stakeholders regarding the need for such a scheme, though admittedly it fails to reflect the difference in volume between these views (the majority have come out against any “fair share” proposition). To this end, instead of killing this proposal, the European Commission proceeds to present a problem that never actually existed.
Seeking to appease experts as well as civil society and other organisations that expressed concerns over the future of network neutrality protections in Europe, the questionnaire is quick to assure that the open and neutral Internet will be preserved according to the Open Internet Access Regulation (EU) 2015/2120; it fails, however, to point out how this will be done. The Questionnaire is silent on the most probable scenario: should a “fair share” scheme go forward, how will it deal with a potentially unruly Content and Application Provider (CAP)? What means beyond blocking or degrading the CAP’s traffic would a telecommunications provider have at their disposal?
From the Questionnaire it becomes clear that 5G development, the metaverse and cloud computing constitute core components of ensuring a more competitive Europe in the years to come. The European Commission is right to put emphasis on these technologies, especially considering the current geopolitical shifts and the way 5G, in particular, has become central to the way Europe has reordered and reorganised its relationships with international partners. To this end, the European Commission rightly underscores the interdependencies in its critical infrastructure and the potential vulnerabilities that these create for existing and future technologies. All this is quite fair and clear.
What is unclear, however, is how a ‘fair share’ contribution would alleviate any of these dependencies. On the contrary, one would think that it could further exacerbate them. Moreover, the way the Questionnaire seeks to address security concerns, and the nexus it seeks to create between them and infrastructure, is also rather confusing. Is the European Commission suggesting that, without proper financial support, all these new systems would end up being insecure? Is the European Commission implying that the security of European infrastructure is dependent on foreign investment? I would think not. The language in the Questionnaire, however, seems to suggest that Europe’s security will be – to a large extent – contingent on the availability of financial support. “In light of this, additional needs and increased cost for strengthening the cybersecurity, and the resilience and redundancy of networks might be triggered”.
Moreover, and in a somewhat disappointing way, the Questionnaire fails to reflect the significant consumer protection issues that were previously raised by various bodies, especially the European Consumer Organization (BEUC). In its preliminary assessment back in September 2022, BEUC mentioned that “establishing measures” emanating from a fair share rationale “would range from a potential distortion of competition on the telecom market, negatively impacting the diversity of products, prices and performance, to the potential impacts on net neutrality, which could undermine the open and free access to Internet as consumers know it today.” The European Commission should have been careful to point to these concerns and make them part of its public consultation.
Although there is a whole section dedicated to consumer fairness, it provides more of an account of the successes Europe has achieved in creating an affordable environment for consumers over the years than an exploration of how consumers may be impacted – negatively or positively – by any change in the interconnection market. Unfortunately, the whole consumer section appears to be a build-up to a worrying warning: that European consumers should be prepared to bear some costs from a change in Europe’s consumer market. “The current economic conjecture, the rising inflation and cost of energy for the businesses, and some of the technological and market developments [...] are likely to lead upwards pressure on costs for consumers at least in the short term”.
It is true that Europe is suffering from both rising inflation and energy costs, and that both could end up affecting consumers. It therefore raises the question of why the Commission would want to add more financial pressure on consumers by seeking to upend the whole interconnection market through a “fair share” proposal. And there is no question that this is what will happen, given the experience of the South Korean market.
Finally, the most obvious question missing from the European Commission’s questionnaire, and one that many others have asked, is why it is so keen to intervene in the absence of a market failure. Is it the cost of regulation? Is it the strict merger policy in the EU telecoms area (for national mergers, not cross-border mergers)? Is it a lack of innovation-mindedness? Or is there an actual market failure problem? I don’t think that anyone would be against regulation that aims to address market failures, but we should question any attempt that seeks to create the conditions for such regulation to exist.
It’s easy to see how the European Commission has opted to address this highly controversial issue. Right at the start of the Questionnaire, it sets the tone for how one should read it. “The growing requirements for strategic autonomy, security and sovereignty regarding key enabling technologies in the electronic communications area will also have a significant impact on future developments”. The European Commission continues to ride on the notion of digital sovereignty – a notion that is undefined, vague and potentially problematic for the global and open Internet.
For anyone following European digital policy, this language is not unfamiliar; after all, digital sovereignty has defined much of Europe’s digital agenda in recent years. The idea that Europe should be able to determine its own technological future, act autonomously and cut itself off from foreign dependencies has been a key driver for most of its major legislative proposals; to an extent, this idea has also legitimised the European Commission’s hard stance on many issues, from content moderation to competition and Artificial Intelligence (AI). There was no reason to think that discussions on infrastructure and the future of connectivity would be any different. However, it should be noted that Europe’s digital sovereignty approach has also raised significant concerns about its contribution to Internet fragmentation.
At first read, the Questionnaire seems messy and sounds more like an industrial policy document than a real attempt at understanding how investment and innovation should take place within Europe. It is disappointing to see the European Commission condensing so many different and important things into one questionnaire in such a politically loaded environment. Discussions about the future of Europe’s infrastructure and connectivity are crucial and should take place; however, they must be detached from any attempt to pit actors against one another, especially when all of them contribute in different ways to the Internet’s value chain.
But, it is about Network Neutrality!
To say that Europe is about to undertake its biggest bet on the open internet is an understatement. And, unfortunately, it is a bet it will lose.
For anyone who has been living under a rock all this time: back in March, Breton hinted that the Commission was working on a proposal requiring technology companies to contribute financially to telcos’ infrastructure costs. Since then, a series of reports prepared on behalf of EU telcos have suggested that this “fair contribution” take the form of the “Sending-Party-Network-Pays” (SPNP) model, which has been a fixture of their lobbying efforts since 2012. It is based on a simple premise: hand termination monopoly power back to European telco champions.
Reaction to this has been swift from across the board: civil society, European regulators, mobile virtual network operators, technology companies and more than 30 experts and academics have objected, stating that there is no clear policy objective that justifies reopening this particular debate. Yet the Commission persists, and it appears determined to proceed with some form of proposal.
In their response to a letter sent by more than 50 MEPs questioning the very reasons for reopening the network neutrality debate, Commissioners Vestager and Breton responded that this is not their intention: “the Commission is strongly committed to protecting a neutral and open internet, where content, services, and applications are not unjustifiably blocked or degraded in Europe, as well as on the global stage”. The SPNP model cannot ensure this, and the Commission can insist as much as it wants, but the fact of the matter is that once you recognize a termination monopoly right for the largest telco providers in Europe, all bets are off. As usual, the Commission wants it both ways, but that is impossible. It is as impossible as its position on CSAM and encryption: we strongly believe in encryption, but we ask companies to break it when we need it. This is not how things work.
The letter goes on to clarify that “any interpretation of the Commission’s work suggesting that we might be in the process of reversing this fundamental principle is completely misguided”. The Commission, though, provides zero insight into how this whole thing would then work. The only model on the table is the SPNP, and we know how this story ends. The model will seek to make the internet look more like a telephone network, the internet will resist, further regulation will be required, the market will become anticompetitive and Europe will be left behind. All the while, European users and consumers will suffer.
There is a sentence in the letter stating that “the issue of the digital players’ contribution to network deployment does not need to be approached […] from the perspective of network neutrality”. Again, though, no information is provided. How is it possible to talk about changing the way interconnection works in the internet and not talk about network neutrality? How is it possible to consider allowing commercial transactions to replace years of peering arrangements in the internet and not talk about network neutrality? Quite simply, you cannot.
I don’t believe anyone is against having a conversation about the way the infrastructure ecosystem can be enhanced and strengthened. This conversation though cannot start with regulation; it cannot start with a model that has already been rejected. The only way to have a rational conversation about any of this is to go back to the drawing board, define the problem, lay out the framework, conduct a cost-benefit analysis and then proceed.
What saddens me is that, once again, the Commission fails to have an informed discussion and to be transparent about any of this. To someone who was involved in the same discussions back in 2012 and then in 2015, this whole thing does not make sense. Yet, it is where we are. A Commission committed to satisfying the wishes of a handful of legacy telecom operators at my, and your, expense. Why else are they planning to open the consultation on December 21st – four days before Christmas?
Here's the letter.
29 Internet Experts and Academics send a Letter to the Commission urging it to abandon the “Sending-Party-Network-Pays” proposal.
Dear Commissioner Vestager,
Dear Commissioner Breton,
The undersigned experts write to express our concern and to urge the Commission to abandon its plans to require content providers to pay telecommunication providers an "infrastructure fee", often referred to as the "Sending-Party-Network-Pays" model. Such fees violate the net neutrality rules enshrined in the 2015 Open Internet Regulation and are explicitly prohibited by every strong net neutrality regime in the world. The current use of the transit and peering model allows for competitive markets. Adopting the “Sending-Party-Network-Pays” model will upend decades of European Union policy and harm Europe’s digital agenda rather than promoting its sound commitment to openness.
Proposals to charge content providers for access to broadband subscribers are not new and have consistently been rejected as harmful. In 2012, large European telecommunications operators tried to push a similar proposal at the International Telecommunication Union (ITU). This proposal was rejected by governments, experts, businesses, and civil society. Nothing has changed in the past decade that would warrant revisiting such a policy. However, large telecommunication providers have continued to lobby for a “Sending-Party-Network-Pays” proposal, instead of investing in innovation and new services.
The ideas behind this proposal represent a fundamental misunderstanding of the structure of the internet. First, just like prior proposals, this proposal is based on the mistaken assumption that content providers are causing traffic on broadband networks. Broadband users are requesting this traffic, and they already pay their broadband providers to deliver this traffic to them. Forcing content providers to pay broadband providers for delivering this traffic to their subscribers just results in broadband providers getting paid twice for the same service.
Second, the internet consists of more than the broadband networks that connect users to the rest of the internet. Universities, member-state governments, multinationals and even the European Commission all operate their own networks, independently of incumbent telecom operators. The desired rule change would break the competitive market for transit and peering. Indeed, every ISP in the EU could demand that the European Commission pay it each time the regulatory department reads the EU Telecom Code.
While broadband networks are an important part of the internet’s value chain, so are content providers whose services drive Europeans’ demand for broadband access. Broadband providers receive substantial benefits at no charge from content providers’ efforts to create content that broadband subscribers want. Universities, public broadcasters and governments are content providers too. All these actors already invest heavily in internet infrastructure. They pay internet service providers to transport their traffic to broadband access networks and pay content delivery networks to store their content close to the end users; many content providers even perform these services themselves.
In addition, such access fees violate the Open Internet Regulation. In 2015, Europe granted end users the right to be “free to access and distribute information and content, use and provide applications and services of their choice.” The Regulation requires broadband providers to treat data in a non-discriminatory fashion, no matter what it contains, which application transmits the data, where it originates and where it ends. Charging some content providers for access to the network but not others violates the spirit and the letter of the Open Internet Regulation.
Lastly, charging access fees is unlikely to solve the broadband deployment problem. History and economic theory clearly show that similar fees do not increase investment in infrastructure from telecoms. In addition, there are bigger barriers to deployment than lack of funding, such as permitting and construction capacity. The war in Ukraine showed that the country's decision to move from a highly centralised Internet to a decentralised one, with interconnection points in 19 cities, made it much more resilient to DDoS attacks, fiber cuts and the bombing of datacenters. The proposal for "Sending-Party-Network-Pays" would make Europe more vulnerable to attacks.
We ask that the Commission not move forward with a proposal that would drastically undermine Net Neutrality in Europe and the world. At this moment, Europe is experiencing positive regulatory momentum and has become a global regulatory force. Europe should continue to lead by example; adopting this proposal would have severe consequences both here and worldwide.
Dr. Konstantinos Komaitis, Internet policy expert
Dr. Luca Belli, Professor at FGV Law School, former Council of Europe Net Neutrality Expert
Dr. Niels ten Oever, Postdoctoral Researcher at University of Amsterdam
Dr. Francesca Musiani, Associate Research Professor at CNRS, France, Deputy Director, Centre for Internet and Society
Dr. Joan Barata, Cyber Policy Center, Stanford
Raghav Mendiratta, Future of Free Speech Project, Justitia and Columbia University, New York
Dr. Farzaneh Badiei, Digital Medusa
Dr. Tito Rendas, Executive Dean, Católica Global School of Law
Nikhil Pahwa, co-founder, SaveTheInternet.in
Thomas Lohninger, co-founder, SaveTheInternet.eu
Prateek Waghre, Policy Director, Internet Freedom Foundation
Dr. Yong Liu, Associate Research Fellow at Hebei Academy of Social Sciences
Prof. Maria Michalis, Deputy Director of the Communication and Media Research Institute (CAMRI), University of Westminster, London
Dr. Nuno Garcia, Chair of the Executive Committee of the Law Enforcement, Public Safety and National Security Laboratory - BSAFE Lab, University of Beira Interior, Covilhã, Portugal
Alec Muffett, Internet security consultant
Prof. Ross Anderson, Professor of Security Engineering, Cambridge University and Edinburgh University
Bogomil Shopov, civil initiative Electronic Frontier Bulgaria
Marvin Cheung, Co-Director, Center for Global Agenda (CGA) at Unbuilt Labs
Jonathan Care, Board Advisor, Lionfish Tech Advisors
Karl Bode, Telecom analyst, writer, and editor
Christian de Larrinaga, Internet public service, Investor and founder.
Kyung Sin Park, Professor, Korea University Law School, Director, Open Net
Dr. Ian Brown, Visiting Professor at FGV Law School
Bogdan Manolea, Executive Director, ApTI, Romania
Linnar Viik, co-founder, e-Governance Academy, Tartu University, Estonia
Dr. Niels Ole Finnemann, professor emeritus, Department of Communications, University of Copenhagen, Denmark
Bill Woodcock, Executive Director, Packet Clearing House
Moez Chakchouk, Director of Policy and Government Affairs, Packet Clearing House. Former ADG/CI UNESCO & former Minister in Tunisia
Desiree Miloshevic, Internet Governance Expert
Dr. Mikołaj Barczentewicz, Senior Lecturer in Law and Research Director of the Surrey Law and Technology Hub, University of Surrey
Prof. Wolfgang Kleinwächter, European Summer School on Internet Governance, former ICANN Board Member.
Thomas F. Ruddy, Retired Researcher, ETH Network
Jaromir Novak, regulatory expert & former chairman of the Council of the Czech Telecommunication Office
Note: if you are interested in signing on to the letter please email me at : konstantinos at komaitis dot org
The European Commission's response to the letter is attached.
On December 26, 1995, the Turkish cargo vessel “Figen Akat” ran aground on the easternmost of the two islets of Imia, just seven kilometers off the coast of Bodrum, Turkey. When a Greek tugboat approached to help, the Turkish captain insisted that the tug was in his country’s territorial waters. After the disabled vessel made it back to its harbor, the Greek skipper put forward a salvage claim, which would signal the beginning of one of the most heated east Mediterranean crises in recent history. On December 27, 1995, the Turkish government declared the islets Turkish territory, something Athens denied, citing international law. In January 1996, two Greeks sailed off the nearby island of Kalymnos and raised the Greek flag on Imia. The Turks reciprocated. Neither country was backing down, and the military was now being deployed on both sides of the Greek-Turkish border. By the time a helicopter carrying three Greek soldiers crashed, the issue had become a NATO and European emergency.
At the time, there were rumors that we were at the brink of war. In remote places, like the island of Lesbos, things were particularly alarming. (Lesbos is the third largest island in Greece, and its north coast, between Skala Sikamineas and Molyvos, is at its closest only 12 km from Turkey). People were panic-buying food and soon supermarket shelves were empty; others were fleeing the island (my older sister was studying in Thessaloniki at the time, and she called in tears asking us to go there). The island went into frequent blackouts, apparently in order not to be visible from the Turkish side, as the army was taking key positions across the island.
Growing up on an island like Lesbos, there is one thing you learn, and that is how to live with a bad neighbor. You know the kind of neighbor that makes too much noise, does not like to work with you on almost anything, tries to trick you and, most importantly, does not respect you. That’s Turkey for people in Lesbos and, before you all rush to accuse me of being biased, let me just say: of course, I am biased. I am Greek and we are taught early on to be biased. But, this is not about the country or the Turkish people or even about history. It is about a continuous political aggression and the constant challenge of having to defend your own identity. It is frankly exhausting.
Greece’s relationship with Turkey goes through waves. Over the years, the two countries have largely learned to tolerate one another and, overall, be courteous. In times of natural disasters, humanity also prevails. When the big earthquake struck Turkey in 1999, Greece immediately sent helicopters and firefighters to help. (I remember that earthquake distinctly; it was August and it was so strong that we also felt it in Lesbos). But, the truth is that when it comes to politics, Greece and Turkey cannot see eye to eye and most likely they never will.
During Greece’s financial crisis, much of its foreign policy took a backseat. On reflection, I am sure everyone would agree it was a mistake, because it allowed Turkey to elevate itself as a more reliable partner in the region while making Greece look weak. The timing of the refugee crisis and Turkey’s role in it created a web of dependencies on Turkey and gave President Erdogan political leverage and an excuse to often act like a sultan (in fact, he has maintained much of this behavior to this day). At the time, the European Union, NATO and even the US were paying more attention to Turkey than to Greece.
In the last couple of years, however, Greece has become stronger. The new government has made foreign policy a priority and has managed to place the country at the center of geopolitical relevance. Greece has struck strategic geopolitical deals with Egypt, Libya and Israel, it has reached out to the Middle East and it has successfully convinced its international partners that it can ensure stability in the east Mediterranean. In 2020, for instance, Greece signed a deal with Cyprus to build the deepest and longest undersea pipeline, one that would carry gas from new offshore deposits in the southeastern Mediterranean (Israel and Egypt) to continental Europe. The EU and NATO can no longer afford a messy east Mediterranean.
Stability in the East Mediterranean is a grave geopolitical issue. Greece is claiming back its rights. (I am not saying that – international law does. More on that below). Turkey, on the other hand, is suffering from one of the worst economic landslides in its history, with reports that, in 2021 alone, the Turkish lira lost 44% of its value. The Turkish people are experiencing unprecedented inflation rates and foreign investment has stopped. At some point, the situation with the Turkish lira was so bad that companies like Apple stopped selling their products. (Apple has since resumed sales in Turkey). As people were getting angrier, Erdogan did what any sultan does – he looked for an enemy, in the hope that it could unite the country against the enemy instead of against him. In Turkey’s case, you just had to look down the road and there was your enemy.
Part of President Erdogan’s strategy has been to contest Greece’s sovereignty over certain islands in the Aegean Sea. His message sounds unsurprisingly revisionist: unless Greece demilitarizes certain islands, a question of sovereignty arises (this goes all the way back to Turkey’s wrong interpretation of the 1923 Treaty of Lausanne and the 1947 Treaty of Paris). The response from the Greek government was strong.
Again, none of this is new. But, it is still exhausting and annoying.
You must be wondering where I am going with all this. I am trying to establish the context for why the recent registration of the trademark “TurkAegean” has hit a nerve in Greece and with me personally.
A quick background on the trademark.
Turkey’s 2022 tourism campaign focuses on its west coast, boasting places like Izmir, Ephesus and Ayvalik, all with rich history and deep historical references for Greece. According to the campaign, the “Aegean Region of Türkiye offers you beautiful landscapes, dazzling coastlines, immaculate beaches, pine woods and olive groves; perfect for nature lovers, photographers, history buffs and adrenaline junkies. Many popular holiday villages and fishing harbors are scattered up and down the coast”. While the United States rejected the application on procedural grounds, 84 other countries, including the European Union, have allowed the registration of “TurkAegean” to proceed.
By the time the registration became public, the Greek government was caught off guard; it committed to taking action, before having to apologize. In Brussels, Greek representatives and public servants have also been vocal. Eva Kaili, Vice President of the European Parliament, questioned the legality of the trademark under EU and international law “as it is deliberately misleading, presenting the Aegean Sea, islands and coasts as Turkish”. Similarly, European Commission Vice President Margaritis Schinas sent a letter to Internal Market Commissioner Thierry Breton, requesting a review of the decision to approve Turkey’s application.
The fact, however, is that under current trademark law and practice, it will be difficult to challenge the registration. One could advertise “TurkAegean” services relating to Izmir or Ephesus, for instance, and there is nothing prohibiting this. The only question that concerns trademark law here is whether the mark has become distinctive of that party's goods in commerce. Unlike “Aegean”, the term “TurkAegean” is not a geographical term, let alone a descriptive one. In a similar vein, Greece could, for instance, apply to register “Cretalibyan” and, most probably, it would get it. But, it does not, because it doesn’t want to and because there is also this little thing called being a good neighbor.
Here's the catch, however: the registration of “TurkAegean” in fact boxes Turkey out of insisting that the term is generic for a sea. Because of its trademark registration, Turkey can never claim that the term is a geographical indication for the Aegean Sea; there can never be a “TurkAegean” sea. The trademark prevents that. Essentially, any geopolitical claims Ankara may have hoped to advance by doing this will not succeed. Moreover, this trademark may inadvertently become an asset for Greece as it presses its own claim to extend the breadth of its territorial sea to 12 nm. Unlike Turkey, Greece is a signatory of the United Nations Convention on the Law of the Sea (UNCLOS), which establishes the rule. (The rule is also established through international customary law). Lately, Greece has been reclaiming its 12 nm and, if Turkey intended to use “TurkAegean” as an argument, it would not stand.
It is important for Greece to be more aggressive in the future. For one thing, it should become more proactive in protecting the names that are tied to its history and language, particularly when they end up misrepresenting Greece.
As I am sipping my coffee, I type the name “aegeansea.com” into my browser. After a long wait I get an error message. I conduct a WHOIS search and I see that the name is registered (thankfully, the registrant does not appear to be a Turkish authority), that the registrar is an entity in China (that makes me a bit nervous) and that it expires in October 2023 (this makes me hopeful). I can only hope now that the Greek government notices.
Yes, it is personal!
I would like to thank the European Internet Forum for putting this event together and for inviting me to share my thoughts on the need for digital rights and principles.
During her opening remarks at the 2021 Digital Assembly – “Leading the Digital Decade” – President von der Leyen said, among other things: “We believe in a human-centered digital transformation”.
Indeed, there is something profoundly human about the internet. The ability of independent networks to set their own rules while adhering to minimum standards that ensure interoperation parallels the democratic attributes of human autonomy, expression and participation.
This basic architectural design, however, is often taken for granted. Current regulatory attempts from around the world, including in Europe, often seem to assume that the internet can’t break – that it just works, as if by magic.
The same way democratic societies have rules, so does the internet. And, this becomes particularly important as new ‘internet models’ emerge that seek to displace, replace or generally undermine the internet. It is for this reason that the timing of the “Declaration on Digital Rights and Principles for the Digital Decade” is important.
It is a challenging time for the internet. This technology, which has transformative qualities, is currently being used as a weapon to threaten democracies and undermine users’ rights.
For the past few years, Europe has been a global leader insisting on the need for regulation and clear rules that will allow the exercise of users’ rights while at the same time encourage innovation and economic growth. Striking this balance is not easy, given that a lot of the business models currently in place are designed around “surveillance capitalism” and they are the outcome of strong network effects and economies of scale.
In this regard, the effort made by the European Commission and Europe’s willingness to commit to digital rights and principles is certainly commendable. The Declaration touches on almost all the major points in need of immediate attention: it places people at the center of the digital transformation; it highlights the need for connectivity and digital skills; and it recognizes the need for users to be able to exercise their human rights within a secure internet environment.
None of these aspirational goals can be fulfilled, however, without an open, decentralized and global internet. You see, there are various ways of networking – China’s way of networking is one of them. But, the internet way is rather unique.
The internet is premised on a set of fundamental properties – engineers awkwardly call them invariants because they remain constant over time. The internet can evolve but these properties stay the same. This is what makes the internet so powerful. No matter how it changes, its properties remain unchanged. But, this internet is not a given nor can it withstand coordinated attacks.
The internet is not a monolith, nor should it be treated as one. Its constant evolution, growth and innovation are a direct consequence of its original design. The internet’s properties are a mix of aspirational goals and pragmatic design choices, and they are significant because without them none of the goals listed in the Declaration can be materialized.
Let’s run through these properties briefly:
These properties, aside from facilitating inter-networking, also facilitate users’ freedom of choice, participation and empowerment, as well as a fair and secure environment. They ensured a resilient digital environment during the COVID pandemic. Unlike humanity, the internet was ready for this pandemic.
The flipside of not having these properties is an internet based on top-down control, where innovation is directed by a central authority, which is also responsible for determining what users can and cannot do. This is the China model and, currently, at the international level, there is an ongoing debate as to which model will prevail.
Because of this, Europe has a unique opportunity. As it sets the rules of the road, it must respect these foundational properties.
Europe’s vision for the internet is one based on democratic ideals. An internet that departs from its original architectural principles cannot be a democratic one. So, my recommendation today would be to have these design principles enshrined in the declaration document and any other piece of primary or secondary legislation that emerges in the future.
It is simply the only way to ensure that people are at the center of the internet.
The UK’s approach on encryption is a SOPA-like moment, worthy of a SOPA-like fight!
A child and an adult are both inside a glass box. The adult is staring deliberately at the child as the glass fades to black.
Rather discomforting, right?
This is the image the UK government wants engraved in the minds of every British citizen. According to an article published by Rolling Stone magazine, the government plans to launch a campaign aiming to “mobilize public opinion against Facebook’s decision to encrypt its Messenger app”. And, to make sure it captures the public’s opinion, the government is deploying tactics of sensationalism, exaggeration and emotion.
This scaremongering is not a first for the UK government, which has been fighting against encryption for quite some time. Back in 2015, former Prime Minister David Cameron pledged to ban online messaging applications that offer end-to-end encryption. Although the ban never happened, the UK has since been on a steady course to ensure that the government and law enforcement have access to encrypted communications. And the Online Safety Bill, a piece of legislation aiming to address content moderation practices and platform responsibility, appears to be encryption’s death sentence.
The UK government’s argument is that encryption is an obstacle to ensuring the safety of children. Law enforcement agencies and charities have rallied behind this argument, suggesting that encryption hides millions of reports of child abuse and that end-to-end encrypted communication services are the main vessels for child grooming, child exploitation, sex trafficking and other child-related crimes. These are staggering claims, and one would hope that they are supported by strong, hard evidence; unfortunately, this is not the case. We all suspect that, at some level, encrypted communication services are used for illegal acts; but so is every technology. It is in the nature of humans to use any technology for good and bad. And the worrying part of the UK’s approach is that it appears disinterested or unwilling to take into consideration the good things that encryption brings.
Privacy, security, trust – these are things that encryption achieves effortlessly. Encryption allows an LGBTQI kid to communicate without fear of bullying or, in many cases, persecution; it ensures that a victim of domestic abuse is able to reach out for support and seek help; it helps activists and whistleblowers come forward with information that holds governments and businesses accountable, which is the basis of every democracy; it helps all of us exercise our freedoms and rights without the fear of someone snooping around. Of course, the safety of children is very important, but so is the safety of everyone else. And, without evidence to support the claim that the ‘problem’ will be fixed by simply banning encryption, is it really worth it?
The UK government insists it is, and for this purpose it has hired M&C Saatchi, an agency involved in a controversial campaign during Brexit, to carry out its campaign. According to reports, the campaign will cost UK taxpayers half a million pounds and, according to Jim Killock, Executive Director of the Open Rights Group, it is a “distraction tactic” seeking to manipulate British public opinion.
However, the truth about encryption is unequivocal: it is a necessary foundation for the internet and for societies. In many ways, encryption is the technical foundation for trust on the Internet – it promotes freedom of expression, privacy, commerce and user trust, while helping to protect data from bad actors. Regulating encryption in order to hinder criminals from communicating confidentially runs the significant risk of making it impossible for law-abiding citizens to protect their data. The main objective of security is to foster confidence in the internet and ensure its economic growth.
In this regard, it is a valid question what such a ban would mean for the UK’s economic growth and innovation. In Australia, the Telecommunications and Other Legislation Amendment (Assistance and Access) Act (TOLA) 2018, which mandated that tech companies break encrypted traffic so law enforcement could get a peek at the online communications of Australians, is said to have resulted “in significant economic harm for the Australian economy and produce negative spillovers that will amplify that harm globally”. There is really no reason why things would be different for the UK.
It is important to note that any technique used to mandate that a communications provider undermine encryption or provide false trust arrangements introduces a systemic weakness, which then becomes difficult to rectify. The fact of the matter is that once trust in the Internet is broken, it becomes very difficult to restore. And trust is what ultimately drives use, consumption and creativity on the Internet.
Ten years ago this week, the United States Congress proposed the Stop Online Piracy Act (SOPA), which drew a great deal of criticism for its impact on the internet and human rights. Stakeholders from around the world – civil society, the technical community, businesses and academia – all joined forces against a piece of legislation that would have undermined our trust in the internet and our ability to express ourselves without fear. What the UK proposes to do with encryption is no different; it will change the relationship users have with the internet for the worse.
The UK may want to keep children safe, but by banning encryption it will end up making everyone, including children, unsafe and vulnerable, while turning the UK into one of the most inhospitable internet and innovation hubs in the world. It is time for all of us to join forces and fight for what is right: our right to a secure and private internet experience.
This week, the Internet and its governance experienced a significant accountability failure. This failure had two faces: the first concerned the Australian government’s choice to pressure tech companies into striking a deal with traditional publishers, led by NewsCorp. The second was Facebook’s drastic move to block news content, essentially preventing its users from seeing and sharing news on its platform.
Let’s take things from the beginning.
Earlier this year, the Australian government drew up legislation “to level the playing field” on profits between technology platforms and traditional publishers. The Mandatory Code of Conduct would compel technology platforms with significant market power to enter into negotiations with registered publishers that earn over $150,000 AUD a year and comply with the Australian news and media codes of ethics. On Wednesday, the law passed the lower house of parliament and, with cross-party support, it is expected to pass the Senate next week.
Google complied with the law. Facebook decided not to.
This is not a first. Last year, the French government, in a similar move, instructed Google to negotiate in “good faith” with publishers, which resulted in a reported $76 million payout deal. The deal was the outcome of France's attempt to enforce Article 11, often referred to as the “link tax”, of Europe's controversial copyright directive.
There are a lot of issues with the “link tax”. Public Knowledge's Harold Feld provides a very good account of what these are, here. Other sources have also pointed to concerns about its uncertain effectiveness.
But, here, I want to talk a bit about the Internet. Not a lot of people talk about what all this is doing to the Internet.
In 1997, Tim Berners-Lee wrote:
“Normal hypertext links do not of themselves imply that the document linked to is part of, is endorsed by, or endorses, or has related ownership or distribution terms as the document linked from. However, embedding material by reference (sometimes called an embedding form of hypertext link) causes the embedded material to become a part of the embedding document.”
So, the hypertext link is nothing more than a reference.
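To make that point concrete, here is a minimal sketch (the HTML snippet and URL are hypothetical) showing that, at the markup level, an anchor element carries nothing beyond a target address and display text. There is no field for endorsement, authorship or ownership; any such meaning lives in the surrounding prose, exactly as Berners-Lee describes.

```python
# A hyperlink, at the markup level, is just a reference to a URL.
# Extracting the anchors from a page yields only target addresses;
# the link element itself carries no endorsement or authorship data.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # An <a> element's only machine-readable payload is its href.
        if tag == "a":
            self.links.append(dict(attrs).get("href"))

snippet = '<p>See <a href="https://example.com/fred">Fred\'s pages</a>, which are way cool.</p>'
parser = LinkExtractor()
parser.feed(snippet)
print(parser.links)
```

The endorsement (“which are way cool”) sits in the prose around the link; the link itself is only a URL string.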
Further down, under the heading “Meaning in content”, Berners-Lee continues:
“So the existence of the link itself does not carry meaning. Of course the contents of the linking document can carry meaning, and often does. So, if one writes "See Fred's web pages (link) which are way cool" that is clearly some kind of endorsement. If one writes "We go into this in more detail on our sales brochure (link)" there is an implication of common authorship. If one writes "Fred's message (link) was written out of malice and is a downright lie" one is denigrating (possibly libellously) the linked document. So the content of hypertext documents carry meaning often about the linked document, and one should be responsible about this. In fact, clarifying the relative status of the linked document is often helpful to the reader.”
Everything that is wrong with the “link tax” is captured in that passage. The tax treats the hyperlink as something that carries a specific meaning. But when hyperlinks are shared through platforms and search engines, they neither “endorse” the linked content nor “claim authorship” of it.
The “existence of the link itself does not carry meaning”.
The “link tax” fundamentally changes the meaning and scope of hyperlinks, ascribing to them a meaning they were never meant to have.
This debate is not just about Facebook, the Australian government or the publishers. It is about the web.
And, the implications of the “link tax” go further down the architecture stack.
Here’s what the Internet Society’s Andrew Sullivan (my boss) said about the “link tax” in 2018:
“The link tax, as described, well, to the extent it's ever been described, is actually impossible. It's technically impossible because nobody will ever deploy it. The only way that it will ever get deployed is if everybody agreed that I really want to be taxed.”
That does not seem plausible.
“And since it's unlikely that people are going to sign up to be taxed, then they're never going to click on a link that causes them to be taxed. All of the clients will simply not implement the necessary technical requirements to, you know, to cause the tax to take effect. And there are only two possibilities at that point. Either the government authority can say, well, you're not allowed to use a backward compatible mechanism, in which case the internet is over. Or, you are allowed to use a backward compatible linking mechanism, in which case only the backward compatible mechanism is going to be deployed by clients. Those are the only two possibilities and I've yet to have anybody explain to me how we get out of that. On the internet, if you actually want people to deploy things, you have to give them a reason to deploy it.”
So, this whole debate is not only about Facebook or governmental policy. It is about the Internet’s goal to be trustworthy, open, secure and global. Properties like integrity, reliability, availability and accessibility matter for the Internet. They should matter in any policy. And, in the case of Australia, they evidently don’t.
The Australia and France cases are not going to kill the Internet. They are deep cuts to its architecture, but they are not fatal. If the Internet dies, it will die of a thousand cuts. The BBC has reported that Scott Morrison, Australia’s Prime Minister, has already spoken with India’s Narendra Modi and is seeking international support. Here’s your thousand cuts: the possibility of the “link tax” spreading as international policy.
So, to sum up: Australia’s Code is a terrible policy. Facebook’s decision was out of self-interest. The Internet is the casualty.