It is pretty simple: if we want an open Internet, we need to celebrate and support its decentralization.
When I was young, my friends and I formed a secret group called “The Five Hounds”, inspired by Enid Blyton’s book series The Secret Seven. The group’s main purpose was to identify and solve school mysteries – who had written bad words on the board, where the attendance sheet had gone or who had scratched the Math teacher’s car. During school hours, we would gather evidence and then meet after school to exchange notes. Our meeting places included abandoned buildings, buildings under construction and the playground. Often, we would exchange notes even during class, which would get us into much trouble. We were all consumed by our mission. We were all equally committed.
Of course, we would end up solving none of the mysteries we had taken on. Actually, we were pretty awful at finding clues. But, what was cool about all this was the group and how it worked. The group had no assigned leader and had no hierarchical structure. We were all responsible for our own actions and we all had the same liberty to make the choices we made. We operated on the basis of openness and were driven by what was good for the group. Most importantly, we were all accountable to each other for any success or failure. And, trust me, there were a lot of failures.
This was my first encounter with the idea of decentralization. (Of course, back then, I could not even spell the word). This idea of the dispensation of power and responsibility and the ability for everyone to be their own self within the collective. This feeling that we were all equals and there was no one that we had to ask permission from. For me and my friends, decentralization meant the willingness of the group to be open to new ideas, suggestions and direction.
Decentralization has been applied to different disciplines from political science to group dynamics and project management. Its meaning varies. But, whatever meaning one attaches to it, it should not be one that views decentralization as a free-ride system of anarchy or lack of responsibility.
This is the major flaw in Niall Ferguson’s essay “In Praise of Hierarchy” for the Wall Street Journal. In his piece, he questions the idea of decentralization as a future course for the Internet, blaming it for the Internet’s current challenges – fake news, propaganda, online extremism or the rise of big tech. He points the finger at the Internet’s decentralized nature, asking whether “we perhaps overestimate what can be achieved by ungoverned networks—and underestimate the perils of a world without any legitimate hierarchical structure”. But, he completely misses the point of what decentralization means in the context of the Internet.
Let’s start with what it doesn’t mean. It does not mean the lack of governance. And, it certainly does not mean that those who are responsible should not be held accountable. Internet decentralization is not meant to protect the powerful at the expense of the weaker. It is not about creating permanent favorites.
On the contrary, it is about the invariants of the Internet – the characteristics that make the Internet what it is. It is ultimately about openness. It is about interoperability, accessibility, collaboration and global reach. All these characteristics exist because of decentralization and they are very well elucidated by Leslie Daigle in her piece “On the Nature of the Internet”. They are important because, as Daigle argues, “by understanding the Internet — primarily in terms of its key properties for success, which have been unchanged since its inception — policy makers will be empowered to make thoughtful choices in response to the pressures outlined here, as well as new matters arising.”
So, decentralization enables the existence of these unique features, which, in turn, ensure an open and free Internet. In this context, decentralization is not an end in and of itself – it is a means towards reaching that end. And, that end, is openness. It is pretty simple: if we want an open Internet, we need to celebrate and support its decentralization.
For the Internet, hierarchy will not work. Imagine asking for permission every time someone with an idea wants to connect to the network. Imagine governments – the principals of hierarchy – being the only ones making the decisions, based purely on their national interests. Or, imagine a system that only supports the strong and creates permanent favorites without accountability. All of these are things hierarchy feeds on.
This does not mean that we should give decentralization absolution. Because, its success is only ensured if there is accountability and transparency. When we talk about anyone being able to operate within a decentralized structure, it should not mean that anyone can do what they want without being accountable or transparent. We are all equally responsible for the decisions we take and for the course of our actions. We should make sure that no one uses the unique features of the Internet only for their benefit and not for the benefit of all, including for the Internet itself. When we feel that some lack accountability we should call them out. We should certainly get better at that. But, whatever we do, we should never prefer hierarchs to visionaries.
I don’t know about you, but I am angry. I am angry with the state of the world and our incapacity to do something about it. I am angrier because, in all this, I thought that the Internet would be the place where we would see collective action at its best. But, that’s not going to happen. At least, any time soon.
Is it time to admit that the Internet has turned toxic?
No. But, it is time to ask ourselves the question: is the Internet today the one we subscribed to originally? (This place of openness, freedom, innovation and creativity – a value proposition of democracy)
When I heard earlier this month that, during the US Presidential elections, as many as 126 million people were lied to, misinformed and subjected to propaganda, I got angry. Is this the Internet I want to be part of? Of course not! And, I am pretty sure it’s not the Internet that these 126 million people want to be part of. But, I also realized that we are equally responsible for the current state of how we understand the Internet.
Let’s start with the fact that we chose convenience over human values. Over the past years, our ability to debate, and even to hold an opinion, seems to be slipping out of our hands. And, it is getting progressively worse. Social media platforms and Internet intermediaries make daily decisions on our behalf about what we should agree or disagree with. The proliferation of propaganda has chipped away at our right to question the facts. Companies whose focus is infrastructure, not content, remove controversial domain names and prevent us from engaging in an intellectual argument about extremism. In a split second, private companies can silence the conversation. And, we accept this.
There is nothing new or shocking about this. According to Jürgen Habermas, the role social media platforms play in democracies can be ambivalent. Notwithstanding their ability to destabilize authoritarian regimes, they can also wear away the public sphere of democracies. And, there are examples of social media platforms undemocratically silencing different conversations.
Although the idea of private companies determining issues like speech had always concerned many, the user sentiment was that they represented basic democratic values. It was, after all, Facebook that was celebrated for its contribution during the Arab Spring; it was Twitter that stood up to Turkey’s pressures on censorship a few years ago; and, it was Google that ceased censoring its Chinese search engine, at the cost of its exile from China.
All these acts were applauded for how these private actors represented the liberal ideals of democracy; how they advocated for everyone to express themselves and be part of this global conversation. It was fascinating. And, for many years, our faith in them seemed to be having great results. We felt safe that these companies were protecting and standing up for our beliefs. We idolized them and, because of that, we also became complacent and stopped paying attention.
It is not that big Internet companies lied to us or suddenly stopped supporting liberal ideals. In the end, profit took precedence. Speech was relegated to the second priority lane. Information became diluted.
Normally, in such cases, the government would intervene to ensure that fundamental rights are appropriately and freely applied. But, just like us, governments are also guilty of becoming complacent. For years and on many different issues, governments have been outsourcing a lot of decisions to the private sector. And, that is not good enough. That is fundamentally not good enough.
So, for many years, private Internet companies ran loose. They grew both in size and in the services they offered to users. And, as they did, they became more powerful and more untouchable.
But, then something changed. A lot of things changed, actually. The world was exposed to a seismic geopolitical shift, where old power structures either collapsed or got refocused. The news cycle shifted at breakneck speed from Brexit to the US Presidential elections, the rise of far-right groups across the world, international security and secessionist trends. In all these trends, the Internet was front and center. It may not have caused any of them, but it had become the place where each and every one of them got exaggerated. Suddenly, our heroes were the villains.
Are then governments the good guys now?
Yes and no. We should all be very concerned with what happened last year in the United States. Governments should also be very concerned and they seem willing to tackle this. But, so far, they appear to go for patches instead of a real fix. And, they go it alone. Germany recently passed legislation that demands the removal of hate speech within 24 hours. France, Italy and Germany are eager to have social media platforms remove objectionable content within 2 hours of it appearing. And, in the US, questions have started emerging over whether social media ads should be regulated like TV commercials. In the meantime, private Internet companies are looking for ways to respond to this regulatory pressure, either by hiring more staff to monitor violent content or by collaborating to find ways to fight extremism.
Neither of these approaches can work. Government regulation will not fix the problem – it will exacerbate it. Regulation will stop innovation and will create an environment where private Internet companies will over-censor in their attempt to avoid hefty fines or bad reputation. On the other hand, continuing to allow social media platforms to create their own rules uninhibited only gives them more power to ‘behave’ as independent state-actors.
What’s the way forward? Transparency is our only currency. We should go back to the whole idea of figuring out how to hold private Internet companies accountable. Governments should join in this effort, not through regulation but through a looser normative structure.
In the wake of the financial crisis, governments started demanding greater corporate accountability. This led to the Extractive Industries Transparency Initiative (EITI), a global standard for good governance of oil, gas and mineral sources, run by a multistakeholder Board of governments, extractive companies, civil society, financial institutions and international organizations.
Are we experiencing a democracy crisis analogous to the financial one? Most probably not yet. But, we are headed there, and the Internet is accelerating the process. Our only currency is transparency.
G20 Avoids Encryption; Talks Rule of law; and, Extends a Hand of Collaboration to Internet Technology Firms
This past weekend, the leaders of the G20 group concluded their discussions in Hamburg in what has been declared "a solid success". This success, however, relates more to the fact that the G20 managed to publish a joint communique than to its substance. Although there was some strong indication of the need to move forward on key security issues of countering terrorism, the threat posed by North Korea's nuclear program and commitments on rising economic growth, financial regulation and architecture, tax cooperation and skills literacy, it was the lack of credible action on climate change, migration and liberalization of trade that will mark this year's summit.
But, what about the Internet provisions?
The Internet provisions do not appear to go as far as one would have expected and the G20 group has, instead, opted for some vague and opaque language, indicating that there is more work to be done between now and next year's G20 meeting in Argentina.
The G20 communique recognizes "that information and communication technology (ICT) plays a crucial role in modernizing and increasing efficiency in public administration." It talks about the need to "bridge the digital divide" and to "ensure that all our citizens are digitally connected by 2025" and "promote digital literacy and digital skills". In line with the G7 action plan, the G20 leaders committed "to foster favourable conditions for the development of the digital economy and [...] to promote effective cooperation of all stakeholders and encourage the development and use of market- and industry-led international standards for digitised production, products and services that are based on the principles of openness, transparency and consensus and standards should not act as barriers to trade, competition or innovation."
Interestingly, there was also commitment to support the work of the WTO on e-commerce, which will be interesting to observe considering the strong objections of this work coming from the African group within the WTO.
As expected, terrorism occupied much of the G20 discussions. It is only in that statement that we see the term "Internet" being used: "We will work with the private sector, in particular communication service providers and administrators of relevant applications, to fight exploitation of the internet and social media for terrorist purposes such as propaganda, funding and planning of terrorist acts, inciting terrorism, radicalizing and recruiting to commit acts of terrorism, while fully respecting human rights". This is not a surprise.
There is increased governmental awareness of, and concern about, the use of the Internet and social platforms to radicalize and recruit terrorists. France and the United Kingdom have already gone on the record as wishing to create legal requirements for technology companies to aid the fight against terrorism online. Germany is also toying with the idea of imposing hefty fines on social networking sites that fail to remove hate speech.
Despite some early indications coming from individual countries about the need to address the challenge of encryption, both the G20 communique and the Statement on Terrorism, avoid use of the word. This is good as the debate can continue to be addressed through more inclusive frameworks with the collaboration of all interested parties -- technologists, security experts, governments, law enforcement and the users. However, there is some language that could be interpreted loosely regarding "lawful and non-arbitrary access to available information".
The Rule of Law
The G20 Hamburg Statement on Countering Terrorism ends with a strong message: "We affirm that the rule of law applies online as well as offline". This means two things. The obvious first one is that existing traditional regulation will apply online. This could be problematic. If the past twenty-five years are any indication, applying regulation that was created before the Internet to issues as they emerge on the Internet can be counterproductive. It can affect innovation and economic growth and scale back the Internet's development.
The second is that the judiciary will have a stronger role to play. Courts will need to interpret how the rule of law can apply both offline and online. This is good. If there is still one institutional arrangement that is independent enough to resist the political forces of populism and protectionism and to uphold human rights and civil liberties, it is the judiciary. Courts will have a bigger role to play and, thus, their potential impact on the future growth of the Internet will be greater.
All in all, the G20 communique is a solid declaration towards a more collaborative and sustainable approach to address terrorism and the digital economy. It does not include anything that should automatically ring any alarm bells, but it should be seen by everyone as an invitation to collaborate towards finding solutions to some of the Internet's 'wicked problems'.
The Internet is full of ‘wicked problems’ and the latest cyberattack – the WannaCry ransomware – is no different. WannaCry has so far infected more than 200,000 computers in 150 countries, with the software demanding ransom payments in Bitcoin in 28 languages.
In a sense, WannaCry could be characterized as a ‘wicked problem’: so far, we have incomplete knowledge about the exact parameters of the attack and there is great uncertainty regarding the number of people actually involved. At the same time, Microsoft and the institutions hit by the attack have all suffered a large economic burden, while the interconnected nature of the attack has created significant problems and has had a spillover effect on the operation of various services, including, most notably, hospitals, transportation and telecommunication providers. Notwithstanding these features though, this specific cyberattack is not your typical ‘wicked problem’. Although it will be hard to fully solve, we will still be able to solve it in the end. This makes it less of a “wicked problem” and more of just another security problem. What is important though is what this attack tells us: the real “wicked problem” of the Internet is currently security (in a more general sense).
Discussions about Internet security have long been rampant, and even more so lately. Most such discussions, however, focus more on the policy agendas of nation states than on the concept of Internet security itself. Often, this takes the form of giving high priority to, and equating security with, issues like human rights, economics, social injustice and the threat of using the Internet to carry out military threats. Such thinking is usually buttressed with a combination of normative arguments about which values of which people should be protected, and empirical arguments as to the nature and magnitude of threats to those values.
In our effort to understand – and, thus, attempt to resolve – security questions in the Internet, we face a significant limitation: we may know we have an overall security problem but we continue to fail to fully understand its scope, parameters and dimensions. And, with no definitive problem, getting a definitive solution becomes somewhat an impossible task.
To this end, in order to understand the Internet security conundrum, we need to understand the complexity of approaching ‘wicked problems’. Academic Tim Curtis says that a “wicked problem” is one in which “the various stakeholders can barely agree on what the definition of the problem should be, let alone what the solution is”. Sound familiar? In addressing security questions on the Internet, the different stakeholders normally disagree over the exact problem: governments tend to approach security as a national policy issue; businesses see it as purely economic; for the technical community it is usually a question about the reliability and resiliency of the network; and, civil society sees the whole issue through human rights considerations.
This inability of affected and interested parties to agree feeds into the mistaken perception that the security issue is somehow broken. And, this creates the danger of security, as a “wicked problem”, existing in perpetuity. As Curtis accurately argues: “Problems are intrinsically wicked or messy, and it is very dangerous for them to be treated as if they were ‘tame’ or ‘benign’. Real world problems have no definitive formulation; no point at which it is definitely solved; solutions are not true or false; there is no test for a solution; every solution contributes to a further social problem; there are no well-defined set of solutions; wicked problems are unique; they are symptomatic of other problems; they do not have simple causes; and have numerous possible explanations which in turn frame different policy responses; and, in particular, the social enterprise is not allowed to fail in their attempts to solve wicked problems.”
Perhaps you are beginning to see what I mean. This disagreement over the problem itself is the crux of what is preventing us from finding solutions to the security challenges – whether they relate to ransomware attacks, attacks on national security or attacks directly against the network.
So, we need to find a middle ground that will allow solutions for such ‘wicked problems’ to emerge. And, this middle ground is collaboration.
Lately, I have been thinking quite a lot about collaboration – the value it adds, the importance it carries and its ability to solve ‘wicked problems’. Together with Leslie Daigle and Phil Roberts, I have deliberated on how collaboration can contribute towards providing a robust framework where solutions can emerge and answers can be found. So, we came up with the following features that can make collaboration work.
Whether this understanding of collaboration can solve all security problems, I do not know. What I know though is that it is a pretty good starting point. In fact, it is the only starting point. Governments need to disclose system security vulnerabilities as they discover them, businesses and the technical community must race to address them and users must demand that this is the case. This will only happen though if different actors talk to each other.
Note: Extracts taken from: Tim Curtis's essay The challenge and risks of innovation in social enterprises in Robert Gunn and Christopher Durkin's book Social Entrepreneurship: A skills approach.
Photo: Flickr, "Opportunity Knocks" by The Shifted Librarian
On Friday, September 30, 2016 - at the stroke of midnight - the IANA functions contract between the US government and ICANN ended quietly. This marked the end of an era, full of political struggles concerning the role of the US government over the Domain Name System (DNS). It also marked the beginning of a new one, full of opportunities and hope.
The termination of oversight over the IANA functions is more symbolic than anything. On October 1st, the US government did not turn off any Internet switch nor did it pass the keys of the Internet to ICANN. The US government was never holding such power to begin with. In a nutshell, what took place with the decision of the NTIA to allow the IANA contract to expire was the validation of the multistakeholder model of Internet governance.
Historically, the multistakeholder model – the collaborative approach to dealing with Internet (policy and technical) issues – has not had an easy ride. Its efficiency and legitimacy to provide tangible and implementable policy recommendations have been questioned and it has often been characterized as an unworkable approach. Ever since it emerged during the second phase of the World Summit on the Information Society (WSIS) in Tunis in 2005, multistakeholder governance has been criticized for its lack of focus and for failing to identify the roles and responsibilities of its participating actors, especially governments. But, multistakeholderism has persisted and it seems that it has now scored its first big win.
But, let’s be fair, multistakeholderism is a very awkward term. It is an invention that not everyone can easily relate to; it is so open-ended that it has been taken to mean a bunch of different things to a bunch of different people and institutions. But, if we forget the term for a minute, this invention is what has allowed non-state actors to be active participants in discussions that directly affect them. It has allowed collaboration to be front and center in preserving the Internet, addressing its challenges and finally being able to ensure its constant growth. Multistakeholderism is about collaboration and this is how we should view it.
It is this collaborative model that has allowed us to address the various challenges over the past years, including the IANA transition. And, it is this collaboration that must carry us in the future. The successful transition of the IANA functions provides us with a solid framework to do so.
So, where should we see its impact?
The immediate impact should be seen in the debate over "enhanced cooperation", which was originally part of a compromise on the future of the Internet at the WSIS in 2005. At the time, agreement could not be reached over the governance of critical Internet resources, including the DNS. Enhanced cooperation was seen as the focal point where stakeholders would collaborate towards a more participatory governance structure. And, for years now, the Commission on Science and Technology for Development (CSTD) has been trapped in a never-ending argument that consistently seemed to circle back to the unresolved issue of the US government’s role over the DNS. With IANA out of the way, this argument is no longer persistent. This provides the CSTD and its participating actors with a unique opportunity to advance their thinking. The CSTD working group has the opportunity to find a new identity and make a whole new contribution to Internet governance. Rejuvenating the discussions at the CSTD level could help rejuvenate the discussions also in other fora, UN and non-UN.
More fundamentally, however, the impact of the IANA experience should be visible in the years to come, as we seek to find solutions to complex questions. One of the key takeaways of this two-year-plus process is the tools that we now have at our disposal -- tools that were always there but should now be visible to all of us.
These are some of the obvious impressions the IANA transition process has to offer; of course, there are more. In moving forward, we should celebrate this important milestone the community has reached. But, we should also seize this opportunity to continue building a governance framework that allows people to come together and collaborate for a more inclusive Internet.
Has Netflix made its first mistake yet by betraying its growing network of loyal customers?
In a statement titled “Evolving Proxy Detection as a Global Service”, Netflix Vice President David Fullagar said that the company wants to prevent its subscribers from using Internet proxies or hiding behind Virtual Private Networks (VPNs) to access content outside their own countries.
“[…] in coming weeks, those using proxies and unblockers will only be able to access the service in the country where they currently are. We are confident this change won’t impact members not using proxies.”
We can only make assumptions as to why Netflix has decided to do this. What is certain though is that, for many, this move will definitely change their perception of Netflix – the perception of a company whose business model not only depended on the Internet, but also respected its ability to offer users a variety of options.
We need to acknowledge of course that Netflix has not had an easy ride. It is somewhat of an open secret that Netflix has faced a big challenge to license, and keep licensed, content for its users. As the company grew, so did the costs associated with licensing content.
In 2011, analysts estimated that Netflix spent $700 million on content licensing, which was expected to go up to $1.2 billion in 2012. This is a lot of money. No wonder Netflix has consistently tried over the past few years to rely less and less on the content of others and has focused on creating its own original shows. Shows like House of Cards and Orange Is the New Black have been hugely successful for the streaming giant and have even been part of the awards glitz and glam, but they are still not enough to keep Netflix in the content competitive race.
And, in August of last year, Netflix’s deal with Epix – the distributor of Hollywood blockbusters like “The Hunger Games” and “Transformers” – ended, which made the company’s share price tumble for a little while.
So, the hard truth is that Netflix needs Hollywood more than Hollywood needs Netflix. But, what Netflix needs even more than Hollywood is the trust of its customers. And, its latest move seems to be turning customers away from the company at a time when Netflix needs them most. As the company prepares for a huge expansion into more than 130 countries, one cannot help but wonder whether the streaming giant will appeal to users in countries with a confined catalogue.
As soon as the news broke that Netflix would ban users from using VPNs, the Twitter-sphere caught fire.
On the Internet, trust is as important as water is to human life. Unless users are able to trust the provider, the services and their delivery, they look for something else. In this sense, trustworthiness and trustability walk hand-in-hand. In a Harvard Business Review article examining “Trust in the Age of Transparency”, the authors make it clear that “[for a company to succeed] beyond trustworthiness, you must achieve “trustability.” It’s a more proactive stance that has you not just keeping up your end of a bargain but ensuring that the bargain is the best one from the customers’ point of view […]”. Denying users innovative ways, like Internet proxies, to access your content is certainly not what is best for Netflix’s customer base.
Netflix appears as if it is turning its back on its users. But, to what end? Netflix could easily use the fact that users opt for VPNs to demonstrate that the current licensing regime is simply not working. It could use this data to show that, in a global place like the Internet, we need to work towards a more efficient, cost-effective and reliable way to license content. Instead, it opted for the ‘easy’ solution.
We will just have to wait and see whether in the end Netflix will lose customers. But, notwithstanding this, Netflix will never be for its users what it was when it started.
We live in a world of information abundance and the proliferation of ideas. Through mobile devices, tablets, laptops and computers we can access and create any sort of data in a ubiquitous way. But, it was not always like that. Before the Internet, information was limited and travelled slowly. Our ancestors depended on channels of information that were often subjected to various policy and regulatory restrictions.
The Internet changed all that. Suddenly everything became possible; everyone had the same opportunities to become a creator or publisher of ideas or a distributor of information. The Internet connected people and their ideas; it has contributed to social empowerment and economic growth. But, have we ever stood long enough to consider what makes the Internet such a special invention? What is it that makes the Internet such a multifaceted tool that never ceases to amaze us with its potential?
In a nutshell - it is “permissionless innovation”.
Vint Cerf, one of the fathers of the Internet who originally coined the term, has argued that permissionless innovation is responsible for the economic benefits of the Internet. Based on open standards, the Internet gives the opportunity to entrepreneurs, creators and inventors to try new business models without asking permission.
But, permissionless innovation is even more than that. It is about experimentation and exploration of the limits of human imagination. It is about allowing people to think, to create, to build, to construct, to structure and to assemble any idea, thought or theory and turn it into the new Google, Facebook, Spotify or Netflix. As Adam Thierer says in his book “Permissionless Innovation: The Continuing Case for Technological Freedom”:
“Permissionless innovation is about the creativity of the human mind to run wild in its inherent curiosity and inventiveness. In a word, permissionless innovation is about freedom.”
Freedom, not anarchy. Leslie Daigle eloquently places the freedom aspect of permissionless innovation into context in her blog post “Permissionless innovation: openness, not anarchy”:
“‘Permissionless innovation’ is not about fomenting disruption outside the bounds of appropriate behaviour; ‘permissionless’ is a guideline for fostering innovation by removing barriers to entry.”
This makes permissionless innovation an inseparable part of the Internet. Standards organizations that have witnessed the benefits of permissionless innovation refer to it as “the ability of others to create new things on top of the existing communications structures. Ultimately, all entities are working toward the same goal and developments by one party can aid in the creation of another.”
Of course, all this freedom should not be seen in isolation from our societal structures. It is freedom that operates within certain boundaries of behavior that test the manifestations of permissionless innovation. These boundaries can be normative or legal. But they kick in after the creator has created and after the inventor has invented in a permission-free environment. And so they should. Permissionless innovation is not a sign of disorder; it indicates structured order.
Imagine a world without Facebook or YouTube. Imagine a world where your creations can be subject to authorization by third parties. Imagine a world where the end result of your product is only part of what you have imagined.
Imagine a world where ultimately certain uses of the Internet are prohibited. In fact, not so long ago, use of the Internet was prohibited. The 1982 MIT handbook for the use of ARPAnet, the precursor of the Internet, instructed students:
“It is considered illegal to use the ARPAnet for anything which is not in direct support of government business… Sending electronic mail over the ARPAnet for commercial profit or political purposes is both anti-social and illegal. By sending such messages, you can offend people, and it is possible to get MIT in serious trouble with the government agencies which manage the ARPAnet.”
Now, instead of “government business” imagine “ANY business”.
Permissionless innovation is key to the Internet’s continued development. We should preserve it and not question it.
The role of the US government in the administration of the Domain Name System (DNS) dates back to 1998. At the time, various academics, myself included, questioned the role the United States government, through its National Telecommunications and Information Administration (NTIA), was exercising over the DNS. Questions regarding ICANN’s independence also dominated much of the discussion at WSIS and, in particular, at the first Internet Governance Forum (IGF). It was all everyone was talking about.
As Internet governance discussions evolved and matured, so did these questions. The Internet community kept asking but, at the same time, it was engaging in more thorough and considered discussions regarding the critical role the United States was playing. That role was consistent, predictable and made the Internet run smoothly. The Internet was secure, stable and resilient. It was becoming accepted that, especially absent any viable alternative at the time, the US government was doing a very good job. But questions persisted.
That is until 14 March 2014, when the US government, reacting to years of discussion, released a statement, declaring its intention “to transition key Internet domain name functions to the global multistakeholder community”. And, with this sentence, the United States government demonstrated to the entire world that, given the right process and the right ingredients, it is willing to terminate its stewardship role over this part of the Internet’s infrastructure.
For many this was long overdue. Under the 1997 Green Paper proposal on the “Improvement of Technical Management of Internet Names and Addresses”, “the US Government would gradually transfer [specific DNS functions] to [ICANN …], with the goal of having [ICANN] carry out operational responsibility by October 1998. Under the Green Paper proposal, the US government would continue to participate in policy oversight until such time as [ICANN] was established and stable, phasing out as soon as possible, but in no event later than September 2000”. Evidently, this deadline was never met. The 1998 MoU between the US government and ICANN was extended and then further extended and then re-extended (under a different name – the Joint Project Agreement or JPA), until it took the form of the Affirmation of Commitments in 2009.
Assistant Secretary of Commerce for Communications and Information Lawrence E. Strickling stated that “the timing is right to start the transition process”, answering the ‘why now’ question and inviting the Internet community to find a workable framework for the evolution of the IANA functions. As parties engage in this process, the US government is requesting that a set of conditions be met. These conditions illustrate a vision of an Internet that continues to thrive as a tool for innovation and creativity, is secure and is as inclusive as possible. They should not be underestimated or taken for granted.
The first concerns multistakeholderism. By trusting the multistakeholder model to create a functioning process that would replace its stewardship role, the US government is demonstrating its integrity and its belief in multistakeholder governance. This is quite remarkable. The various stakeholders now have a window of opportunity to engage in substantive discussions and find ways to work together through a bottom-up, inclusive, transparent and accountable process.
Caution and patience are required; failure by the stakeholders to demonstrate their capacity to build and adhere to a robust process could provide ammunition to the critics of the model and jeopardize years of discussions. The spillover effect would be significant and could extend to the Internet itself. As multistakeholderism becomes more ingrained in the operational aspects of the Internet, it becomes even more important to get this right.
Naturally, this multistakeholder model will have to evolve and continue to demonstrate its flexibility; it will need to involve everyone and especially those materially-affected parties that are directly related to the various aspects of the Internet’s addressing and naming system.
Materially-affected parties become key actors in this process because of their ability to maintain the security, stability and resiliency of the Internet. Their experience in the day-to-day management of protocol parameters, IP numbers and names should be utilized to the fullest possible extent. They understand and they know what a secure and resilient Internet looks like. They are the trusted parties that have ensured a stable Internet for all these years; they have, by and large, been accountable. They, too, will have to go through their own re-evaluation and re-thinking; but they are able to provide answers to emerging questions. As we heard in Singapore, what stable, secure and resilient mean will have to be rethought and clearly explained. What are the tools? Are there pre-existing mechanisms that ensure this? Are there specific principles that need to be applied?
Reaching a common and shared understanding on such questions becomes critical. The first step is for them to be addressed in any forum supporting multistakeholder participation. Singapore was the beginning. As these questions continue to be addressed, new ones, of course, will emerge. Questions of accountability will be key for this to work. But it is too early to come up with a conclusive solution. What is required now is to continue the dialogue through a collaborative process.
The way I see it (and as the NTIA statement instructs), all this work is meant to ensure the openness of the Internet – openness that has contributed to economic growth, has facilitated the exchange of knowledge, has introduced different ways of social interaction, has empowered people, has addressed issues of poverty and illiteracy and has connected billions of us together. The IANA functions play a critical part in what it means to have an open Internet.
Let’s get to work and get this right!
Turkey’s recent move to tighten government control of the Internet should again make us think about such actions and how they impact the Internet. It is the latest in a string of similar decisions by various governmental and non-governmental actors seeking to use the Internet as a tool to acquire a certain degree of control. But I think the belief that anyone is entitled to any form of control over the Internet is quite misguided.
The truth is that the Internet has created a faux sense of entitlement. We see governments increasingly employing measures to address what they consider vital to preserving their sovereign rights. We see something similar with businesses, only they want to preserve market power. But, as has repeatedly been stated, such actions can have an impact on the Internet. Turkey is just the most recent example. Blocking, filtering and other mechanisms have constantly been employed to control speech or to address intellectual property concerns, with the ultimate goal of controlling user behavior. This is precisely what Mr. Erdogan’s government is trying to do with this new Internet law.
The new law will allow authorities – deemed appropriate by the Turkish government – without a court order or any other due process, to block websites under the pretense of protecting user privacy and public order. According to the New York Times: “The new law is a transparent effort to prevent social media and other sites from reporting on a corruption scandal that reportedly involves bid-rigging and money laundering.”
Turkey’s law is driven by a combination of agendas. It is a political effort by Mr. Erdogan to manage corruption scandals that his government cannot seem to shake off. As the New York Times reported, “in one audio recording, leaked last month to SoundCloud, the file-sharing site, Mr. Erdogan is said to be heard talking about easing zoning laws for a construction tycoon in exchange for two villas for his family”. It is also consistent with Turkey’s history as a society that has never managed to become truly democratic. But what makes Turkey an interesting case is that, over the past few years, its industry has been thriving and its young population is realizing the benefits of the Internet. Turkey could seize the opportunity to use the Internet for both its economic and democratic evolution.
We are only starting to witness the effects of such actions. They are driving users to become increasingly savvy and more demanding. Users push for freedom of information. Governments, on the other hand, often consider this freedom ‘unsafe’ and try to manage it.
But safety on the Internet, like security, is a process; it is not a one-off commitment. As Leslie Daigle, discussing security, accurately put it in a blog post: “In my opinion, the big difference the past year's revelations about government surveillance make is a step function change in understanding of credible threats.” For the technical community, the process of creating a more secure Internet involves a constant understanding of the network and an adherence to a set of basic principles – transparency, cooperation, accountability and due process.
For policy makers, the process should involve questions and loyalty to the same set of principles under which the technical community operates. And questions have started being asked. Dutch Member of the European Parliament Marietje Schaake said in a blog post: “These new laws strengthen the grip of the Turkish government on what can and cannot be published online and they restrict access to information. Freedom of speech and press freedom are already under a lot of pressure and Turkey is the largest prison for journalists. Since 2007, many websites have been blocked. The European Commission needs to show that the rule of law and fundamental freedoms are at the centre of EU policy. The Gezi Park protests, last summer, have shown that the Turkish people long for freedom and democracy, we must not leave them standing in the cold.”
For many years, I have observed the Internet adopting many self-regulatory frameworks to address a variety of issues. Indeed, the Internet has benefited from self-regulation as an efficient way to address jurisdictional conflicts, particularly as compared to traditional law making. Since the Internet is global, jurisdiction is often the most difficult policy issue to address. To this end, voluntary initiatives are becoming increasingly popular in the digital space due to their ability to address Internet-related issues dynamically. Voluntary, self-regulatory and industry-based are all terms used to identify initiatives that are produced and enforced by independent (private) bodies or trade associations and that focus on issues of limited scope and specific subject matter.
The United States Patent and Trademark Office (USPTO) recently issued a request for input on voluntary best practices in the context of intellectual property. In light of this request, and considering the newly formed Copyright Alert System (CAS) and other similar policy exercises around the world, the Internet Society offers its own reflections on voluntary policy initiatives. By outlining the advantages and disadvantages of self-regulation and identifying a set of best practices for self-regulation that include the need for periodic reviews, external and internal checks and transparency, amongst others, the Internet Society wishes to promote the thesis that voluntary-based initiatives can prove efficient if they are carefully balanced and do not depart from the established principles and processes of rulemaking.
The scale of Internet growth is reflected in the sheer complexity of crafting Internet policies and regulations; it is an indisputable fact that the Internet has put to the test the efficiency of traditional law making and its ability to deal with emerging technology trends. In order to deal with the increasing gap between legal and technology frameworks, many policy makers turn to systems of self-regulation and voluntarism. These systems are not new – since the lex mercatoria (law of the merchant) in the Middle Ages, self-regulation has been a tool for regulators and policy makers to deal with complex commercial issues in an expedient manner.
Self-regulation mechanisms can be efficient and offer a plethora of advantages, but they should not be considered a panacea. A clear rationale regarding their mandate and parameters is essential for self-regulation frameworks to address their intended purpose and stand the test of time. At the same time, tools for measuring the effectiveness of these approaches should also be in place to ensure that the outcomes are consistent with expectations and that they continue to meet the public interest over time.
Since the Internet is a global network of networks, national Internet public policies, whether they are based on self-regulation or traditional regulation, often have impacts beyond national borders. Thus, as voluntary-based mechanisms gain traction amongst policymakers as an alternative way to address complex online legal issues, these mechanisms will also have global implications. This is particularly true because, while self-regulatory tools are evolving in different jurisdictions, there are lessons to be learned across all of these experiences. Further, there are global policy lessons to be learned in terms of the effectiveness, processes and sustainability of these kinds of policy tools.
What follows is a set of thoughts concerning self-regulation. Self-regulatory frameworks are appealing because they can be narrowly tailored to deal with specific legal issues but these tools are not the solution to all problems. In fact, successful self-regulation can only happen within established and legitimate frameworks of rule-making.
Advantages and Disadvantages of Voluntary-based Initiatives
The Internet Society is, generally, in favor of industry-based initiatives to address various issues, including those related to intellectual property; however, we are also mindful of the risks associated with these approaches. Whether based on theories of delegation or contract law, facilitated by the State or being a by-product of a “self-enforcing power, stemming from the direct deprivation of a valuable right”, the role of private bodies in self-regulatory environments is key. At the outset, such private entities could prove beneficial in overseeing market participants’ actions through different processes such as standard setting, certification, monitoring, brand approval, warranties, product evaluation and arbitration.
Academic literature and market practices (e.g. the European Advertising Standards Alliance in Europe) indicate that for self-regulatory mechanisms to be successful they should include standards for real consent, which help ensure the legality and legitimacy of contractual agreements as part of private regulation. In cases where consent is not present, public legal institutions are required to specify the criteria that entitle private regulatory regimes to acquiescence and immunity. But, ultimately, it is adherence to minimum standards of justice and fairness that determine the success of industry-based initiatives. Rules, consequential to private regulatory efforts should ensure that – to the extent possible – interested and affected parties are able to participate in voluntary based initiatives on an equal footing.
Based on this set of minimum standards, private regulation offers some notable advantages in allowing the market to take the lead, offer a multitude of alternatives and ensure that fundamental values are protected by allowing interested parties to participate in the formation of rules and principles that are not subject to the cumbersome processes of traditional law making. As David Post, professor at Temple University and Fellow at the Center for Democracy and Technology, accurately put it: “We don’t need a plan but a multitude of plans from among which individuals can choose, and the market [...] is most likely to bring that plenitude to us”.
In a much similar vein, Robert Pitofsky, former Chairman of the Federal Trade Commission (FTC), referring to industry-led regulation, enumerated the following advantages:
· Self-regulatory groups may establish efficient product standards;
· Private standard setting can lower the cost of production;
· Private regulation helps consumers evaluate products and services;
· Self-regulation may deter conduct that is universally considered undesirable but outside the purview of civil or criminal law; and,
· Self-regulation is more prompt, flexible and effective than government regulation.
Industry regulation, however, also has significant disadvantages. The recent failure of self-regulatory models in the financial markets leads many to question industry-based regulation as an efficient model. To this end, some scholars have challenged the legitimacy of private bodies, such as cyber-authorities, to deal with issues emanating from the Internet. Their main concern relates to the ability of such authorities to create policy and enforce rules that traditionally fall within the remit of the democratic state. In the words of a US scholar: "State-centered law - both legislation and constitutional adjudication - carries considerable weight in legitimizing creation beliefs and practices and delegitimizing others. […] A cyberauthority, in contrast, would have to start from scratch". 
One of the most worrying aspects of private regulation is, arguably, that many of its advantages rest on false premises and loose criteria. Amongst other things, private regulation may easily fail to protect democratic values; it can neglect basic standards of justice; and it is often less accountable than traditional governmental rule making. More importantly, because of the Internet, self-regulation is increasingly initiated and imposed by new Internet sovereigns that do not necessarily operate within traditional principles of rule making. To this end, private regulation often suffers from a lack of accountability and due process.
Effectiveness of Voluntary Initiatives
Various countries, including the United States and the United Kingdom, have consistently supported meaningful, consumer-friendly, self-regulatory regimes for various issues ranging from privacy to intellectual property. As the United States government has stated: “To be meaningful, self-regulation must do more than articulate broad policies and guidelines”.
The Internet Society fully agrees with this premise – self-regulation emanating from voluntary based initiatives should extend to incorporate specific and reliable principles that allow participants and consumers/users to have a clear understanding regarding the delineation of the parameters, the scope of self-regulation and the accountability mechanisms for the public interest.
We will approach the effectiveness of self-regulation from the perspective of a) Accountable and Transparent Information Practices; and, b) Characteristics of Effectiveness.
A) Accountable and Transparent Practices
1. Access: At a minimum, users need to be provided with the option of having access to information regarding every voluntary-based mechanism that might affect them and their online experience. In this respect, every actor engaged in voluntary practices should take reasonable steps to ensure that users are kept updated and informed about the process and substance of such self-regulatory initiatives.
2. Enforcement Policies: Enforcement policies articulate the steps that will be taken when illegal action is detected. On this basis, users should be able to understand the scope of enforcement and the parameters of their activity.
3. Notification: Enforcement policies, especially those emanating from self-regulatory schemes, should be made known to users as much in advance as possible. Notification written in language that is clear and easily understood, should be displayed prominently, and should be made available before users are asked to sign any contract regarding their Internet connection.
4. Education: Two things are important in this context: first, education should reflect unbiased opinions and should be conducted by trusted third-party sources, including academia. Second, education should not be limited to users but should extend to every entity or individual who is part of the Internet ecosystem.
5. Data Security: Given the volume of data collected in such industry-based schemes, private bodies creating, maintaining, using or disseminating records of identifiable personal information must take reasonable measures to assure its reliability and take reasonable precautions to protect it from loss, misuse, alteration or destruction.
B. Characteristics of Effectiveness
For a self-regulatory regime to be effective, it needs to include mechanisms that assure compliance with its rules and appropriate recourse to an injured party when rules are not followed.
1. Due process: every voluntary-based initiative should adhere to basic and fundamental principles of justice and fairness, including, but not limited to, the right of a hearing, legal certainty and adherence to the rule of law.
2. Judicial safeguards: all voluntary-based initiatives should encompass internal and external checks and balances. One such balance is the right of appeal. However, this right is not sufficient on its own and should be accompanied by a process that is affordable and accessible; it should further incorporate rules that are clear and that incentivize its use. Finally, it also requires the independence and impartiality of all participants.
3. Transparency: Disclosure of information to the public about voluntary schemes is another significant feature of voluntary-based initiatives. This information should include, but should not be limited to, the system’s rationale, end goal, how it affects interested parties, etc.
4. Balanced and proportionate rules: Voluntary based mechanisms should strive towards creating rules that are balanced, reflect the rule of law and are proportionate.
5. Trust: Trust is becoming increasingly important in the spheres of policymaking and law crafting. Any voluntary-driven initiative should seek to build and create an environment of mutual trust first, amongst the actors setting up the system, but also between the actors to which the system is addressed.
6. Periodic reviews: All systems, including public ones, should be periodically reviewed and evaluated as to their effectiveness. Such reviews test the efficacy of policy mechanisms and their ability to provide answers to the issues they were originally created to address. In the context of the United States’ Copyright Alert System (CAS), for instance, the need for a robust review after its first year of operation is key in identifying potential gaps and omissions, a possible revision of its safeguards, a reframing of its deliverables and the precise role of the various actors.
7. Public Interest: To the extent that self-regulation aims at setting standards that principally reflect industry needs, there is a potential for the standards to reflect the industry’s interests rather than the public interest. It is, therefore, essential that self-regulation is neither collusive nor open-ended; it should not operate outside the wider regulatory framework or act independently. In such instances, the role of the government and public interest groups can aid in a monitoring function and lessen the opportunity for abuse and opportunistic changes to the self-regulatory mechanism.
Voluntary-based initiatives can prove valuable tools in the complex environment of policy making. Unlike public regulation, which is increasingly being seen as too slow to address the needs of a fast-paced Internet environment, self-regulation can provide efficient answers to important legal questions. But, self-regulation should not be seen as a cure for all the issues appearing in cyberspace. It is important that mechanisms based on industry initiatives include very specific and solid provisions relating to due process, fairness and justice; in addition, periodic review mechanisms as well as internal and external checks, including the right of an appeal, should also be parts of voluntary-based initiatives. To this end, a careful balance regarding goals and scope is necessary in order to ensure that self-regulation does not become a vehicle of abuse or misuse.
 Henry H. Perritt, Jr., Towards a Hybrid Regulatory Scheme for the Internet, 2001 U. Chi. Legal F. 215
 David G. Post, What Larry Doesn't Get: Code, Law, and Liberty in Cyberspace, 52 Stan L Rev 1439, 1458 (2000)
 Neil Weinstock Netanel, Cyberspace Self-Governance: A Skeptical View from Liberal Democratic Theory, 88 Cal L Rev 395, 497-98 (2000)
 Henry H. Perritt, Jr., Towards a Hybrid Regulatory Scheme for the Internet, 2001 U. Chi. Legal F. 215
This blog post originally appeared at the Internet Society Public Policy page.
Konstantinos Komaitis, the individual!