Competition Regulation and Internet Policy

If you’re interested in domestic internet governance in Canada, you need to know something about competition regulation. The same is true in much of the rest of the world, where the telecom industry underwent liberalization (was opened to market competition) and still exhibits high levels of concentration and regulatory concerns about market power. For instance, Uta Meier-Hahn’s survey of network operators found that competition regulation was one of the most common forms of interconnection regulation reported by participants. Here in Canada, telecom competition has been regulated ever since we moved away from monopoly control. This is why it’s inaccurate to describe what happened in the 1990s as deregulation. The neoliberal fantasy may have been to get government out of the way and turn everything over to market forces, but government decided it was going to take some purposeful regulation to get us there, and we never got there.

I’d like to distinguish between two basic kinds of competition regulation that matter: positive and negative (modifying this previous contrast I used to talk about ISP responsibilities). The first mode of regulation is the set of regimes, like mandated wholesale, that specify how competitors are required to behave and relate to one another, along with other ways of addressing imbalances or insufficient competition in the market. This includes the way that smaller companies or “new entrants” are given certain advantages and protections (“set-asides”) in spectrum auctions. All of these rules are justified as promoting more, better, or fairer competition — they are positive forms of regulation, in that they create, cultivate, and encourage that which is desirable. They are premised on the idea that insufficient competition is a problem and that liberalization is incomplete. In other words, the market is not competitive enough, and whatever goal the policy transformation of the 1990s was meant to achieve has not been reached. The state can structure and configure conditions so as to improve things, or set up market actors in a way that increases competitiveness. These are the kinds of competition regulation that matter most in the day-to-day of the telecom industry, and they are often structured through a system of CRTC decisions (ISED when it pertains to spectrum).

The second set of regulations are essentially negative — they ward off the undesirable. Where positive regulations try to seed and fertilize the field (giving more fertilizer to the plants that need it most), negative regulations tear out the weeds. This metaphor shows how the distinction is not entirely neat, since tearing out weeds also creates better conditions for growth (there is a positive aspect to negative regulation and vice versa), but hopefully you get the idea — this is a heuristic. Both are forms of regulatory action, but the first promotes the good while the second restricts the bad. Negative regulations focus on what will not be tolerated and work to eliminate or prohibit it. They impose sanctions or consequences for undesirable conduct, drawing lines that market actors shall not cross.

Canada’s Competition Bureau is a key actor when it comes to these negative forms of regulation, not only in the way it punishes abuses of market power (albeit rarely in telecom) but also in the distinctions it makes when approving or rejecting mergers. There is a positive dimension here, in that a merger or consolidation can be approved along with conditions meant to promote competition, and the Bureau generally holds that mergers are good for competitiveness, but it also draws lines that big businesses wishing to swallow competitors will not cross. These lines can be quite permissive, as in Bell’s recent acquisition of MTS, but with so few major players left in the telecom market, further consolidation among these giant firms (the recurrently raised prospect of a Bell-TELUS merger) would be tricky. While positive regulations try to foster competition, negative regulations prevent us from slipping back to monopoly.

This is why issues around concentration of power and competition are so fundamental for internet governance — domestically, they make the difference between a world of multiple interconnected networks, and a world under monopolistic control. On that note, Dwayne Winseck and his team at the Canadian Media Concentration Research Project have been an important resource for tracking shifts in consolidation and concentration in Canadian media, ISPs included. With the latest annual update just released, I encourage you to check it out for lots of details and background. One of the takeaways is that when it comes to internet access in Canada, things are holding relatively steady. This means that the positive regulations aren’t being very successful in effecting change in the market, while the negative ones help maintain the status quo.

Review of Susan Landau’s Surveillance or Security?

I’ve been going through my files recently, and discovering some that I had forgotten. A couple of times now I’ve had submissions to journals fall into a void. Ideally, when this happens the piece can still find a home somewhere else, but this was a review of a book from 2010, written in 2012, and in 2013 Snowden changed the world and I felt the need to move on. Still, Landau’s book remains valuable and some of these issues are even more salient today (also of note, in the 1990s Landau co-wrote Privacy on the Line with Whitfield Diffie).

Book Review: Landau, Susan. 2010. Surveillance or Security?: The Risks Posed by New Wiretapping Technologies. Cambridge, MA: MIT Press.

The choice between security and civil liberties remains a commonplace way of framing many surveillance debates. Susan Landau’s argument in Surveillance or Security? is that many surveillance technologies and systems not only compromise privacy, but may actually make us less secure. This thesis, while worth repeating, will not be novel for some readers familiar with surveillance and security debates. However, readers who are already well-versed in criticisms of the freedom-security opposition will still find a great deal of value in Landau’s book, including the nuance of her more policy- and technology-specific arguments and the wealth of detail she provides on various electronic surveillance practices. The patience and clarity with which Landau walks readers through this detail is commendable, and the book makes many technical and legal matters understandable to those unfamiliar with telecommunications, electronic surveillance, or U.S. law. Despite this, reading Surveillance or Security? from beginning to end requires a considerable interest in the subject matter, and much of its detail will be superfluous to those interested in more general surveillance questions or electronic surveillance in a non-U.S. context.

The nuance of Landau’s argument preserves a legitimate and lawful role for surveillance by state actors, and her critique is targeted specifically at emerging forms of surveillance made possible in the age of digital networks. Of greatest concern is the ability to embed surveillance capabilities into our increasingly capable communications infrastructures. Justifications for expanded or “modernized” police and national security surveillance capabilities are often premised on the need to bring telephone-era laws and abilities up to date with the internet. Landau provides a very effective introduction to telephone and packet-switching networks, the development of the internet, and the contemporaneous changes to U.S. surveillance law and practice. In the process, she shows how the nature of communication and surveillance has been transformed, and how inappropriate the application of telephone-era surveillance logic can be for internet architecture. While telephone and packet-switching networks are now deeply integrated, the reader will learn just how difficult “wiretapping the internet” is when compared to traditional telephone wiretaps. On the other hand, the book also discusses the vast amounts of information available about our digital flows, and how these possibilities of data collection introduce new dangers.

The most forceful of Landau’s arguments are against the embedding of surveillance capabilities into our networked communications infrastructure, as this amounts to an “architected security breach” (p.234) that can be exploited or misused. The main example provided by the author of such modern wiretapping gone wrong is the activation of surveillance capacities embedded in the software of an Athens mobile phone network during 2004 and 2005, wherein parties unknown targeted the communications of Greek government officials. While this case of wiretapping was highly selective, Landau also cites the current U.S. “warrantless wiretapping” program to illustrate the dangers of overcollection. A third case, the FBI’s misuse of “exigent letters” to acquire telephone records after September 11, shows how the risk of overcollection is exacerbated when wiretapping cannot be audited and fails to require “two-organizational control”. In the exigent letters case, FBI investigators and telephone company employees working closely alongside one another were able to nullify institutional boundaries and circumvent legal requirements. From these cases, Landau concludes that “making wiretapping easy from a technical point of view makes wiretapping without proper legal authorization easy” (p.240). Among her chief concerns is the historical propensity to take advantage of surveillance-ready technologies to target journalists and political opponents, and the possibility of “nontargets” being caught up through overcollection.

Surveillance or Security? offers solutions as well as warnings, and these are primarily oriented towards safeguarding communications security. As a general prescription, Landau argues for partitioning our networks to a greater and more sophisticated degree. This includes increased use of identity authentication and attribution for particular networks, and keeping others entirely inaccessible from the public internet. But Landau expressly opposes building identity authentication and surveillance mechanisms (such as deep packet inspection) into the internet itself. Overall, this is a sensible solution that can address “digital Pearl Harbor” fears while preserving the general openness of the internet. Our networks already have “walled gardens” for governments and corporations, and Landau calls for more effective partitions as well as open public vetting of security mechanisms (pp.240-241). Sanctioned wiretaps should also be auditable and not under the independent control of any one organization.

Ultimately, questions about how the internet should be designed and governed boil down to what we value in the network. Many have pointed out that the values which drove the development of the internet did not include ensuring its security, so that concerns over identification, authentication, malware and cyberattack surfaced later in its development and are difficult to resolve. The debate over whether internet governance and internet architecture needs to be revised in the interests of security continues to this day, but the choice is not simply between security and openness. Rather, “security” can point to a whole host of challenges, some of which can be in opposition to one another. Landau does indeed distinguish between different security threats, but while there is a chapter entitled Who are the intruders?, no equivalent breakdown is given of “whose security” is of primary interest. Instead, Landau treats personal security, national security, and corporate security as compatible and amenable to some of the same solutions. She explicitly values personal privacy and the open innovation made possible by the internet, but also warns against growing foreign threats to the economy and critical infrastructure of the United States. The closing sentence of the book calls for communication security “to establish justice, maintain domestic tranquility, and provide for common defense” (p.256), and it is in the tensions between these three objectives that the supposedly false choice between freedom and security materializes once again.

Landau promotes the value of privacy and journalistic freedom, puts the danger of terrorism “in context” (p.222), and warns against heavy-handed approaches to illegal file-sharing (pp.34-35). But in debating the appropriateness of embedded surveillance or privacy-enhancing cryptography, the reader also learns that “we must weigh the costs” (p.35) or the advantages against the disadvantages (p.219) of such technologies and practices. The problem is that different readers may have rather different conceptions of who is denoted by the “we” in such a formulation, and where the costs accrue. If the security threat is the “havoc” that can be wreaked through an internet connection multiplied by the size of the cyber-capable Chinese army (as Landau suggests in the epilogue, p.255), then Richard Clarke and Robert Knake’s (2010) proposal to embed surveillance and filtering at internet service providers (ISPs) to deal with foreign cyberattacks might seem quite reasonable (such surveillance would receive “rigorous oversight by an active Privacy and Civil Liberties Protection Board to ensure that neither the ISPs nor the government was illegally spying on us” [Clarke & Knake 2010, p. 162]). The principles which guide Landau’s judgments are those embodied in the U.S. Constitution, the open and innovative possibilities of our networks, the right to privacy in communication, and the need to be protected from electronic “intruders” and “threats”. But in making these various appeals Landau is also providing the means to undercut her argument against embedded surveillance, if one values a particular type of security or fears a threat to security over others. She closes with an appeal to consider communications security as vital to both national and personal security, to democracy as well as defense (p.256), but the argument that embedded surveillance makes us less secure is on weaker footing when faced with the catastrophic specter of a cyber-war with China.

In the end, readers may find themselves confronting the dilemma identified by Jonathan Zittrain (2008, pp.60-61), who argues that “the cybersecurity problem defies easy solution, because any of the most obvious solutions to it will cauterize the essence of the Internet”. Like Zittrain, Landau thinks we can improve cybersecurity without sacrificing the internet’s propensity for openness and innovation, but at times she seems to address her arguments more at U.S. policy makers, security officials, and American citizens than at a general readership. The book includes a chapter devoted to analyzing “the effectiveness of wiretapping” in the furtherance of national security and criminal investigations, and the threat of China’s espionage and cyberattack capabilities looms large against a “United States that is being weakened by the very information technologies that brought the nation such wealth” (p.171). Landau’s approach may appeal to those Americans in greatest need of convincing, but it marginalizes arguments based on more critical premises, such as the potential of open networks and private communications to facilitate valuable forms of disruption and social change.

Surveillance or Security? focuses on the U.S. because the complexity of wiretapping policy is better explored through one nation’s economic and legal perspective, and Landau claims that “it should not be hard to reinterpret the issues from the perspective of other nations” (p.10). The networks that constitute the internet certainly warrant analysis on the level of the nation-state, in particular due to the increased assertion of territorially-based state power over and through the internet. The U.S. also deserves study in its own right by anyone interested in global telecommunications, not only because of the influential role of the U.S. in the history of telecom, but because the world’s telecom networks remain disproportionately dependent on U.S.-based institutions and infrastructure. The layout of global fiber-optic cable makes the U.S. “a communications transit point for the entire world” (p.87), and the overall layout of the World Wide Web also remains largely U.S.-centric.

However, many of the details of U.S. wiretapping legislation and practice will not be of interest either to the general reader or to the scholar interested in broader questions of surveillance and telecommunication. The book’s detailed analysis of the U.S. case is therefore its greatest strength, or, for a more general audience, its greatest weakness. Among other strengths are the clarity of Landau’s descriptions of network architecture and internet history, which do not presume prior knowledge on the reader’s part. Surveillance or Security? is clear and approachable, and contributes some much-needed scholarship on the intersection between state and private institutions underpinning contemporary surveillance systems. At its best, it pours cold water on the need to overhaul the internet and expand the scope of electronic surveillance, but Landau is not above fanning the flames to give the issue of communication security some added urgency. In between, surveillance scholars will find plenty of value in the book’s well-researched detail and Landau’s considerable expertise.

One of the headings in the book, What it means to “get communication security right”, remains an open question, with governments moving slowly on the issue, and private institutions largely pursuing their own policies. While it seems clear that securing our communications networks will not be quick or easy, more immediate concerns are the poorly considered proposals to embed and institutionalize surveillance regimes, with their attendant harms. Surveillance or Security? contributes to an important conversation, injects caution into a frequently overheated discussion, and offers much of substance for those acquainting themselves with communications security and surveillance.


Clarke, Richard A., & Knake, Robert. (2010). Cyber War: The Next Threat to National Security and What to Do About It. New York: Ecco.

Landau, Susan. (2010). Surveillance or Security?: The Risks Posed by New Wiretapping Technologies. Cambridge, MA: MIT Press.

Zittrain, Jonathan. (2008). The Future of the Internet–and How to Stop It. New Haven: Yale University Press.


Bell, the British Columbia Telephone Company, and Cold War Surveillance

Late last year, a story broke about a researcher trying to get the Privy Council Office to release a secret surveillance order from the 1950s. This once again demonstrated why news investigations are vital for holding government accountable: the day after the CBC published its story, the PCO decided to release the file, and Dennis Molinaro could finally finish a journal article on the topic. More recently, he published the source documents he got from the PCO as a PDF, which, if you’re a security & surveillance geek like me, makes for great reading alongside his journal article (big up Dr. Molinaro!).

As a result, our understanding of Canadian state surveillance and Cold War security practices has had a significant boost. Something I discovered a couple of years ago was the difficulty of figuring out what police telephone surveillance in Canada was like prior to the era of the Privacy Act (the 1970s and earlier). These documents give us a view into just one particular surveillance program, and only in its early years. The file deals with the period around 1954, when the RCMP’s very, very secret PICNIC program needed to be reauthorized and there was a need to expand its wiretapping beyond Bell to other companies. Interestingly, one option (initially favored by Bell’s lawyer) was to use section 382 of the Railway Act, which allowed the government to take control of telephone infrastructure (“place at the exclusive use of the Government of Canada any electric telegraph and telephone lines, and any apparatus and operators which it has”), but this also required an Order in Council. To put the program on firmer legal footing, the government wanted the company’s cooperation in accepting warrants under the Official Secrets Act (something the British Columbia Telephone Company was already happy to do). Some readers may wonder how railway regulation got connected to this mess, and maybe I’ll explain the pre-CRTC link between rail and telecom in another blog post. However, the government of the day, under Prime Minister Louis St. Laurent, feared that using the Railway Act as a “cover plan” to govern surveillance was too much of a stretch, though they seemed prepared to go that route if Bell didn’t see things their way, and prepared some dubious legal justifications for doing so.

Bell’s position gave the government significant “difficulties”, and I would love to know the company’s reasoning. Presumably, using the Railway Act as a secret justification would simply have been easier, without the paperwork of warrants. But the company was persuaded to agree with the government’s view, and the resulting surveillance regime targeted “subversives” and national security threats, with warrants written for “a given area” rather than for individuals. The regime seems to have carried on through the 1970s, the decade when Canada’s initial privacy and wiretapping laws were developed, replacing the previous jurisdictional patchwork.

The documents released by the PCO give us a fascinating insight into early domestic telecom surveillance in Canada, but this program was certainly not representative of how police investigations were carried out in Canada. The RCMP’s (variously renamed) Special Branch/Security Service carried out tasks currently performed by CSIS, with a list of targets informed by a Cold War ideology that saw homosexuals, anti-war activists, and unions as national security threats. Today, the internet and international terror networks are sometimes blamed for making foreign and domestic communications indistinguishable, but during the Cold War domestic surveillance was routinely carried out under the presumption that the targets were actually foreign agents or channels for foreign influence.

PICNIC was surveillance that was never intended to see the light of day, and it seems that early criminal investigations by Canadian police using wiretaps were also generally not meant to be revealed as evidence in court (it was apparently against RCMP policy to use wiretaps in 1973 and 1974, but they were still used for criminal intelligence). Molinaro writes about how “The monitoring of Canadians required a close level of partnership with corporate society; in this case, with telecommunications companies like Bell Canada”. However, I was reminded of a 1977 wiretapping story where the RCMP finally decided to use wiretap evidence in a drug case, and an officer explained in court about his routine practice of looking like a Bell employee and simply breaking into an apartment building’s terminal room with a screwdriver whenever he needed to tap a phone. In these cases, police did what they wanted with the phone network and there’s no indication that company executives ever complained (if they were even aware).

Kind of reminds me of this other time Canadian police decided to hack the phone network without permission.


The CRTC and the Public

Summer is drawing to a close, so it’s back to the usual schedule for me. There was no blog post last month, but if you were paying attention you will know the news that the CRTC has a new Chair. Jean-Pierre Blais is out and Ian Scott is in. I have little basis for predicting what happens next (though the status quo tends to be the safest bet), so let’s look back before we look forward.

Blais served his term in the internet era. He was the first Chair to grapple with a more mature ‘internet ecosystem’ — that is to say, a political economy showing some stability around a limited number of giant players: content providers (Facebook, Google/Alphabet, Netflix) and incumbent ISPs. In this respect, he recognized a need to deal with certain issues (net neutrality), and generally avoided making big, stupid mistakes.

But as many described it, Blais’ term can be defined by the CRTC’s focus on putting consumers first, which means the industry didn’t always get to decide what was in a consumer’s interest, and incumbents didn’t always get their way in the decisions. This should be situated in a wider context, stretching back to the origins of the CRTC’s regulation of Canadian telecom.

In a Globe and Mail article from Aug. 6, 1976, titled The ‘consumer’s empty chair’, Geoffrey Stevens writes about the CRTC’s new objectives. 1976 was the year the CRTC first assumed responsibility for telecom regulation, previously handled by the Canadian Transport Commission (CTC). The change was meant to herald a new era of openness, and would “facilitate the involvement of the public in the regulatory process”, allowing interveners like consumer groups to participate “in an informed way”. It would be a move away from the “court-like atmosphere” of the CTC and towards something more informal. Also, copies of applications would be disclosed to parties that might want to intervene, and telecom companies like Bell would have to disclose information in public that they would previously file in confidence to the CTC (justifications for costs and prices).

The last of these was particularly irksome to Bell, whose lawyer subsequently warned the CRTC that such disclosures would hurt the company, and that if all competitors had to similarly disclose they would be “hurting each other”. Well, more than forty years later, confidential submissions and costing information remain controversial, and Stevens’ question about the “consumer’s empty chair” remains outstanding: who will represent the public interest before the CRTC (or who will pay for the public’s lawyers)? There has certainly been progress, and much of it has come during Blais’ tenure. In addition to PIAC, there are now a significant number of new individuals and organizations participating in CRTC proceedings through different means. This allows the CRTC to claim broader legitimacy for its decisions, but participants are far from equal, and the Commission gets to decide how much to weigh their opinions. It’s still public participation bolted onto a complex regulatory apparatus, without much in the way of support (or a CRTC website that people can effectively use).

At a time when the FCC is experiencing something of a crisis over transparency and openness to the public, the CRTC is in better shape, but it still has a long way to go. Over to you, Mr. Scott.

CSE’s Cyber Shakeup

The House of Commons is now on summer break, but before everyone headed off, the Trudeau/Goodale Liberals introduced a monumental rework of Canadian intelligence and security institutions. This accomplishes some of what the Liberals previously indicated, but as Wesley Wark points out, such substantial changes to Canada’s national security bureaucracy are surprising. The implications are complex, with major reform for those overseeing CSIS and CSE (two new institutions: the National Security and Intelligence Review Agency and the Intelligence Commissioner) and changes to CSE’s mandate.

Experts and politicians have some time to chew on this bill’s different aspects, and for all things CSE, an important resource is the Lux Ex Umbra blog. However, here I want to offer a couple of thoughts on the cyber aspects of the reforms. As others have pointed out, these reforms will help to normalize certain types of acts (network exploitation and attack). One argument is that Canada’s new framework will help normalize in the international arena what a lot of states have been doing covertly, under dubious legal authority — “effects” like hacking and exerting influence in various domestic and foreign jurisdictions. The Canadian approach could either be a model for others interested in legal reform, or contribute to making these actions more acceptable and legitimate around the world. Domestically, this is also a normalization of the sorts of things that CSE has done, or wanted to do, for some years now.

There’s an upside and downside here. If you assume that this is the sort of stuff the Five Eyes and CSE would be doing anyway, it’s good to have it under an explicit legal framework that can “reflect the reality of global communications today and participation in international networks such as Five Eyes”. From this view, the reforms are an improvement in accountability and oversight. On the other hand, if you think this is precisely the sort of thing governments should reject (and the focus should be purely on cyber defence and passive techniques), then the last thing we should do is put a government stamp on it. Instead of updating the law to legitimate what has been going on, we need to stop the most controversial activities revealed by Snowden (weakening crypto, hacking Google data links and compromising LinkedIn accounts of Belgian telecom engineers).

In Canada, we have never had a debate about these questions. The national security consultation that ostensibly informs this move was not designed to ask them. Canada’s role in the Five Eyes is not under revision, and Bill C-59 is meant to better “align ourselves” with these cyber “partners”. The partners are meeting this week, amid an active push by allies (specifically, Australia) to get Canada’s cooperation in countering encryption. There’s little indication where Canada stands on these questions today. However, given what appears to be our holding-steady with the Five Eyes and C-59’s new legal framework, CSE can still end up promoting insecurity, in secret, at our allies’ request.

Ultimately, the success of C-59 will depend on how effective the new accountability mechanisms are. Canada’s previous experience includes government assurances about legal compliance and oversight, while routine illegality and surprising legal interpretations were carried out in secret. Some of this previous experience (like the CSIS ODAC database) is addressed in C-59, but on the most fundamental question — what kind of security will Canada promote in the world? — we seem to be doing what Canada has done since we hitched our national security to the U.S. in late WWII: defaulting to our allies. We may have some bold new security legislation (and a Minister of Foreign Affairs who recently made big statements about the need to “set our own clear and sovereign course”), but old concerns about the lack of a distinctly Canadian approach to international and cyber security are as relevant as ever.

On Infrastructure


Recently, I was reading through an edited collection titled The turn to infrastructure in Internet governance. Few of the chapters held my interest for long, and for a book supposedly about the infrastructure ‘turn’, too many of the topics had already been well-covered in the internet governance literature (like organizations devoted to internet governance and the DNS). In the book’s introductory chapter, DeNardis and Musiani write:

…there is increasing recognition that points of infrastructural control can serve as proxies to regain (or gain) control or manipulate the flow of money, information, and the marketplace of ideas in the digital sphere. We call this the “turn to infrastructure in Internet governance.” As such, the contributions in this volume… depart from previous Internet governance scholarship, by choosing to examine governance by Internet infrastructure, rather than governance of Internet infrastructure. (p.4)

I largely want to put aside the question of how well the contributions in the book achieve this, and just focus on the topic of ‘governance by infrastructure’, and what this means. First, governance by infrastructure necessarily implies governance of infrastructure, but the emphasis shifts to particular features of infrastructure as points of control through which various social processes can be governed. So what do we mean by infrastructure? For DeNardis and Musiani, citing Bowker and colleagues:

the term “infrastructure” first suggests large collections of material necessary for human organization and activity—such as buildings, roads, bridges, and communications networks. However, “beyond bricks, mortar, pipes or wires, infrastructure also encompasses more abstract entities, such as protocols (human and computer), standards, and memory,” and in the case of the Internet, “digital facilities and services [ . . . such as] computational services, help desks, and data repositories to name a few… Infrastructure typically exists in the background, it is invisible, and it is frequently taken for granted. (p.5)

When it comes to the internet, infrastructure is more than just the ‘plumbing’ — it includes ‘abstract entities’ and social organizations, and this inclusive understanding might lead us to see all sorts of traditional internet governance studies as studies of infrastructure. So let’s try to narrow the focus to what makes infrastructure distinctive, besides the fact that it is frequently invisible.

Common definitions of the term discuss infrastructure as foundations, frameworks, and whatever provides support for something. There is a lot of overlap with the definitions of a public service or utility here, and this is why we typically think of electricity, water, and roads as infrastructure — without the underlying support of these systems or networks, countless social processes would grind to a halt. The early internet supported particular and specialized kinds of activities, but today it’s easy to see our digital networks as underpinning communications and social relationships in general, and therefore functioning as a kind of public good.

By seeing the internet as infrastructure, we might ‘turn’ to look at all of the ways it contributes to our daily lives. Much of this support is effectively invisible, and only comes to our attention when it stops working. The closer we get to the future promised by the Internet of Things, the more disruptive these outages will become. This is reflected in the classification of telecom networks as “critical infrastructure” — a category that has been the focus of government concern in recent years, leading to a proliferation of partnerships, policies, frameworks, and standards.

Critical infrastructure is governed so that it does not break, and so that it continues to provide essential services with minimal interruption. This is a developing and little-publicized topic (given the overlap with national security), so this sort of ‘governance-of-infrastructure’ has actually received little attention in internet governance scholarship. In contrast, the ‘governance-by-infrastructure’ that DeNardis and Musiani identify is about more than keeping the lights on and the data packets moving, and if we’re going to take this infrastructure turn seriously, one of the most important places to look is at ISPs as points of control. The idea that society can be governed through ISP responsibilities is now an old one, but it remains a common approach. ISPs have obligations to connect to each other (or other institutions), and are called upon to monitor, increase, shape, limit or filter connectivity. Google and Facebook may have become massive operators of infrastructure, but last-mile and middle-mile networks remain essential chokepoints for internet governance.

ISPs are inextricably dependent on material infrastructure, since they are fundamentally in the business of moving packets to and from customers through a physical connection. Even wireless ISPs are limited by the laws of physics, as only so much information can be carried through the air (where it is also susceptible to interference). Accordingly, wireless ‘spectrum’ is carefully divided between intermediaries and managed (in Canada) by ISED as a precious resource – with spectrum licenses auctioned to intermediaries for billions of dollars (licenses that come with public obligations). Owning a spectrum license is quite a different matter from actually using it, and to serve millions of customers, further billions of dollars must be invested in a system of towers and their attendant links. The wired infrastructure of ‘wireline’ ISPs can be even more expensive, since cable must run to each individual customer, requiring kilometers of trench-digging, access to existing underground conduits, or the use of privately-owned utility poles. This means that the rights-of-way which secured the early development of telephone networks remain important for anyone deploying wired infrastructure, further privileging incumbents who own conduit or have access to utility poles. These rights-of-way are also one of the only levers municipal governments have over telecom infrastructure, through negotiating or referring to municipal access agreements. However, struggles between municipalities and intermediaries over access to rights-of-way can be quite contentious, and may also be adjudicated by the CRTC.
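
That ‘laws of physics’ point can be made concrete with the Shannon–Hartley theorem, which puts a hard ceiling on how many bits a wireless channel can carry. Here’s a quick back-of-envelope sketch in Python (the 10 MHz channel width and 20 dB signal-to-noise ratio are illustrative numbers, not any actual Canadian spectrum allocation):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley theorem: maximum error-free bit rate of a channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative figures: a 10 MHz channel at a 20 dB signal-to-noise ratio.
snr_db = 20
snr_linear = 10 ** (snr_db / 10)  # 20 dB -> a 100x signal-to-noise ratio
capacity = shannon_capacity_bps(10e6, snr_linear)
print(f"Theoretical ceiling: {capacity / 1e6:.1f} Mbit/s")  # ~66.6 Mbit/s
```

No amount of investment moves that ceiling for a given channel and noise environment, which is why carriers pay billions for wider slices of spectrum and densify their tower networks instead.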

Finally, as with all things, I’m interested in the language we use to discuss these topics. Calling something infrastructure implies something different than utilities or ‘public works’, but all three indicate a relation to an underlying public interest. Since so much of it lies in private hands, infrastructure is currently the preferred expression, but even this term reminds us that we all jointly depend on these corridors, poles, pipes, electronics, and the people who keep it all running.



Canada’s Net Neutrality Code

Last week the CRTC released an important net neutrality policy (Telecom Regulatory Policy 2017-104) that got a lot of people talking. There’s been coverage by Dwayne Winseck, Michael Geist [1 & 2], Timothy Denton, Peter Nowak [1 & 2], and foreign reporting that understandably used the FCC’s approach in the U.S. for contrast. Jean-Pierre Blais reflected on the process in a recent interview (in which he also stated that the recent basic service decision was as close as the CRTC could come to recognizing broadband as a human right).

I’ve written about differential pricing before, and feel no need to summarize the decision here, or the decision-making framework it establishes, but some elements stood out for me. First, this is the CRTC’s most explicit discussion of net neutrality ever. The term net neutrality didn’t even appear once in the earlier decision on differential pricing, and there has previously been a tendency to frame these topics in the regulatory language of ITMPs. Now the CRTC has embraced the common lingo, and the latest regulatory policy is expressly “part of the broader public policy discussion on net neutrality. The general concept of net neutrality is that all traffic on the Internet should be given equal treatment by ISPs” [10]. Elaborating its definition of net neutrality, the CRTC states that “net neutrality principles have been instrumental in enabling the Internet to grow and evolve as it has”. These principles include innovation without permission, consumer choice, and low cost of innovation (low barriers to entry) [11]. Here we have the CRTC laying out some internet values — what made the internet so successful and what needs to be preserved (see Timothy Denton’s laudatory post). This document is remarkable because it lays out something approaching an ideal vision for Canadian telecom, with the internet as a central part. There were elements of this in the 2009 ITMP decision, which together with the recent differential pricing decisions (and subsection 27(2) of the Telecommunications Act) now “effectively constitute Canada’s net neutrality code” [156].

For the rest of this post, I’d like to take a closer look at what the CRTC imagines or desires for Canadian telecom, specifically the roles of different actors and their relations. First, ISPs are common carriers [22], which generally means they are prohibited from discriminating or picking favorites among content. Chairman Blais has since said he thinks this CRTC decision will “reinforce the fact” that ISPs are “mere conduits”, playing a limited role in carrying information from one place to another. Once ISPs start making decisions about content they become gatekeepers to that content, and other concerns come into play (including net neutrality and copyright). Differential pricing can be used for just such a gatekeeping function, which would have “negative long-term impacts on consumer choice” as the CRTC predicts ISPs would make deals “with only a small handful of popular, established content providers – those with strong brands and large customer base” [67].

The scenario that worries the CRTC is one where vertically-integrated ISPs use their control over internet traffic to direct consumers to their own content or that of their partners. Differential pricing is one way of controlling consumer behavior, but arguments in favor of the practice say that it provides consumers with choice, and allows ISPs to innovate and compete through these offerings. In response to these arguments, the CRTC was forced to lay out its vision for innovation and competition. Unsurprisingly, the CRTC’s vision is for ISPs to engage in the noblest form of competition, facilities-based competition: “when ISPs compete and differentiate their services based on their networks and the attributes of the services on those networks, such as price, speed, volume, coverage, and the quality of their networks” [46]. The most important innovations aren’t “marketing practices” like zero-rating, but improvements to ISPs’ networks [59]. ISPs should focus on the internet’s plumbing, and consumers will choose the superior network.

While ISPs are imagined to be competing for customers based on the quality of their networks, competition for services is best served by the “open-nature of the Internet”, which allowed “today’s large, established content providers” to grow and innovate. “In the Commission’s view, new and small content providers should enjoy the same degree of Internet openness in order to innovate, compete, and grow their businesses” [56]. Since ISPs are envisioned as pipes, innovation in content should come from the edges of the network (or at least, that possibility should remain open). Content providers need to be able to enter the market and practice ‘permissionless innovation’, by giving consumers what they want without needing to cut a deal with each ISP that controls the last mile [11].

If we are trying to achieve something like a level playing field for content providers, then we can’t ignore the massive advantages that established content giants currently enjoy, and it’s worth asking what else we might do to lower barriers to entry. Perhaps the whole idea of an ‘eyeball network’ is an obstacle, where the network’s users are imagined principally as consumers watching a one-way information flow. This may be fine if it’s easy for a new content provider to compete for eyeballs, but that’s not the case today unless you’re depending on an established content service (YouTube, Netflix) as an intermediary by having it carry your stuff. If we want to develop new ‘content’ in Canada, we need to recognize that in much of the country incumbent ISPs already act as the gatekeepers. If I wanted to start a new content service from my metaphorical garage, I would only be able to reach the global internet on my incumbent’s terms. These terms might include prohibitions on uses of their network, and the ISP’s control over addressing through NAT (imagine a world where every device could have a unique IP address…). Now imagine if I could easily get fibre to an internet exchange where I could connect to various international carriers… As with facilities-based competition, I think it’s important to try to imagine what an ideal world would look like when we’re talking about innovating and accessing diverse content over those facilities. And as with facilities-based competition, I worry that the CRTC is more concerned with preventing existing concentrations of power from getting worse than with taking active steps to realize a specific vision.
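
To put a number on that NAT aside: the reason ISPs hand out shared, translated addresses at all is the size of IPv4’s address space. A trivial sketch of the arithmetic (the round 8 billion population figure is an assumption for illustration):

```python
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_total = 2 ** 32    # ~4.3 billion -- too few for every device on Earth
ipv6_total = 2 ** 128   # ~3.4 x 10^38 -- effectively inexhaustible

print(f"IPv4: {ipv4_total:,} addresses")
print(f"IPv6: {ipv6_total:.2e} addresses")

# Assuming a round 8 billion people, IPv6 could give every person
# vastly more addresses than the entire IPv4 internet contains.
per_person = ipv6_total / 8_000_000_000
print(f"IPv6 addresses per person: {per_person:.2e}")
```

With universal IPv6 deployment, the garage startup’s server would be directly reachable by default; under IPv4 scarcity, whether it is reachable at all depends on the incumbent’s NAT and addressing policies.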

Digital Futures in Alberta

This post will offer some reflections on the Digital Futures Symposium on broadband, held March 16 & 17 in Cochrane, Alberta, and updates on Alberta’s SuperNet.

I attended the first Digital Futures Symposium in Calgary in 2013, which turned out to be a great opportunity to learn about topics that I was becoming very interested in, like the SuperNet and the work that was underway to turn Olds into a gigabit community. At the end of that event, it was evident that there was a lot of frustration in rural Alberta over inadequate connectivity, but there wasn’t much going on to address this frustration. The organizers (academics with the Van Horne Institute) and some of the participants expressed a desire to keep working on these issues through some sort of ongoing collaboration. While I had my doubts about what this would produce, three and a half years later Digital Futures has become more relevant and useful than ever, and it is now just one of numerous efforts around the province to collaborate on rural broadband.

The group of academics organizing the Symposium has seen some changes in personnel (one of the original professors from Van Horne is now CRTC Commissioner Linda Vennard, who visited and participated in that capacity), and each of the meetings sees new faces coming with their own local concerns and questions. This latest Digital Futures was attended by some of the actors that were notably missing in 2013 — TELUS was one of the sponsors, and was there to make clear that it was very interested in working to meet the needs of local communities (some people were recently doubting the incumbent’s interest in rural Alberta). Axia, an original sponsor, also came with a substantial delegation.

Digital Futures hosts an interesting mix of municipal and regional leaders, and now also gets more attention from provincial government (at the federal level, there was also an ISED policy presentation). For me, the most interesting presentation was by Stephen Bull — Service Alberta’s Assistant Deputy Minister for the SuperNet Secretariat. By the sounds of it, Bull has made a significant impression in his first year on the job, and at last week’s Digital Futures he provided some important statements about SuperNet, at a time when the future of the network is at an important juncture (see previous post).

photo credit: @barbcarra

As previously mentioned, people in charge of SuperNet tend to spend a lot of time countering misconceptions about it, and so Stephen Bull’s presentation was organized around a series of SuperNet “myths” (a very different set than those addressed by Axia). Here are some of the most interesting bits from the presentation about where things currently stand:

-The SuperNet contract will be decided before the end of the summer. Axia, TELUS and Bell are pre-approved to submit for an RFP, but it sounds like the Government of Alberta (GoA) is still figuring out what it wants. A key question is what role different actors are going to play (local champions, different levels of government, the “ISP community”). The Premier has had one briefing on the issue, but asked for a second one — so this file has her attention, and things seem pretty wide open.

-What does the GoA think about SuperNet as it currently exists? Well, according to Stephen Bull, the primary rationale for the network (connecting public facilities) was achieved, but last-mile connections for rural properties are a big outstanding issue. Service Alberta counts 36 ISPs in Alberta (others at the Symposium counted 38-40), but that doesn’t mean there is last-mile competition across the province, and we should be realistic about what market forces can achieve (“Myth #6: The private sector will solve this issue”). SuperNet 2.0 seems like it will continue to have the goal of improving connectivity beyond public institutions, but Service Alberta seems aware that this has been a key weakness of SuperNet 1.0, and wants to improve how ISPs (including new, community-owned ISPs) connect.

-Stephen Bull didn’t have very positive things to say about the existing SuperNet contract, and provided some fascinating background about how Bell and the GoA’s interests were negotiated in 2005. The result was a poorly-written contract that’s open to interpretation, provides few enforcement options for the government, and isn’t clear on the roles of the different parties. Presumably, even if the fundamentals of the relationship stay the same (with Axia maintaining its role as operator), these issues will be cleared up.

-One big question is just what the current state of the network is, and a government audit is underway to figure this out. In 2035 the GoA has the option to buy the Extended Area Network (currently run by Axia) for $1, but what exactly would they be buying? It wouldn’t be a singular network, because a lot of it is composed of leased fibre lines. Also, there are old electronics in need of repair (maintenance costs, previously covered by Bell, are actually a big part of the reason for revisiting the contract).

Some of the rural SuperNet infrastructure that the GoA will have the opportunity to buy for $1 in 2035

-The advice for communities is to “think very carefully before entering into any long-term agreement with an ISP before the future of SuperNet is known”. The worst-case scenario is a well-connected community that goes dark because something happens to the ISP (the advice being to include an option for a community to buy the infrastructure if an ISP leaves a jurisdiction).

-Finally, Stephen Bull expressed his perceived need for a provincial broadband strategy, which in his view would require finding a Ministry with the funding, capacity and will to do it (this is not the Service Alberta mandate). If no one is willing to take the lead on this at the GoA, folks at the Symposium wondered if we could produce something more bottom-up, and get the GoA’s blessing. Some of this is now being coordinated through the Van Horne Institute, and will be the next step for some Digital Futures participants.

Big-picture takeaways from the Cochrane Symposium:

In the two years or so since I attended Digital Futures 2015, broadband issues have exploded across rural Alberta. This hasn’t been uniform by any means (things are moving faster in the South than in the North), but whereas a couple of years ago it was tough to get many local governments to take the issue seriously, now councils have generally recognized how vital broadband is, and many are trying to improve connectivity. They’re working in partnerships with each other or making their own industry deals, principally with Axia. For a lot of regions and communities doing this, an early step is to get a consultant to tell them what the options are, and from the sounds of it Craig Dobson from Taylor Warwick has been winning the contracts for most of this work. A lot of rural communities are still at this research stage, but the hurdle of convincing rural governments that the internet matters has mostly been overcome.

What’s striking is the diversity of approaches to connectivity that are being discussed, although many of these exist only in potential. To paraphrase Lloyd Kearl (from Cardston County and AlbertaSW), public solutions take time: you have to engage with citizens and with various political and commercial organizations. Private industry can move quickly, and indeed TELUS and Axia have been busy putting fibre in the ground, while public bodies deliberate taking a more active role in providing connectivity (the story of Olds is ever-present in these deliberations). In the next few years, we will see what these alternate approaches to connectivity in rural Alberta will amount to. In the short term, the big question is still what will happen with SuperNet 2.0…


Alberta’s SuperNet

Alberta is home to a remarkable fibre-optic network called the SuperNet, and the provincial government is about to decide what to do with it. This post will briefly summarize how this situation came to be, and what’s at stake in the forthcoming decision about “SuperNet 2.0“.

Just a slice of the SuperNet

At the end of the 1990s, Alberta was riding high on oil revenues and the promise of internet-enabled prosperity. The provincial government decided to invest in a network that would connect government and public buildings (such as schools and medical facilities) across the province. The need for public sector connectivity was combined with the need for rural internet access, and the idea was that last-mile ISPs would be able to plug into the SuperNet as a middle-mile network to reach towns and villages across the province. Economic development would be extended beyond the cities, bridging the digital divide. In those heady days, there was talk of luring Silicon Valley businesses, like Microsoft or Cisco, to rural Alberta. Entrepreneurs and knowledge workers would set up shop in small towns, rural patients could be diagnosed through telehealth, and university lectures could be beamed into remote schools.

The 2000s followed a decade of telecom liberalization and provincial privatization, including privatization of telecom assets (AGT), so the last thing the provincial government wanted was a publicly-owned network. Science and Technology Minister Lorne Taylor (credited with leading the SuperNet’s development) made clear that running telecom networks was the business of private industry, not government. The CTO of Alberta Innovation and Science emphasized that it was definitely not a government network. Government wasn’t going to build it, wouldn’t own it, and wouldn’t manage it. The private sector would be unleashed and competition would take care of the rest. All government had to do was throw in $200 million and set the terms of the deal.

As Nadine Kozak writes, the SuperNet was a contract, and not public policy. The contract was signed without public input or legislative debate. Citizens would be consumers of the network, and didn’t need to know the details of the deal, which was complicated and confidential. The contract would have to be renegotiated after construction fell behind and private sector partners Bell and Axia had a legal fight about not living up to their respective terms. The network was eventually completed without fanfare in 2005, with Bell eating the additional costs of the delay. Following another renegotiation of the contract in 2005, Axia would run the SuperNet for thirteen years (including the three-year extension granted in 2013), and the government would have the option of assuming ownership of the rural network after thirty.

Public infrastructure in many rural communities did receive a considerable boost in connectivity thanks to SuperNet, but the province never did become Silicon Valley North, and the last mile of the network only extended to public sector clients. It was imagined that private ISPs would connect to the network and compete with each other over the last mile for residential and business customers (see below), but in much of rural Alberta this never happened. Local incumbent TELUS preferred to use its own network, even choosing to (over)build additional facilities in places where it would have been cheaper to use SuperNet.

Meanwhile, government responsibility for the network shifted or split between departments through successive reorganizations. In 2010, Premier Redford stated, “We haven’t focused on it as a priority … (It) seems to have been more of a problem between government departments not wanting to take ownership, or not knowing exactly who’s the leader”. For those who don’t have to deal with it directly, SuperNet is just another piece of the invisible infrastructure that keeps our world running, and today, most Albertans have never heard of it.

Cybera CEO Robin Winsor shows the CRTC a piece of the SuperNet – Nov. 24, 2014

There are also some people who know about SuperNet, but don’t have entirely positive things to say about it. Robin Winsor, head of Cybera, stated that “although many good things have come from the build of the SuperNet, its capacity has been vastly under-realized and under-utilized”. Axia, the company that operates the network, has long worked to counter widespread “misconceptions” about the SuperNet, like the “myth” that the network is expensive and difficult to access. Axia has often ended up as the face of the network and the target of many of these complaints, even though in many cases the faults lie in the design and execution of the SuperNet contract, for which provincial governments have been ultimately responsible.

Axia is a remarkable company in the Canadian telecom industry, and the SuperNet contract was key to making it what it is today. Axia has since promoted or developed similar open-access fibre networks in several countries, but seems to have recently re-focused on Alberta. When it comes to the SuperNet, its prime responsibility has been to run the network (as Axia SuperNet Ltd.). In this capacity, Axia serves public sector clients, and acts as an “operator-of-operators” for ISPs wishing to connect to SuperNet for backhaul. In line with the principles of running an open-access network, Axia is not supposed to compete with the last-mile ISPs, or offer internet access to residential and business clients through SuperNet. Axia has also helped produce lots of promotional content over the years about the SuperNet’s accomplishments and the “unlimited possibilities” offered by this totally amazing network.

On the other hand, Axia’s actions indicate that the company clearly recognizes the limitations of SuperNet, and has worked to address these through Axia Connect Ltd., a separate business endeavour from Axia SuperNet Ltd. (see this recent CRTC appearance by CEO Art Price on the distinction). What Axia SuperNet Ltd. cannot legally do (act as a last-mile ISP), Axia Connect can and does. Whereas Axia SuperNet Ltd. does not compete with private industry in the last mile, Axia Connect has been putting many millions of dollars into last-mile connections, focusing its efforts on deploying FTTP to parts of Alberta hitherto neglected by incumbents. In the process, Axia is helping to close the digital divide in a way that the SuperNet could not, but it is also competing with other approaches to the same problem, such as those currently being pursued through the Calgary Regional Partnership.

The distinction between Axia SuperNet and Axia Connect has kept the company compliant with the terms of the SuperNet contract, but claiming that Axia Connect’s FTTP deployments are “made possible by having access to the SuperNet” doesn’t help the public draw this distinction. Axia’s brand in Alberta is intimately linked to SuperNet, and for the first time, we are forced to consider what a decoupling might look like. This is because the SuperNet contract is once again up for renewal, except this time, Axia is not being granted a simple extension. Even if the company successfully wins the contract for the next term, the government seems to be looking at a “new vision” for the deal.

In short, the situation in Alberta is as follows: The SuperNet is legacy infrastructure, largely built or acquired from existing fibre assets in the early 2000s, and for now it should still be a valuable network with a lot of potential. Observers from other parts of Canada have sometimes looked at it with envy, but the project’s history has been troubled, and SuperNet has only achieved part of its original vision. The existing (and “increasingly-out-of-date”) contract expires in June 2018, with a decision on SuperNet 2.0 expected soon, and Axia, Bell, TELUS, and Zayo competing for the contract. Will a traditional incumbent become the government’s private sector partner? How messy would a transfer of responsibilities from Axia be, should the company lose the bid? If Axia wins, how will the deal be restructured to address the shortcomings of SuperNet 1.0? These are the big questions right now.

Meanwhile, broadband is a hot topic in rural Alberta, with active regional discussions, like an upcoming Digital Futures Symposium in Cochrane, the related Alberta Broadband Toolkit, municipal collaboration through the Calgary Regional Partnership, and broadband studies being carried out by the REDAs. TELUS has also been active with fibre upgrades, and there is a “land grab” underway as rural communities examine competing models of connectivity and decide how best to meet their needs.  Some communities are trying to convince Axia Connect to build them a local network (by demonstrating there are enough interested subscribers), while others are collaborating on a middle-mile backhaul option (skipping the SuperNet), or considering investing in a publicly-owned last-mile network (usually a choice between dark fibre, lit fibre, and wireless). It’s hardly a broadband gold rush out there in rural Alberta, but this is the most exciting I’ve seen it since I started paying attention several years ago.

Lots of dimensions here left to cover, and new developments expected. More Alberta explorations and updates to follow!




Universal Broadband


Should all Canadians have access to broadband? The answer these days is almost invariably yes, but the more specific questions that follow are: How do we connect those without access (whose responsibility is it, who should pay for it), and what counts as broadband anyway?

The latter question results in different definitions or ‘targets’ for connectivity, most often expressed as upload/download speeds, which can be mandated (hard targets) or ‘aspirational’ (soft targets). These targets often lag behind how people actually use the internet, presuming some ‘basic’ form of connectivity that doesn’t involve streaming media or uploads. The CRTC just revised such a target, from 2011’s measly 1 Mbps up and 5 Mbps down, to ten times that (10 Mbps up and 50 Mbps down), under the rationale that this level of connectivity is currently vital for Canadians. This is also presented as a forward-looking approach for a gigabit world, since the CRTC asserts that “the network infrastructure capable of providing those speeds is generally scalable, meaning that it can support download and upload speeds of up to 1 Gbps“.
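
The gap between the old and new targets is easy to feel in everyday terms. Here’s a rough sketch (idealized line rates, ignoring protocol overhead; the 1 GB file size is just an example I chose):

```python
def download_seconds(file_gb: float, speed_mbps: float) -> float:
    """Ideal transfer time for a file, ignoring protocol overhead."""
    bits = file_gb * 8e9          # decimal gigabytes -> bits
    return bits / (speed_mbps * 1e6)

# Old 2011 download target (5 Mbps), new target (50 Mbps), and a gigabit future.
for speed in (5, 50, 1000):
    minutes = download_seconds(1.0, speed) / 60
    print(f"{speed:>4} Mbps: a 1 GB download takes {minutes:.1f} minutes")
```

At the old 5 Mbps target, a 1 GB download ties up the connection for over twenty-five minutes; at 50 Mbps it takes under three, which is part of why the old target looks ‘measly’ against how people actually use the internet today.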

The CRTC’s revised broadband target was the result of the basic service hearings (see previous post), which also led to a number of other decisions within a new regulatory policy (2016-496). These include forthcoming targets for latency, jitter, and packet loss, a new funding mechanism for extending broadband networks, and accessibility requirements for Canadians with disabilities. But while the specifics of these policies are important, the broader shift that has taken place was signaled by Chairman Blais’ decision to interrupt the hearings with a statement about just how vital broadband has become for Canadian “economic, social, democratic and cultural success”. This sentiment is echoed in the newly-written policy — Canadians require broadband to participate in society, even if this society tends to be characterized as a “digital economy”, with “social, democratic and cultural” dimensions getting less emphasis. Still, around twenty years after the arrival of the public (commercial) internet in Canada, the CRTC has finally declared that broadband is a vital need for all, and not some optional luxury.
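
Speed is only one of the quality dimensions the new policy contemplates; the forthcoming latency, jitter, and packet-loss targets concern metrics like the ones sketched below. This is a minimal illustration rather than the CRTC’s measurement methodology — the probe values are invented, and jitter is computed here as the mean absolute difference between consecutive round-trips (one common convention):

```python
def summarize_probes(rtts_ms):
    """Summarize latency, jitter, and packet loss from ping-style probes.

    rtts_ms: round-trip times in milliseconds; None marks a lost packet.
    Jitter is taken as the mean absolute difference between consecutive
    successful round-trips.
    """
    received = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
    latency = sum(received) / len(received)
    diffs = [abs(b - a) for a, b in zip(received, received[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return latency, jitter, loss_pct

# Ten invented probes; one timed out (None).
samples = [24.1, 25.3, 23.8, None, 26.0, 24.4, 25.1, 23.9, 24.7, 25.5]
latency, jitter, loss = summarize_probes(samples)
print(f"latency {latency:.1f} ms, jitter {jitter:.2f} ms, loss {loss:.0f}%")
```

Targets along these lines matter because a connection can hit its advertised speed while still being unusable for video calls or gaming if latency, jitter, or loss are poor.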

All of this has happened in the same regulatory policy that signals a movement away from what was once considered a vital need for society — universal telephone access. In today’s world, differentiating digital networks from POTS (plain old telephone service) is increasingly pointless, but the CRTC’s decision works to “shift the focus of its regulatory frameworks from wireline voice services to broadband Internet access services“, creating a new “universal service objective” for broadband. 

Universal telephone service was a great twentieth-century achievement in Canada, although there is some controversy among telecom policy folks over whether this resulted from regulation or from the initiative of private industry. Positions on the matter seem to depend on whether one wants to credit industry or public policy, because for nearly all of the twentieth century (particularly since 1905) the two are hard to distinguish. Whether it was formalized or not, universal service (achieved by using urban networks to subsidize rural ones) was a key pillar of the monopoly era. Once the telephone ceased to be a luxury good, telephone companies were expected to honor the principle of universalism, and extending twisted copper to every home became part of the great nation-building project. However, the internet arrived at the close of the monopoly era, and the old telephone network was inadequate for what we would consider to be broadband today. As with telephony, internet access was initially seen as a luxury. Now that it is basic and vital, the existence of populations without access to broadband is a problem that cannot be ignored.

As I predicted in my previous post, the CRTC had to act in a way that at least appeared significant on this issue, but was unlikely to carve out a new leadership role for itself. Indeed, the Commission used its new policy and a related government consultation to once again urge the creation of a new digital strategy, and avoided getting involved in some major connectivity challenges that traditionally have not been its concern. Specifically, it was good to hear the CRTC acknowledge that access is not simply a matter of infrastructure, since there are many people in Canada who have ‘access’ to broadband, but do not use it effectively because they cannot afford to or do not know how. But on the question of affordability, the CRTC stated that it doesn’t set retail prices, and instead works to promote competition (namely, through regulating wholesale access). There are some other organizations (including Rogers, TELUS, and not-for-profits) helping provide access to low-income populations, and the CRTC “does not want to take regulatory action that would inadvertently hinder the development of further private and public sector initiatives”. Similarly, while digital literacy is an acknowledged “gap”, addressing it “is not within the Commission’s core mandate. Multiple stakeholders are involved in the digital literacy domain, and additional coordination among these stakeholders is necessary to address this gap.”

And so, we have a new universal service objective for broadband in Canada, and we will soon have a new pot of money that can be awarded to companies to work towards it; but on the bigger issues of connectivity and digital policy, we are still waiting for coherence.