Digital Futures in Alberta

This post will offer some reflections on the Digital Futures Symposium on broadband, held March 16 & 17 in Cochrane, Alberta, and updates on Alberta’s SuperNet.

I attended the first Digital Futures Symposium in Calgary in 2013, which turned out to be a great opportunity to learn about topics that I was becoming very interested in, like the SuperNet and the work that was underway to turn Olds into a gigabit community. At the end of that event, it was evident that there was a lot of frustration in rural Alberta over inadequate connectivity, but not much was being done to address it. The organizers (academics with the Van Horne Institute) and some of the participants expressed a desire to keep working on these issues through some sort of ongoing collaboration. While I had my doubts about what this would produce, three and a half years later Digital Futures has become more relevant and useful than ever, and it is now just one of numerous efforts around the province to collaborate on rural broadband.

The group of academics organizing the Symposium has seen some change of personnel (one of the original Professors from Van Horne is now CRTC Commissioner Linda Vennard, who visited and participated in that capacity), and each of the meetings sees new faces coming with their own local concerns and questions. This latest Digital Futures was attended by some of the actors that were notably missing in 2013 — TELUS was one of the sponsors, and was there to make clear that it was very interested in working to meet the needs of local communities (some people were recently doubting the incumbent’s interest in rural Alberta). Axia, an original sponsor, also came with a substantial delegation.

Digital Futures hosts an interesting mix of municipal and regional leaders, and now also gets more attention from provincial government (at the federal level, there was also an ISED policy presentation). For me, the most interesting presentation was by Stephen Bull — Service Alberta’s Assistant Deputy Minister for the SuperNet Secretariat. By the sounds of it, Bull has made a significant impression in his first year on the job, and at last week’s Digital Futures he provided some important statements about SuperNet, at a time when the future of the network is at an important juncture (see previous post).

photo credit: @barbcarra

As previously mentioned, people in charge of SuperNet tend to spend a lot of time countering misconceptions about it, and so Stephen Bull’s presentation was organized around a series of SuperNet “myths” (a very different set than those addressed by Axia). Here are some of the most interesting bits from the presentation about where things currently stand:

-The SuperNet contract will be decided before the end of the summer. Axia, TELUS and Bell are pre-approved to bid on the RFP, but it sounds like the Government of Alberta (GoA) is still figuring out what it wants. A key question is what role different actors are going to play (local champions, different levels of government, the “ISP community”). The Premier has had one briefing on the issue, but asked for a second one — so this file has her attention, and things seem pretty wide open.

-What does the GoA think about SuperNet as it currently exists? Well, according to Stephen Bull, the primary rationale for the network (connecting public facilities) was achieved, but last-mile connections for rural properties are a big outstanding issue. Service Alberta counts 36 ISPs in Alberta (others at the Symposium counted 38-40), but that doesn’t mean there is last-mile competition across the province, and we should be realistic about what market forces can achieve (“Myth #6: The private sector will solve this issue”). SuperNet 2.0 seems like it will continue to have the goal of improving connectivity beyond public institutions, but Service Alberta seems aware that this has been a key weakness of SuperNet 1.0, and wants to improve how ISPs (including new, community-owned ISPs) connect.

-Stephen Bull didn’t have very positive things to say about the existing SuperNet contract, and provided some fascinating background about how Bell and the GoA’s interests were negotiated in 2005. The result was a poorly-written contract that’s open to interpretation, provides few enforcement options for the government, and isn’t clear on the roles of the different parties. Presumably, even if the fundamentals of the relationship stay the same (with Axia maintaining its role as operator), these issues will be cleared up.

-One big question is just what the current state of the network is, and a government audit is underway to figure this out. In 2035 the GoA has the option to buy the Extended Area Network (currently run by Axia) for $1, but what exactly would they be buying? It wouldn’t be a singular network, because a lot of it is composed of leased fibre lines. Also, there are old electronics in need of repair (maintenance costs, previously covered by Bell, are actually a big part of the reason for revisiting the contract).

Some of the rural SuperNet infrastructure that the GoA will have the opportunity to buy for $1 in 2035

-The advice for communities is to “think very carefully before entering into any long-term agreement with an ISP before the future of SuperNet is known”. The worst-case scenario is a well-connected community that goes dark because something happens to the ISP (the advice being to include an option for a community to buy the infrastructure if an ISP leaves a jurisdiction).

-Finally, Stephen Bull expressed his perceived need for a provincial broadband strategy, which in his view would require finding a Ministry with the funding, capacity and will to do it (this is not the Service Alberta mandate). If no one is willing to take the lead on this at the GoA, folks at the Symposium wondered if we could produce something more bottom-up, and get the GoA’s blessing. Some of this is now being coordinated through the Van Horne Institute, and will be the next step for some Digital Futures participants.

Big-picture takeaways from the Cochrane Symposium:

In the two years or so since I attended Digital Futures 2015, broadband issues have exploded across rural Alberta. This hasn’t been uniform by any means (things are moving faster in the South than in the North), but whereas a couple of years ago it was tough to get many local governments to take the issue seriously, now councils have generally recognized how vital broadband is, and many are trying to improve connectivity. They’re working in partnerships with each other or making their own industry deals, principally with Axia. For a lot of regions and communities doing this, an early step is to get a consultant to tell them what the options are, and from the sounds of it Craig Dobson from Taylor Warwick has been winning the contracts for most of this work. A lot of rural communities are still at this research stage, but the hurdle of convincing rural governments that the internet matters has mostly been overcome.

What’s striking is the diversity of approaches to connectivity that are being discussed, although many of these exist only in potential. To paraphrase Lloyd Kearl (from Cardston County and AlbertaSW), public solutions take time: you have to engage with citizens and various political and commercial organizations. Private industry can move quickly, and indeed TELUS and Axia have been busy putting fibre in the ground, while public bodies deliberate taking a more active role in providing connectivity (the story of Olds is ever-present in these deliberations). In the next few years, we will see what these alternate approaches to connectivity in rural Alberta will amount to. In the short term, the big question is still what will happen with SuperNet 2.0…


Alberta’s SuperNet

Alberta is home to a remarkable fibre-optic network called the SuperNet, and the provincial government is about to decide what to do with it. This post will briefly summarize how this situation came to be, and what’s at stake in the forthcoming decision about “SuperNet 2.0“.

Just a slice of the SuperNet

At the end of the 1990s, Alberta was riding high on oil revenues and the promise of internet-enabled prosperity. The provincial government decided to invest in a network that would connect government and public buildings (such as schools and medical facilities) across the province. The need for public sector connectivity was combined with the need for rural internet access, and the idea was that last-mile ISPs would be able to plug into the SuperNet as a middle-mile network to reach towns and villages across the province. Economic development would be extended beyond the cities, bridging the digital divide. In those heady days, there was talk of luring Silicon Valley businesses, like Microsoft or Cisco, to rural Alberta. Entrepreneurs and knowledge workers would set up shop in small towns, rural patients could be diagnosed through telehealth, and university lectures could be beamed into remote schools.

The 2000s followed a decade of telecom liberalization and provincial privatization, including privatization of telecom assets (AGT), so the last thing the provincial government wanted was a publicly-owned network. Science and Technology Minister Lorne Taylor (credited with leading the SuperNet’s development) made clear that running telecom networks was the business of private industry, not government. The CTO of Alberta Innovation and Science emphasized that it was definitely not a government network. Government wasn’t going to build it, wouldn’t own it, and wouldn’t manage it. The private sector would be unleashed and competition would take care of the rest. All government had to do was throw in $200 million and set the terms of the deal.

As Nadine Kozak writes, the SuperNet was a contract, and not public policy. The contract was signed without public input or legislative debate. Citizens would be consumers of the network, and didn’t need to know the details of the deal, which was complicated and confidential. The contract had to be renegotiated after construction fell behind and private sector partners Bell and Axia fought a legal battle over whether each had lived up to its terms. The network was eventually completed without fanfare in 2005, with Bell eating the additional costs of the delay. Following another renegotiation of the contract in 2005, Axia would run the SuperNet for thirteen years (including the three-year extension granted in 2013), and the government would have the option of assuming ownership of the rural network after thirty.

Public infrastructure in many rural communities did receive a considerable boost in connectivity thanks to SuperNet, but the province never did become Silicon Valley North, and the last mile of the network only extended to public sector clients. It was imagined that private ISPs would connect to the network and compete with each other over the last mile for residential and business customers (see below), but in much of rural Alberta this never happened. Local incumbent TELUS preferred to use its own network, even choosing to (over)build additional facilities in places where it would have been cheaper to use SuperNet.

Meanwhile, government responsibility for the network shifted or split between departments through successive reorganizations. In 2010, Premier Redford stated, “We haven’t focused on it as a priority … (It) seems to have been more of a problem between government departments not wanting to take ownership, or not knowing exactly who’s the leader”. For those who don’t have to deal with it directly, SuperNet is just another piece of the invisible infrastructure that keeps our world running, and today, most Albertans have never heard of it.

Cybera CEO Robin Winsor shows the CRTC a piece of the SuperNet – Nov. 24, 2014

There are also some people who know about SuperNet, but don’t have entirely positive things to say about it. Robin Winsor, head of Cybera, stated that “although many good things have come from the build of the SuperNet, its capacity has been vastly under-realized and under-utilized“. Axia, the company that operates the network, has long worked to counter widespread “misconceptions” about the SuperNet, like the “myth” that the network is expensive and difficult to access. Axia has often ended up as the face of the network and the target of many of these complaints, even though in many cases the faults lie in the design and execution of the SuperNet contract, for which provincial governments have been ultimately responsible.

Axia is a remarkable company in the Canadian telecom industry, and the SuperNet contract was key to making it what it is today. Axia has since promoted or developed similar open-access fibre networks in several countries, but seems to have recently re-focused on Alberta. When it comes to the SuperNet, its prime responsibility has been to run the network (as Axia SuperNet Ltd.). In this capacity, Axia serves public sector clients, and acts as an “operator-of-operators” for ISPs wishing to connect to SuperNet for backhaul. In line with the principles of running an open-access network, Axia is not supposed to compete with the last-mile ISPs, or offer internet access to residential and business clients through SuperNet. Axia has also helped produce lots of promotional content over the years about the SuperNet’s accomplishments and the “unlimited possibilities” offered by this totally amazing network.

On the other hand, Axia’s actions indicate that the company clearly recognizes the limitations of SuperNet, and has worked to address these through Axia Connect Ltd., a separate business endeavour from Axia SuperNet Ltd. (see this recent CRTC appearance by CEO Art Price on the distinction). What Axia SuperNet Ltd. cannot legally do (act as a last-mile ISP), Axia Connect can and does. Whereas Axia SuperNet Ltd. does not compete with private industry in the last mile, Axia Connect has been putting many millions of dollars into last-mile connections, focusing its efforts on deploying FTTP to parts of Alberta hitherto neglected by incumbents. In the process, Axia is helping resolve the digital divide in a way that the SuperNet could not, but it is also competing with other approaches to the same problem, such as those currently being pursued through the Calgary Regional Partnership.

The distinction between Axia SuperNet and Axia Connect has kept the company compliant with the terms of the SuperNet contract, but claiming that Axia Connect’s FTTP deployments are “made possible by having access to the SuperNet” doesn’t help the public draw this distinction. Axia’s brand in Alberta is intimately linked to SuperNet, and for the first time, we are forced to consider what a decoupling might look like. This is because the SuperNet contract is once again up for renewal, except this time, Axia is not being granted a simple extension. Even if the company successfully wins the contract for the next term, the government seems to be looking at a “new vision” for the deal.

In short, the situation in Alberta is as follows: The SuperNet is legacy infrastructure, largely built or acquired from existing fibre assets in the early 2000s, and for now it should still be a valuable network with a lot of potential. Observers from other parts of Canada have sometimes looked at it with envy, but the project’s history has been troubled, and SuperNet has only achieved part of its original vision. The existing (and “increasingly-out-of-date“) contract expires in June 2018, with a decision on SuperNet 2.0 expected soon, and Axia, Bell, TELUS, and Zayo competing for the contract. Will a traditional incumbent become the government’s private sector partner? How messy would a transfer of responsibilities from Axia be, should the company lose the bid? If Axia wins, how will the deal be restructured to address the shortcomings of SuperNet 1.0? These are the big questions right now.

Meanwhile, broadband is a hot topic in rural Alberta, with active regional discussions, like an upcoming Digital Futures Symposium in Cochrane, the related Alberta Broadband Toolkit, municipal collaboration through the Calgary Regional Partnership, and broadband studies being carried out by the REDAs. TELUS has also been active with fibre upgrades, and there is a “land grab” underway as rural communities examine competing models of connectivity and decide how best to meet their needs.  Some communities are trying to convince Axia Connect to build them a local network (by demonstrating there are enough interested subscribers), while others are collaborating on a middle-mile backhaul option (skipping the SuperNet), or considering investing in a publicly-owned last-mile network (usually a choice between dark fibre, lit fibre, and wireless). It’s hardly a broadband gold rush out there in rural Alberta, but this is the most exciting I’ve seen it since I started paying attention several years ago.

Lots of dimensions here left to cover, and new developments expected. More Alberta explorations and updates to follow!




Universal Broadband


Should all Canadians have access to broadband? The answer these days is almost invariably yes, but the more specific questions that follow are: How do we connect those without access (whose responsibility is it, who should pay for it), and what counts as broadband anyway?

The latter question results in different definitions or ‘targets’ for connectivity, most often expressed as upload/download speeds, which can be mandated (hard targets) or ‘aspirational’ (soft targets). These targets often lag behind how people actually use the internet, presuming some ‘basic’ form of connectivity that doesn’t involve streaming media or uploads. The CRTC just revised such a target, from 2011’s measly 1 Mbps up and 5 Mbps down, to ten times that (10 & 50 Mbps), under the rationale that this level of connectivity is currently vital for Canadians. This is also presented as a forward-looking approach for a gigabit world, since the CRTC asserts that “the network infrastructure capable of providing those speeds is generally scalable, meaning that it can support download and upload speeds of up to 1 Gbps“.

The CRTC’s revised broadband target was the result of the basic service hearings (see previous post), which also led to a number of other decisions within a new regulatory policy (2016-496). These include forthcoming targets for latency, jitter, and packet loss, a new funding mechanism for extending broadband networks, and accessibility requirements for Canadians with disabilities. But while the specifics of these policies are important, the broader shift that has taken place was signaled by Chairman Blais’ decision to interrupt the hearings with a statement about just how vital broadband has become for Canadian “economic, social, democratic and cultural success”. This sentiment is echoed in the newly-written policy — Canadians require broadband to participate in society, even if this society tends to be characterized as a “digital economy”, with “social, democratic and cultural” dimensions getting less emphasis. Still, around twenty years after the arrival of the public (commercial) internet in Canada, the CRTC has finally declared that broadband is a vital need for all, and not some optional luxury.

All of this has happened in the same regulatory policy that signals a movement away from what was once considered a vital need for society — universal telephone access. In today’s world, differentiating digital networks from POTS (plain old telephone service) is increasingly pointless, but the CRTC’s decision works to “shift the focus of its regulatory frameworks from wireline voice services to broadband Internet access services“, creating a new “universal service objective” for broadband. 

Universal telephone service was a great twentieth-century achievement in Canada, although there seems to be some controversy among telecom policy folks over whether this resulted from regulation or the initiative of private industry. Positions on the matter seem to depend on whether one wants to credit industry or public policy, because for nearly all of the twentieth century (particularly since 1905) the two are hard to distinguish. Whether it was formalized or not, universal service (achieved by using urban networks to subsidize rural ones) was a key pillar of the monopoly era. Once the telephone ceased to be a luxury good, telephone companies were expected to honor the principle of universalism, and extending twisted copper to every home became part of the great nation-building project. However, the internet arrived at the close of the monopoly era, and the old telephone network was inadequate for what we would consider to be broadband today. As with telephony, internet access was initially seen as a luxury. Now that it is basic and vital, the existence of populations without access to broadband is a problem that cannot be ignored.

As I predicted in my previous post, the CRTC had to act in a way that at least appeared significant on this issue, but was unlikely to carve out a new leadership role for itself. Indeed, the Commission used its new policy and a related government consultation to once again urge the creation of a new digital strategy, and avoided getting involved in some major connectivity challenges that traditionally have not been its concern. Specifically, it was good to hear the CRTC acknowledge that access is not simply a matter of infrastructure, since there are many people in Canada who have ‘access’ to broadband, but do not use it effectively because they cannot afford to or do not know how. But on the question of affordability, the CRTC stated that it doesn’t set retail prices, and instead works to promote competition (namely, through regulating wholesale access). There are some other organizations (including Rogers, TELUS, and not-for-profits) helping provide access to low-income populations, and the CRTC “does not want to take regulatory action that would inadvertently hinder the development of further private and public sector initiatives“. Similarly, while digital literacy is an acknowledged “gap”, addressing it “is not within the Commission’s core mandate. Multiple stakeholders are involved in the digital literacy domain, and additional coordination among these stakeholders is necessary to address this gap.”

And so, we have a new universal service objective for broadband in Canada, we will soon have a new pot of money that can be awarded to companies to work towards it, but on the bigger issues of connectivity and digital policy, we are still waiting for coherence.


Lawful Access Consultation 2016

Another federal government consultation has recently wrapped up, this time with Public Safety asking about national security. Like other ongoing consultations, this one was criticized (for example, by Christopher Parsons and Tamir Israel) as framing the policy issue in a way that the government prefers, and trying to legitimate some ideas that should have been discredited by now. I would say that the consultation framed the issue very much as Public Safety (for instance, the RCMP) would prefer, repeating old rationales, and seeing the world from a perspective where the ability to exercise sovereign will over information flows is paramount. The Green Paper provided as background reading foregrounds the concerns of law enforcement and security agencies, is peppered with the words “must” and “should”, and advances some dubious assumptions. Public Safety asked for feedback on terrorism-related provisions (including C-51), oversight, intelligence as evidence, and lawful access. The last of these has seen a number of previous consultations, but is back in the news as police make their case for the issue of “going dark” (which has become part of the RCMP’s “new public narrative” for a set of concerns that were once broadly talked about as lawful access).

I let this one get away from me, so I didn’t have anything ready for Dec. 15 when the online submission closed. Regardless, I’ve decided to complete most of the questions related to the topic of Investigative Capabilities in a Digital World as a blog post. I don’t feel particularly bad for missing the deadline, since several of these questions border on the ridiculous. For a true public consultation on what has long been a very contentious issue, it would be important for the questions to be informed by the arguments on both sides. Privacy experts would have asked very different questions about privacy and state power, and on a number of topics Public Safety seems to be trying to avoid mentioning the specific policies that are at stake here.

How can the Government address challenges to law enforcement and national security investigations posed by the evolving technological landscape in a manner that is consistent with Canadian values, including respect for privacy, provision of security and the protection of economic interests?

When I think of Canadian values, “privacy, provision of security and the protection of economic interests” are not what come to mind. When I ask my students what they associate with Canada, these particular values have never come up in an answer. I think we should consider democracy as a fundamental value, and understand that state secrecy is antithetical to democracy. When it comes to the relationship between citizens and the state, Canadian values are enshrined in the Charter, and the Supreme Court is ultimately responsible for interpreting what is consistent with the Charter. Therefore, Canadians deserve to understand what is being done in their name if we are to have a meaningful democracy, and this includes the existence of an informed, independent judiciary to decide what government actions are consistent with Canadian values.

In the physical world, if the police obtain a search warrant from a judge to enter your home to conduct an investigation, they are authorized to access your home. Should investigative agencies operate any differently in the digital world?

If we accept the digital/physical distinction, the answer is a definite yes — investigations carried out today operate differently than they did in the simpler, more “physical” 1980s. But it is important to keep in mind that analogies between the digital and physical environment can be misleading and dangerous. When it comes to the “digital world”, I prefer to talk about it in digital terms. The stakes are different, as are the meaning of terms like “to enter”. If we must make these comparisons, here is what treating these two “worlds” as analogous would mean:
The police can enter my home with authorization, and seize my computer with authorization. I am not required to make my computer insecure enough for the police to easily access, just as I am not required to keep my home insecure enough for the police to easily access. I am not required to help the police with a search of my home, and so I should not be required to help police search my computer. If I have a safe with a combination lock in my home, I cannot be compelled by police to divulge the combination, so by analogy, I should not be compelled to divulge a password for an encrypted disk.

But analogies can only take us so far. A computer is not a home. Metadata is not like the address on a physical envelope. We need to understand digital information in its own terms. To that end, some of the more specific questions found further in this consultation can produce more helpful answers. Before we get to these however, this consultation requires me to answer a couple more questions based on the presumption of digital dualism.

This question is hard to answer without knowing what it means to “update these tools”, and seems to be intended to produce a “yes” response to a vague statement. Once again, digital/physical comparisons confuse more than they clarify — these are not separate worlds when we are talking about production orders and mandating the installation of hardware. We can talk about these topics in their own terms, and take up these topics one at a time (see further below).

If we could only get at the bad guys in the digital world, but there's all this code in the way!

Is your expectation of privacy different in the digital world than in the physical world?

My answer to this question has to be both yes and no.

No, because I fundamentally reject the notion that these are separate worlds. I do not somehow enter the “digital world” when I check my phone messages, or when I interact with the many digitally-networked physical devices that are part of my lived reality. Privacy law should not be based on trying to find a digital equivalent for the trunk of a car, because no such thing exists.

Yes, expectations of privacy differ when it comes to “informational privacy” (the language of Spencer), because the privacy implications of digital information need to be considered in their own terms. Governments and public servants do Canadians a disservice with phonebook analogies, license plate analogies, or when they hold up envelopes to explain how unconcerned we should be about government access to metadata (all recurring arguments in the surveillance/privacy debate). In many cases, the privacy implications of access to digital information are much more significant than anything we could imagine in a world without digital networks and databases of our digital records.

Basic Subscriber Information (BSI)


As the Green Paper states, nothing in the Spencer decision prevents access to BSI in emergencies, so throwing exigent circumstances into the question confuses the issue, and once again seems designed to elicit a particular response that would be favorable to police and security agencies. In the other examples, “timely and efficient” is the problem. Agencies understandably want quicker and easier access to personal information. The Spencer decision has made this access more difficult, but any new law would still ultimately have to contend with Spencer. Government, police, and security agencies seem to be in a state of denial over this, but barring another Supreme Court decision there is no going back to a world where the disclosure of “basic” metadata avoids section 8 of the Charter, or where private companies can voluntarily hand over various kinds of personal information to police without fear of liability.

If the process of getting a court order is more onerous than police would like, because it would be easier to carry out preliminary investigations under a lesser standard, it is not the job of government to find ways to circumvent the courts. If the process takes too long, there are ways to grant the police or the courts more resources to make it more efficient.

There are ways to improve the ability of police to access metadata without violating the Charter, but any changes to the existing disclosure regime need to be accompanied by robust accountability mechanisms. Previous lawful access legislation (Bill C-30) was flawed, but it at least included such accountability measures. In their absence, we only know that in a pre-Spencer world, police and government agencies sought access to Canadian personal information well over a million times a year without a court order, and that a single court order can lead to the secret disclosure of personal information about thousands of Canadians. Police and security agencies have consistently advocated for these powers, but failed to document and disclose how they actually use them. This needs to change, and the fear of disclosing investigative techniques cannot be used to prevent an informed discussion about the appropriateness of these techniques in a democratic society.
Do you consider your basic identifying information identified through BSI (such as name, home address, phone number and email address) to be as private as the contents of your emails? your personal diary? your financial records? your medical records? Why or why not? 
The answer to this question depends on an exhaustive list of what counts as BSI. A clear definition is important, because without one we might end up back in the pre-Spencer position, where police could gain warrantless access to somebody’s password using powers that were meant for “basic identifying information”.
The answer to this question also depends on an explanation of what is done with this “basic” information. As was recognized in Spencer, we can no longer consider the privacy impact of a piece of personal information in isolation. This is how lawful access advocates prefer to frame the question, but it is not how investigations work in practice. BSI is useful only in combination with other information, and if we are talking about metadata (a term that, curiously, never appears in the Green Paper), it is now increasingly understood that metadata can be far more revealing than the content of a personal communication when it is used to identify people in large datasets, determine relationships between individuals, and reveal patterns of life.
So in short, yes — I am very concerned about BSI disclosures, particularly when I don’t know what counts as BSI, and what is being done with this information.
Do you see a difference between the police having access to your name, home address and phone number, and the police having access to your Internet address, such as your IP address or email address?
I see an enormous difference. As previously discussed, these are not analogous. An IP address is not where you “live” on the internet — it is an identifier that marks interactions carried out through a specific device.

Interception Capability

This is not a question… Yes, all of this is true.
Should Canada’s laws help to ensure that consistent interception capabilities are available through domestic communications service provider networks when a court order authorizing interception is granted by the courts?
The key word here is “consistent”, and the question of what standard will be required. It would be very easy for government to impose a standard that large telecom incumbents could meet, but which would be impossible for smaller intermediaries. As things are, the incumbents handle the vast majority of court orders, so I would love to see some recent statistics on problems with ‘less consistent’ intermediaries, particularly if this is a law that might put them out of business.


I think the answer to this has to be never. People cannot be forced to divulge their passwords — in our society they can only be put in prison for very long periods of time. In other cases, assisting with decryption means forcing Apple to break through their own security (which was meant to keep even Apple out), or driving companies out of business unless they make products with weak security. This does not work in a world where a single individual can create an encryption app.
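The claim that a single individual can create an encryption app is easy to believe once you see how little code a cipher takes. As a purely illustrative sketch (not a recommendation for real use), here is the textbook one-time pad, which is provably unbreakable if the key is truly random, as long as the message, and never reused, in standard-library Python:

```python
import secrets

# One-time pad: XOR each message byte with a fresh random key byte.
# Information-theoretically secure only under strict assumptions:
# the key must be truly random, as long as the message, and used once.

def otp_encrypt(message: bytes):
    key = secrets.token_bytes(len(message))  # fresh random key
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so decryption repeats the same operation.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ct, key = otp_encrypt(b"attack at dawn")
otp_decrypt(ct, key)  # -> b'attack at dawn'
```

Real encryption apps use vetted ciphers like AES or ChaCha20 rather than one-time pads, but the underlying point stands: the mathematics is public and the code is short, so mandating weakness in commercial products cannot make strong encryption disappear.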

How can law enforcement and national security agencies reduce the effectiveness of encryption for individuals and organizations involved in crime or threats to the security of Canada, yet not limit the beneficial uses of encryption by those not involved in illegal activities?

By doing anything other than mandating insecurity for everyone. The answer cannot be to make technology insecure enough for the state to exploit, because this makes everyone insecure, except for those who use good encryption (which has become too commonplace to stamp out).


The final two questions deal with data retention, a topic I’ll leave for a later time…

Differential Pricing


The CRTC recently concluded its differential pricing (or net neutrality) hearing. If you weren’t glued to CPAC earlier this month, you can check out the transcripts while we wait on the Commission’s decision. Like any regulatory issue before the CRTC, this one has a long history. The hearings included several mentions of the Canadian Gamers Organization’s complaint against throttling of certain types of traffic associated with gaming, and the related ITMP regime that developed out of complaints that certain peer-to-peer traffic (like BitTorrent) was being throttled. These cases clearly implicated net neutrality, because they involved ISPs treating some kinds of traffic differently than others, making certain applications perform worse. The CRTC took a dim view of this sort of discrimination unless ISPs could justify its necessity. For example, blocking ‘malicious’ traffic (like DDoS) is acceptable under the ITMP regime because the reasons are deemed valid, but torrenting shouldn’t be blocked just because it is sometimes used to infringe copyright. In an alternate world of net neutrality absolutism we might have ended up with a regulatory regime under which all traffic is protected, and ISPs are legally prohibited from mitigating the sorts of DDoS attacks that have been knocking many services offline in recent years. However, most net neutrality advocates would not support such an extreme interpretation. Under the existing regulatory regime, Canadian ISPs can intervene when they can justify the need, but are not generally allowed to give some kinds of traffic preferential treatment over others.

Most recently, the question has been whether the practice of pricing certain types of traffic differently than others amounts to a similar kind of discrimination. For this, we owe a debt to Ben Klass, whose 2013 complaint (while he was an MA student in Manitoba) got the ball rolling. Klass is one of a small (but growing) number of individuals who have participated in a regulatory process that was really designed to serve the institutions that are being regulated (the ISPs). His work is a great example of how a regulatory system that depends on parties coming forward with complaints fails, when the stakeholders (ISPs) who are meant to come forward don’t want to complain, even though the issue is of public policy importance. Differential pricing is clearly an important public policy debate to have, and the CRTC has recognized as much with the recent hearings.

While an ISP may treat traffic related to Netflix, YouTube, and CraveTV the same way, if two of these services count against a subscriber’s data cap while one does not, then that is a form of differential pricing (known as zero-rating). I may be able to watch Netflix or YouTube without buffering, but if an ISP makes Netflix zero-rated, I will end up paying more at the end of the month if I watch YouTube and exceed my cap. In this example, distinctions are being made about traffic passing through these networks, and they will presumably affect the behaviour of subscribers. The ethical dimensions of these discriminations become clear in situations where ISPs start favoring services in which they have an interest, or when money starts changing hands between companies so that ISPs treat certain applications more favorably than others. Instead of blocking content, an ISP might simply make it unaffordable, with roughly the same effect.
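The billing effect described above is easy to sketch. In this hypothetical example (the cap, overage rate, and service names are all invented for illustration), the same viewing habits produce very different bills depending on which service the ISP exempts:

```python
# Hypothetical illustration of how zero-rating affects a monthly bill.
# All numbers (cap, overage rate) and service names are invented.

CAP_GB = 100
OVERAGE_PER_GB = 2.00  # dollars charged per GB over the cap

def monthly_overage(usage_gb, zero_rated=()):
    """Sum usage that counts against the cap, then price the excess."""
    counted = sum(gb for service, gb in usage_gb.items()
                  if service not in zero_rated)
    return max(0, counted - CAP_GB) * OVERAGE_PER_GB

usage = {"netflix": 60, "youtube": 60, "other": 10}

# Without zero-rating: 130 GB counted, 30 GB over the cap.
monthly_overage(usage)                           # -> 60.0

# With Netflix zero-rated: only 70 GB counted, no overage.
monthly_overage(usage, zero_rated={"netflix"})   # -> 0.0
```

In this sketch, exempting one service turns a $60 overage charge into nothing, which is exactly the kind of incentive that steers subscribers toward the favoured service.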

Many ISPs (large and small) have argued that these policies are not nefarious attempts to control subscriber behaviour, but are all about offering choice to consumers, and differentiating themselves from their competition. Some have continued to claim these discriminations are about managing network congestion (much like the old rationale for throttling BitTorrent), but this argument took a beating at the CRTC hearings and isn’t likely to be very convincing. There are good business reasons why you might want to offer customers different options, including unlimited use of a particular app. However, if an ISP is concerned about the amount of bandwidth people are using, zero-rating certain services and imposing caps on the rest seems like a silly way to address the problem.

The CRTC’s forthcoming decision has to grapple with some tough questions, and some easy ones. The use of internet pricing by vertically-integrated companies to discriminate against competing services has analogies with common carriage in the railway/telegraph era, and feels like the sort of unjust discrimination the CRTC is meant to prevent. But if we are going to accept the existence of data caps (and not everyone agrees we should) then should it be a matter of principle to subject all traffic to the cap? If we can discriminate against malware, maybe we can discriminate in favor of security updates, by zero-rating them, or zero-rate access to essential government services. Without data caps, these become non-issues, but a world without caps would have its own issues (which wireless and satellite providers are well aware of).

It’s times like these I don’t envy the regulator’s job.

Finally, we should remember that differential pricing, just like interventions against malicious traffic, presumes monitoring to accurately distinguish different applications and data usage. The ISP does not need to know exactly what subscribers are doing online, but it needs to be able to tell when subscribers are using a zero-rated service. Unless the ISP is somehow relying on the app to provide this information, this means using DPI technology to inspect and categorize traffic. For better or worse, differential pricing is part of the process of intermediation, in which ISPs play a growing and more refined role in governing our digital flows.
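To make the monitoring requirement concrete, here is a hedged sketch of how a flow classifier might tag zero-rated traffic. It assumes the classifier can read the TLS Server Name Indication (SNI), a hostname sent in the clear during the handshake in most deployments; the domain names are invented for illustration:

```python
# Minimal sketch of flow classification for zero-rating. Assumes the
# middlebox can read the TLS SNI hostname from the ClientHello, which
# is unencrypted in most current deployments. Domains are examples only.

ZERO_RATED_DOMAINS = {"netflix.com", "nflxvideo.net"}

def classify_flow(sni_hostname):
    """Label a flow 'zero-rated' or 'metered' by its SNI hostname suffix."""
    host = sni_hostname.lower().rstrip(".")
    for domain in ZERO_RATED_DOMAINS:
        if host == domain or host.endswith("." + domain):
            return "zero-rated"
    return "metered"

classify_flow("occ-0-1.1.nflxvideo.net")  # -> 'zero-rated'
classify_flow("www.youtube.com")          # -> 'metered'
```

Techniques like encrypted SNI complicate this kind of classification, which is one reason differential pricing tends to push ISPs toward deeper inspection or app-provided signals.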

Decision Time for the Future of Canadian Content

We’re still in the middle of public consultations on what seems like every domain of policy for the federal government, and that includes cultural policy. Canadian culture has a fascinating history, particularly as seen through various efforts over the years to shape, manage, and protect it. Before the Second World War, English Canada’s cultural identity was lodged firmly in the British Empire, and efforts to shape culture were targeted at groups who didn’t fit the mold, like First Nations. During and after the war, the state became involved in creating a national identity and a distinctly Canadian national culture, independent of Britain, and often in opposition to the cultural threat posed by the media industries of the United States. A variety of institutions were directed to this task, including the NFB, CBC, Canada Council for the Arts, CRTC, and the complex set of agencies that administer what we might call the Canadian content (CanCon) regime (though calling it a regime might suggest more coherence than is actually the case).

By the time the twentieth century ended, and the internet was opened to the public, efforts to actively shape Canadian culture into some prescribed form had largely been abandoned. Instead of creating a particular national identity, or telling a national narrative, the concern shifted to supporting CanCon creators and ‘telling Canadian stories’, whether those stories were Trailer Park Boys or Anne of Green Gables. However, this meant that justifications for government’s role in promoting Canadian culture often rested on fairly thin grounds — attracting or retaining cultural industries, or making sure the characters we saw in popular culture were ones we could relate to. In the 1990s, sweeping discussions of internet policy (such as the Information Highway Advisory Council reports) were still based on the assumptions of cultural protectionism — that we should find ways to promote and protect Canadian culture on the “information highway”, since international media flows were a threat to cultural sovereignty. In the end (and unsurprisingly given its composition), IHAC ended up divided on this topic, and didn’t support an ISP tax or any radical measures like a cultural firewall in its final report. Subsequent discussions of online cultural policy have been more limited, including debates over tariffs, copyright legislation, and how the CRTC should classify over-the-top services like Netflix.

Online services have largely avoided being subject to CanCon regulation, and according to Canada’s pollsters, the country’s population isn’t keen to change this (and this is especially true for people under 35). We’ve gotten quite used to our tax-free Netflix, and extending the scope of internet regulation is not an easy sell. I think many Canadians have no idea this whole world of content regulation even exists — I certainly had no clue until I started volunteering at the campus radio station, and even then it’s not like anyone sat me down to explain the purpose of this system.

Canadian culture, like telecom, is a domain of public policy that is governed by several (sometimes overlapping) federal departments. The Department of Canadian Heritage is largely responsible for culture, but the CRTC  regulates telecom and broadcasting in Canada, administering their respective acts. This puts the Commission in charge of two rather different sets of priorities for what is often the same media infrastructure, one of which includes cultural promotion and protection.

In an age when telecom meant telephones, and broadcasting meant television and radio, the two really did appear to be distinct categories, but that appearance has faded, and so there have been numerous calls to somehow revise or consolidate these mandates. As former CRTC Commissioner Denton writes, there is a contradiction between the two statutes: “The Broadcasting Act says ‘go forth and discriminate in favour of Canadian programming’. The Telecommunications Act says ‘thou shalt not discriminate among signals except for very good reason’”. One ensures that intermediaries do not give preferential treatment to content without good reason, while the other sustains the privilege of a particular class of content — Canadian content (CanCon). Integrating cultural and telecom policy would be no easy feat, and could bring about some dramatic changes depending on what we wanted to prioritize.

CanCon requirements were recently reduced and refocused for broadcasters, and these obligations have not been extended to online providers. The spectre of such a move (presented by critics as a ‘Netflix tax’) is something politicians have generally fought against in this country rather than championed. In the last federal election Stephen Harper presented himself as just a regular, Netflix-loving dude, and also the only thing standing between Canadians and higher monthly bills.


Under the Harper government, it was clear that the CRTC was powerless to go after Netflix even if it had wanted to. When the company defied the Commission in 2014, all Chairman Blais could do was act upset and offended, with Netflix effectively calling his formal authority a bluff. The Liberals have yet to fulfil the Harper prophecy of regulating Netflix, but Canadian media policy has recently been the subject of public consultation (and some quasi-public discussions), with our Heritage Minister declaring that “everything is on the table“.

In general, the government might choose to extend Canadian cultural policy online, or it might pull back the CanCon regime. We might even see a bit of both, though definitely nothing extending to some sort of cultural firewall around a sovereign Canadian internet. The Government of Canada recognizes that “The way forward is not attempting to regulate content on the Internet“, and consultations are focused on how to support the production of Canadian culture in a “digital world”, with the real questions being who will benefit from this support, and how we are going to fund it.

Personally, I’m not opposed to public funding for culture, but I also don’t see it as a requirement in many cases. I value broadcasters like the CBC for helping me understand Canadian society, primarily through news and documentary rather than dramatic or comedy series (in that respect there isn’t much of CBC TV that I would be sad to see go, while there are still a number of good radio programs). Many kinds of art are not capital-intensive, and will be produced whether or not there are government programs in place. The fact that I came up in a thriving Canadian music culture operating largely independently of copyright or cultural funding no doubt shapes my thinking in this regard (a story I might share another time). The nightmare scenario for me is not about losing out to other cultural markets, fewer jobs in Canada’s cultural industries, or artists no longer being able to sustain careers as they once could. Culture is dynamic, and the most exciting forms come from below rather than top-down. Cultural protectionism makes the most sense if we think of culture as expensive mass media, individuals as consumers of culture, and U.S. cultural industries as a threat to Canada’s cultural sovereignty. But artists will make art everywhere, some of these artists will be Canadian, and we may or may not end up with some sense of Canadian identity as a result.

I am more worried about the possibility that living in an information-rich world will also mean being ignorant about local events, and no one being rewarded for answering the sorts of questions that powerful interests in this country would rather not hear asked. Perhaps a public broadcaster can be well-resourced and independent enough to play this role, but in an ideal world this wouldn’t just be the CBC’s responsibility. I rely on journalism and related media that tell me what is happening in the world in order to actively participate in democracy. I rely on it to do my job (which often involves classroom discussions of Canadian society). I can do without other kinds of CanCon.

I’m also in support of public funding for indigenous cultural programs. For most of Canada’s history the state has tried to eradicate indigenous culture, systematically resocializing children in residential schools, banning ceremonies, and leaving behind broken communities and cultural dead-ends. The damage done by this cultural policy is hard to calculate and still ongoing. Its victims include young indigenous people who are unable to situate themselves in Canadian society because it does not speak to them, while also lacking a cultural grounding of their own, because it was extinguished in previous generations. Some of this damage is irreversible, but in many cases knowledge, practices, and culture can be recovered, preserved, and kept alive. The least a Canadian cultural policy could do is to try to address some of these wrongs and support efforts within First Nations communities to meet these cultural needs.

These are my opinions, so let the government know what you think before November 25. Otherwise, the voices that may be heard loudest are the ones that are most invested in the existing (and often declining) regime of cultural production.

On Standards

I’ve been thinking about standards and telecom, or more specifically, the process of standardization. Recently, I read an article by Timmermans and Epstein that tries to advance a “Sociology of Standards and Standardization”. As the authors explain, there is a great deal of sociology that deals with standards in different domains, or as part of other processes, such as classification, quantification, and regulation. But “relatively few scholars analyze standards directly” (p. 74) — for instance by studying standardization as a social phenomenon.

Drawing significantly from Bowker and Star, Timmermans and Epstein define standardization as “a process of constructing uniformities across time and space, through the generation of agreed-upon rules… [making] things work together over distance or heterogeneous metrics” (p. 71). Standards can coordinate “people and things in ways that would be difficult to achieve on an ad hoc basis, they may allow communication between incompatible systems, and they may create specific kinds of mobility, uniformity, precision, objectivity, universality, and calculability” (p. 83). While we often aspire to (have) standards, the authors point out how standardization typically carries negative connotations of uniformity and “dull sameness” (p. 71). And yet, we only have to look to the internet to see the vast creativity and heterogeneity that has been enabled through standardization. Diverse networks, systems and devices can communicate with one another because they agree on basic standards and protocols. If the experiences we have through the internet are trending towards uniformity and sameness, this says more about the concentration of power in certain platforms, algorithms, and service providers than about standardization.

Timmermans and Epstein’s article doesn’t discuss the internet, but scholars of internet governance have often focused on standards as the internet’s core. Inspired by Deleuze’s (1992) Postscript on the Societies of Control, Galloway’s (2004) Protocol grapples with the contradictions of internet standards being forms of power and control, while also facilitating autonomy, decentralization, and local decision-making. The protocols Galloway is interested in are the “standards governing the implementation of specific technologies” (p. 7), and he holds up DNS as the “most heroic of human projects” (p. 50). Galloway goes some way in advancing social theory on the basis of standards, painting a complex picture, but one in which the “full potential” (p. 122) of protocol is restricted, and channelled instead by law, government and corporate power. He ultimately envisions an even darker future where open-source standards and TCP/IP are replaced by something more proprietary, either under the control of states or of a corporation (which in 2004 was naturally assumed to be Microsoft).

While corporate and state power over internet policy has intensified in the intervening years, and old principles like end-to-end architecture sound increasingly idealistic, the internet’s established standards-making bodies continue their work, and often do so in the open. In general, internet standards are voluntary, and internet protocols work because networks agree to use them. The IETF acts as a key standards-making organization, where individuals (sometimes employees of rival companies) collaborate to develop new proposals for improving how the internet operates. Galloway (2004, p. 122) paints a picture of a “technocratic elite [that] toils away, mostly voluntarily, in an effort to hammer out solutions to advancements in technology”.

The technocrats of the IETF toiling away

Anyone can participate in this “technocratic elite”, and at the IETF your membership is defined by your participation. But because of the technical understanding required, membership tends to be limited to a particular social class. Meaningful participation also requires a time commitment, and so bodies like the IETF often see the greatest participation from individuals with employers who are willing to support their activities. Organizations can benefit from being part of the standards-making process, but participation in the IETF is on an individual basis (with individuals often disclosing their organizational affiliations).

The IETF produces RFCs, but has no power to compel anyone to adopt these standards. Many are ignored, or (like IPv6, which Laura DeNardis wrote about in 2009) are only slowly implemented long after the problem they are meant to solve is well-known. So the actual making of standards is just one aspect of standardization, and achieving voluntary adoption is actually a bigger challenge, particularly when things seem to work ‘good enough’ as they are.

While government agencies have relatively little impact on internet standards, they do produce various kinds of standards for service providers operating within their territory. These may be voluntary ‘best practices’ or backed by regulatory law. Sometimes companies ‘voluntarily’ standardize their conduct, under threat of government regulation. Standardization can even be pursued through criminal law, as when previous Liberal and Conservative governments in Canada tried to pass lawful access legislation, standardizing surveillance and disclosure responsibilities for intermediaries. More recently, standardization has become a frequently-suggested means of addressing cyber security, but I’ll save these topics for a subsequent post.


Canada’s Cyber Security Seeks Public Input — Here’s Mine

The Government of Canada is carrying out a public consultation on cyber security. Specifically, the consultation is being administered by Public Safety Canada’s National Cyber Security Directorate (NCSD). NCSD’s role is sometimes described as cyber policy and coordination, such as designing and implementing Canada’s Cyber Security Strategy, and the consultation asks for the public’s help in addressing some really thorny cyber security challenges.

On its face, it’s hard to know what to make of this consultation. PSC/NCSD wants to hear from “experts, academics, business leaders, and provincial, territorial and municipal governments” on the topic, but they also want “all citizens to get involved in a discussion about the security and economic dimensions of Canada’s digital future.” There are four main topics the government is consulting on, and a workbook has also been created to accompany the process. The workbook breaks the consultation down into trends, themes, and related questions for consideration, but the contents seem designed to steer answers in particular directions, and the one topic that doesn’t include any specific questions is Canada’s “way forward”, the outlines of which seem to have already been decided.

Some of the questions in the workbook are ones that I imagine Government would love an innovative answer for (How can public and private sector organizations help protect themselves from cybercrime… and what tools do they need to do so?), while others seem loaded to produce a particular response (with “example” answers provided). I only hope that the responses to this consultation won’t be quantified as statistics (since this isn’t a methodologically-sound survey), or used to support decisions that have already been made. So let’s give them the benefit of the doubt and assume that NCSD really does want some help from Canadians in dealing with one of society’s most important challenges, and they’re open to all sorts of ideas.

To that end, I’ve provided my response to the consultation’s four “topic areas”:

The Evolution of the Cyber Threat
I think a lot of this has been covered in broad strokes by Canada’s Cyber Security Strategy and related documents. The threat has certainly evolved, in terms of actors, motives, and potential harm. State actors are increasingly involved around the world, and there are dedicated industries of criminals profiting from vulnerabilities. The most interesting way that I think the cyber threat has evolved in recent years is a recognition of the Five Eyes (Canada’s alliance with the US, UK, Australia and New Zealand) as a security threat. This recognition has certainly not come from the Canadian government, or even much of the Canadian population (as we really have yet to talk about this issue). Instead, the changing nature of the threat has been expressed most publicly by the likes of Microsoft and Google, after they learned through the Snowden documents that the Five Eyes were compromising their infrastructure and the relationships of trust these companies have established with their users.

The Increasing Economic Significance of Cyber Security
I don’t consider this to be much of a topic in need of public consultation, since it seems like Public Safety is already aware that cyber security is vital to the economy. It’s hard to put a dollar value on security, but it’s pretty obvious that the value of maintaining information security and the “losses” that result from various kinds of threats are enormous. Huge numbers are estimated and cited to justify the need for cyber security,  and I’m not sure that we need more accurate numbers (since we know they’re big), or that bigger numbers will compel action. We can talk about how better to communicate the seriousness of the issue, but I’m more interested in finding perspectives other than the economic lens to talk about threats. Government ideas about the value of the internet in Canada too often lapse into talk of the “digital economy”, and harms that don’t involve children are often expressed in economic terms. As people like Ron Deibert point out, we need to think more about the democratic/political dimensions of cyber security. This means articulating the value of connectivity in a way that doesn’t translate into dollars, but instead relates to our values as Canadians (like those “rights and freedoms” mentioned at the end of the workbook).

The Expanding Frontiers of Cyber Security
While the workbook discusses this in terms of the need for “cyber security [to] evolve at the same rate as new technologies” (p. 17), I want to use this topic to discuss the expanding scope of cyber security.


The workbook defines cyber security as “the protection of digital information and the infrastructure on which it resides. Cyber security addresses the challenges and threats of cyberspace in order to secure the benefits and opportunities of digital life” (p. 5). The first part of this definition is relatively straightforward, and encompasses the domain of IT security. However, cyber security is not limited to these concerns, and Canada’s closest allies have used the language of cyber security to justify creating and preserving technological vulnerabilities in the service of strategic objectives. Meanwhile, it seems that Public Safety Canada considers “threats of cyberspace” to include more than just threats to digital information and infrastructure.

Internationally, cyber security now includes a variety of concerns, including over public order and morality. For instance, in Canada cyberbullying is sometimes listed as a cyber security threat alongside phishing and malware (particularly in Get Cyber Safe resources). Cyberbullying can certainly involve personal information being compromised, but it can also refer to the hateful and abusive comments found in many online media. The danger is that cyber security can be equated with online “safety”, which can mean safety from content that might insult, harm, or disturb.

The more concerning expansion of cyber security is as a justification for whatever actions serve national security or the priorities of state agencies. This is a worry because the goals of some state “partners” in cyber security are not to provide the public with the most secure technologies. In the US for instance, secret efforts to make commercial technologies (the same technologies widely used by Canadians) more vulnerable and less secure were justified as part of an ostensibly-defensive cyber security program (the CNCI). As discussed below, there is no reason to believe that Canadian agencies are an exception to the same tendencies demonstrated by their closest international allies in cyber security.

One of the few things that all cyber security threats have in common is that they all involve a computer, or digital networks. Since we are supposedly moving towards a world covered in networked computers, the potential for cyber security’s expansion is a major cause for concern. I feel a lot more comfortable talking about information (IT), network, or computer security, because at least there the subject matter is relatively defined. Cyber security is more of a mixed bag, and I hope that the Government of Canada will keep the expansionist tendency of cyber security in check. Focus on the threats we know and are having difficulty defending against, don’t go looking for new forms of troublesome conduct involving a computer that can be listed as a cyber security threat, and let’s talk about whether the government’s idea of cyber security includes purposefully maintaining certain kinds of insecurity.

Canada’s Way Forward on Cyber Security
As part of Canada’s way forward, we need to take an explicit position on the extent to which we want to promote information/IT security at the expense of other conceptions of security, particularly those favored by police and national security agencies. It seems disingenuous to promote the security of information and infrastructure without acknowledging the limits on how far government agencies are comfortable letting such developments go. Police in Canada and around the world are well aware of this conflict, particularly after the Snowden revelations led to widespread adoption of more secure technologies, which are now an obstacle to their ability to investigate crime. The showdown between Apple and the FBI is a recent manifestation of this tension, and Canada should not simply sit on the sidelines and wait for these new “crypto wars” to play out in the US and Europe.

We also need to discuss our membership in the Five Eyes, because Canadians have never had a real opportunity to do so. Predicated on a secret treaty, the Five Eyes often acts as a coordinated group and an exclusive club, supposedly based on its members’ “common Anglo-Saxon culture, accepted liberal democratic values and complementary national interests”. Originally formed to further intelligence collection and the sharing of information in the interests of national security, today the Five Eyes also includes collaboration of a more defensive nature in the realm of cyber security. We know that Canada’s membership in the Five Eyes can be a privacy threat to Canadians, because of last year’s revelation that CSE had for years violated the law by sharing Canadians’ personal information with these allies. We know that the Five Eyes can pose a security threat to our information infrastructure, because of documents revealed by Edward Snowden showing how the NSA worked to weaken the security of commonly-used systems in order to more easily obtain intelligence (efforts in which Canada appears to have been complicit).

In the US, the Snowden disclosures resulted in the President’s Review Group on Intelligence and Communications Technologies recommending the separation of the NSA’s offensive and defensive roles, through the creation of a new agency to take over the NSA’s defensive “information assurance” mission. Canada has yet to acknowledge the contradiction at the heart of the Five Eyes – where government agencies work simultaneously, and at cross-purposes, to both secure infrastructure and make it more vulnerable. In the US, the NSA is currently merging its offensive and defensive capabilities. This reorganization contradicts the recommendations of the President’s Review Group and strains trust with non-government partners, but it is at least being openly acknowledged and discussed. In Canada, a similar merging of offensive and defensive capabilities may very well be underway at CSE, but this is just what we can deduce from five-year-old Snowden documents, and the government’s position on the topic is limited to CSE’s statements about the same news story.

Can the Canadian government be a trusted partner in cyber security when it has never even acknowledged its role (or the conduct of its closest allies) in making information infrastructure less secure? Is it permissible to have one cyber security agency (CCIRC) responding to threats and vulnerabilities, some of which may have been created or kept secret by CSE and its Five Eyes allies? These are not hypothetical questions — just last week CCIRC issued an advisory to correct a vulnerability that the NSA had likely exploited for over a decade. If the attributions of security experts are correct, this means that the Canadian public is being notified about a security vulnerability that was kept secret and exploited by our closest cyber security ally, and we are learning about it through foreign actors whose motivations are unknown, but presumably do not include a desire to make our infrastructure more resilient.

Certainly, most Canadians have more to fear from more mundane threats, like phishing, ransomware, and the others listed as part of the government’s consultation. But I wanted to focus on the Five Eyes because these are precisely the sorts of blind spots that need to be uncovered through public consultation. If government agencies will not acknowledge this threat, whether because of secrecy or a failure to recognize what those outside government perceive, then it becomes the responsibility of Canadians to point out how the government’s version of reality differs from the one we are reading about in the news. However, at that point we are no longer having a shared discussion of cyber security, but two parallel discussions, with very different ideas of what constitutes a cyber threat.

These tensions at the heart of cyber security are not going anywhere, but by acknowledging them, the Government of Canada can at least take an explicit policy position, rather than the implicit one we can deduce from its former conduct. The Government of Canada has already taken the historic step of suspending metadata sharing with the Five Eyes until it is confident that this no longer threatens the privacy of Canadians. Before Canada resumes its full participation in a secretive alliance that works to both strengthen and weaken the security of systems we depend on, we need a stated position on such conduct. Specifically, are security vulnerabilities ever acceptable or desirable? Is it ever appropriate for government agencies such as CSE and the RCMP to use vulnerabilities that might otherwise be disclosed and corrected? What should we do when our closest cyber security allies are repeatedly found exploiting vulnerabilities and weakening security?

In response to the last of these questions, I would answer that Canada needs to either openly declare its support for government efforts to compromise security, including any limits or conditions, or it needs to publicly oppose these efforts. Only by working to strengthen IT security against all threats can the Government of Canada be a trusted partner in cyber security. To take no position at all by failing to acknowledge the issue is untenable, will weaken trust in government, and will continue the post-Snowden bifurcation of security into two separate discussions — one that includes government as a partner and one that does not.

Watching Six Years of the Regulatory Blockbuster


I spent a weekend re-watching videos of the “Regulatory Blockbuster” from the yearly Telecom Summit. The Summit is a major industry get-together, taking place over a couple of days in a Toronto convention centre, with presentations, networking and deal-making opportunities for significant players in the telecom industry. I’ve only been able to attend once, but luckily, if you’re interested in public policy, the most interesting event at the Summit is streamed on CPAC, where you can watch the last six years of the Regulatory Blockbuster.

The Blockbuster features an hour and a half of telecom industry lawyers (typically from incumbents Bell, TELUS, and Rogers, a smaller provider or two such as WIND or TekSavvy, and John Lawford from PIAC) discussing the regulatory issues of the day. In previous years, each participant would get a few minutes at the outset to present what they thought were the most important regulatory topics, followed by questions from the moderator (Greg O’Brien) or the audience. Sometimes the discussion gets a little heated, and it’s worth remembering that the people on the stage can be embroiled in disputes with one another at the CRTC or before the courts. It’s common for participants to point out the self-serving nature of rivals’ arguments, to allege hypocrisy or inconsistency, and to present themselves as disadvantaged victims of government regulation (or the lack thereof).

As an observer, it helps to understand the underlying conflicts and regulatory proceedings being discussed. However, even without knowing the nuances of CRTC procedure or regulations, the Blockbuster provides a sense of what kinds of issues are keeping industry lawyers occupied. It’s also an opportunity for participants to air their complaints with the existing regulatory regime.

The Mandated Access Regime

Over the past five years, the dominant issue at the Blockbuster has been how government regulates relationships between competitors in the industry, specifically through mandated access to incumbent facilities and wholesale connectivity. Other regulatory issues come and go as they pass on and off the federal government’s and CRTC’s agendas — lawful access, spectrum auctions, reviews of basic services. But mandated access has endured and expanded since the late 1990s, causing no end of complaints from both incumbents and the smaller competitors it is intended to benefit. In 2010, participants in the Blockbuster offered some analogies for how we might understand obligations under the regime. One likened it to a system in which airlines must reserve a certain number of seats for passengers of competing airlines, or parcel delivery companies are obliged to deliver the parcels of smaller competitors. In Canadian telecom, these obligations generally mean that the large incumbents (including Rogers, Bell, Shaw, TELUS) must allow “independents” (TekSavvy, Distributel, and many other smaller players) to use incumbent infrastructure and to purchase wholesale connectivity at set rates. These rates are meant to ensure that incumbents can profit from this arrangement, but the result is a system where small providers depend on large providers, and both compete for the same customers.

The conflicts that result are quite predictable. Small players argue that wholesale rates are too high for them to compete or expand their business, while large players argue the rates are just right, too low, or that mandated wholesale should be eliminated. Because the so-called independents are actually highly dependent on incumbent infrastructure, they must rely on their larger competitors to connect customers and resolve technical issues, such as network outages. Incumbents are therefore obliged to help their smaller competitors address customer concerns, and complainants at the CRTC have argued that incumbents treat competitors’ customers differently than their own.

From the outside, the whole setup looks ridiculous — as if it were designed to impose contradictory pressures and inevitable conflict amongst industry players (as well as endless proceedings before the CRTC). But to understand this regulatory regime, we need to consider that it was intended as a temporary framework to deliver us to the mythical land of facilities-based competition.

This image from the City of Calgary’s November 28, 2014 presentation to the CRTC speculates what a future of competing fibre facilities would look like

Facilities-based competition remains a myth because the world it envisions has never been clearly spelled out. Instead, facilities-based competition reflects both the persistent drive to create something resembling a competitive market in Canadian telecom following the monopoly era, and a rejection of the sort of structural (and functional) separation practiced in other parts of the world (most notably, large parts of Europe). Facilities-based competition means a telecom marketplace populated by competing networks (facilities): the Bell network competing with Rogers, TELUS, Shaw, and whoever else can afford to build telecom infrastructure. It has never been clear just how many competing networks there should be (with the exception of wireless, where the previous government seemed committed to bringing about four national competitors). However, while incumbent participants at the Blockbuster love to emphasize just how hard they compete with one another, the CRTC has repeatedly indicated that the current state of competition leaves a lot to be desired. Although Canada has hundreds of service providers, their facilities often do not overlap. Incumbents are sometimes classified as operating either inside or outside of their “territory”, and are reluctant to “overbuild” facilities where these already exist in a competitor’s territory (hence, Bell and TELUS have been repeatedly criticized at the Blockbuster for “sharing” facilities in their respective territories). Smaller competitors have sometimes wondered just how many competing wires the world of facilities-based competition imagines going into each home, and where the money to build all of these competing wires is meant to come from.

The CRTC has tried to address the inadequate state of competition in Canadian telecom through the mandated wholesale regime. The original idea (known as the stepping-stone or ladder-of-investment theory) was that small competitors could use the facilities of incumbents until they grew to have competing facilities of their own. Once some adequate number of competing facilities had flowered, the hand of regulation could fall away, and the market would take care of the rest. However, this never happened.

Instead, mandated access seems to be here to stay, and regulators talk a lot less about facilities-based competition than they used to.

The 2016 Telecom Summit

You can see the changing view of the mandated access regime through the past six years of the Blockbuster. By this year’s event (concluded earlier this month), the legitimacy of the regime was hardly raised as an issue (although Ted Woodhead from TELUS did remind everyone that the job of the CRTC had been to promote facilities-based competition, and that’s what “got us to being a leading broadband nation in the world”). A somewhat bigger concern was whether the CRTC was flouting the “law of the land” by effectively ignoring the 2006 Policy Direction — a document that was in many ways the high-water mark for the idea of facilities-based competition. There’s some dissonance in a regulator that has to justify its actions with reference to a document from a previous era in policy. Since the Policy Direction still stands, every decision the Commission takes is haunted by the ghost of Maxime Bernier reminding Canadians that they live in “a capitalist country, a country of freedom, and that regulation must be as limited as possible, to allow market forces to play out, particularly in telecommunications.”

Since 2006, we’ve seen a decade of continued mandated access, and a gradual acceptance of the fact that this regulatory approach is here to stay, even if we’re not clear on what the outcome is meant to look like. The recent expansion of mandated access to fibre seems to aim for a world of competing “middle-mile” networks, since the CRTC recognized that competitors “cannot feasibly or practically duplicate” last-mile wired networks (the part of the network that physically runs into your home).

I should note that the ghost of Maxime Bernier haunting the CRTC is just the imprint of his time as Minister of Industry between 2006 and 2007. The man himself is very much alive, seeking the leadership of the Conservative Party, and also spoke at the 2016 Telecom Summit. There, he lamented that the CRTC “seemed to take the Policy Direction seriously for a few years” before it “reverted back to its old ways”. Echoing incumbent positions at the Blockbuster (and deploying the wisdom of Ronald Reagan), Bernier asserted that the CRTC had failed to recognize just how much competition there was in Canadian telecom, which led him to conclude that the Commission should get out of telecom regulation altogether.

At the 2016 Regulatory Blockbuster, there were no calls for the CRTC to get out of regulating telecom competition and wholesale access, but incumbent participants gave their usual warnings about the harms of regulation, and much of the discussion was about what the role of the CRTC should be in these times. The first set of opinions concerned Chairman Blais’ remarkable statements about digital strategy during the Basic Services hearing. Then (after a suggestion for CRTC procedural reform floated by Mirko Bibic), discussion turned to the Commission’s relationship with industry and the public. Incumbents expressed the desire for a better way to sit down and talk with the CRTC, and even PIAC’s John Lawford voiced agreement that things had gotten out of hand in recent hearings — with so many diverse voices pulling the discussion every which way. The Commission has tried to do a better job of including the public, and recently numerous people have been engaging with the process for the first time. Admittedly, hearings would run more smoothly if there was a single voice speaking for the public interest, but that’s not the direction things are headed.

The rest of the time was spent discussing those topics that have come to the fore depending on the regulatory cycle and the whims of politicians. The biggest of these was the Basic Services review (and how to fill various gaps in connectivity), but Quebec’s Bill 74 also came up for discussion. While most Canadians haven’t heard of this issue, telecom lawyers are seriously worried about what it means for a province to block websites in order to maintain control over gambling.


So what do you learn from watching close to ten hours of Canadian telecom lawyers on a stage? First, as someone who tries to study changes in telecom policy, the archive of these videos is a very valuable resource, for which I’m grateful to the Summit organizers, CPAC, and the participants who put themselves up there each year.

Second, some new regulatory issues come into play at each Blockbuster, while some things stay the same. Facilities-based incumbents are going to keep advocating for facilities-based competition, but in 2016 this means pointing to a previous era in telecom policy. Incumbent representatives at the Blockbuster like to fondly remember previous iterations of telecom regulation (remember when government said it would let the market sort things out?), because today’s regulatory environment seems more hostile and just plain confusing.

What was once meant to be a temporary scaffold (mandated access) has become an enduring regime. Facilities-based competition was once the goal of regulatory liberalization, but at the CRTC it has now either shifted in meaning (from the last mile to the middle mile), or describes some competitive ideal that will always be out of reach. Since there seems to be no appetite for getting rid of mandated access regulation on the one hand, or for doing away with the goal of competing private networks on the other, this ambiguity seems set to continue for a long time.

Cabinet Rejects Bell’s Wholesale Appeal

Today, we learned what the Government of Canada thinks about Bell’s petition to overturn Telecom Regulatory Policy CRTC 2015-326, which will open fibre networks to wholesale access. I’m not sure anyone is surprised by this decision, since there were no indications that the Liberal cabinet (namely, Navdeep Bains, Minister of Innovation, Science and Economic Development) was predisposed to favour Bell’s position. In fact, there hasn’t been much indication of what the Liberal government’s stance on telecom policy is, or how it differs from that of the previous government. As a result, many are looking at this decision as a “first hint” of what to expect.

So, let me join the speculation about what this 200-word government statement really means:

First, cabinet recognizes that “wholesale broadband is a proven regulatory tool for enabling retail competition in the Internet service market”. This aligns with the increased legitimacy granted to wholesale access by the previous government, along with the CRTC’s decisions in recent years. The wholesale access regime is no longer imagined as some temporary stepping stone to facilities-based competition; mandated wholesale is here to stay. If the CRTC wants to focus the scope of facilities-based competition on the middle-mile, that’s fine, but this government values retail competition and consumer choice.

This government also seems to be playing it safe and leaving its options open. Supporting the CRTC is the default choice for cabinet, and there’s no strong reason or principled policy here for doing otherwise. The language used by the Minister echoes the Conservatives’ consumer-focused telecom populism, but it also indicates that the government’s telecom policy boat is maintaining its current heading. If this continues, the Liberals could simply avoid leaving their mark on telecom policy and manage the file according to a familiar pattern: espousing the importance of competition, supporting access to incumbent facilities, and distributing one-time injections of funding to individual broadband projects.

The other option would be for the Liberals to do something distinctive, which is probably what CRTC Chairman Blais was hoping for when he brought up the lack of a broadband policy in this country. There’s still no reason for me to believe that any distinctive digital policy is in the works, and if one is, it will likely be a long time coming, as the Liberals already have plenty on their plate. In the short term, the Bell-MTS deal could be another opportunity for the government to spell out what its vision of a competitive telecom industry looks like. However, my guess is that we will learn more from the government’s decision on that deal than from whatever brief statement accompanies it.