The House of Commons is now on summer break, but before everyone headed off, the Trudeau/Goodale Liberals introduced a monumental rework of Canadian intelligence and security institutions. This accomplishes some of what the Liberals previously indicated, but as Wesley Wark points out, such substantial changes to Canada’s national security bureaucracy are surprising. The implications are complex, with major reform for those overseeing CSIS and CSE (two new institutions: the National Security and Intelligence Review Agency and the Intelligence Commissioner) and changes to CSE’s mandate.
Experts and politicians have some time to chew on this bill’s different aspects, and for all things CSE, an essential source is the Lux Ex Umbra blog. However, here I want to offer a couple of thoughts on the cyber aspects of the reforms. As others have pointed out, these reforms will help to normalize certain types of acts (network exploitation and attack). One argument is that Canada’s new framework will help normalize in the international arena what a lot of states have been doing covertly, under dubious legal authority — “effects” like hacking and exerting influence in various domestic and foreign jurisdictions. The Canadian approach could either be a model for others interested in legal reform, or contribute to making these actions more acceptable and legitimate around the world. Domestically, this is also a normalization of the sorts of things that CSE has done, or wanted to do, for some years now.
In Canada, we have never had a debate about these questions. The national security consultation that ostensibly informs this move was not designed to ask them. Canada’s role in the Five Eyes is not under revision, and Bill C-59 is meant to better “align ourselves” with these cyber “partners”. The partners are meeting this week, amid an active push by allies (specifically, Australia) to get Canada’s cooperation in countering encryption. There’s little indication of where Canada stands on these questions today. However, given what appears to be our holding steady with the Five Eyes and C-59’s new legal framework, CSE can still end up promoting insecurity, in secret, at our allies’ request.
Ultimately, the success of C-59 will depend on how effective the new accountability mechanisms are. Canada’s previous experience includes government assurances about legal compliance and oversight, while routine illegality and surprising legal interpretations were carried out in secret. Some of this previous experience (like the CSIS ODAC database) is addressed in C-59, but on the most fundamental question — what kind of security will Canada promote in the world? — we seem to be doing what Canada has done since we hitched our national security to the U.S. in late WWII: defaulting to our allies. We may have some bold new security legislation (and a Minister of Foreign Affairs who recently made big statements about the need to “set our own clear and sovereign course”), but old concerns about the lack of a distinctly Canadian approach to international and cyber security are as relevant as ever.
Recently, I was reading through an edited collection titled The turn to infrastructure in Internet governance. Few of the chapters held my interest for long, and for a book supposedly about the infrastructure ‘turn’, too many of the topics had already been well-covered in the internet governance literature (like organizations devoted to internet governance and the DNS). In the book’s introductory chapter, DeNardis and Musiani write:
…there is increasing recognition that points of infrastructural control can serve as proxies to regain (or gain) control or manipulate the flow of money, information, and the marketplace of ideas in the digital sphere. We call this the “turn to infrastructure in Internet governance.” As such, the contributions in this volume… depart from previous Internet governance scholarship, by choosing to examine governance by Internet infrastructure, rather than governance of Internet infrastructure. (p.4)
I largely want to put aside the question of how well the contributions in the book achieve this, and just focus on the topic of ‘governance by infrastructure’, and what this means. First, governance by infrastructure necessarily implies governance of infrastructure, but the emphasis shifts to particular features of infrastructure as points of control through which various social processes can be governed. So what do we mean by infrastructure? For DeNardis and Musiani, citing Bowker and colleagues:
…the term “infrastructure” first suggests large collections of material necessary for human organization and activity—such as buildings, roads, bridges, and communications networks. However, “beyond bricks, mortar, pipes or wires, infrastructure also encompasses more abstract entities, such as protocols (human and computer), standards, and memory,” and in the case of the Internet, “digital facilities and services [ . . . such as] computational services, help desks, and data repositories to name a few… Infrastructure typically exists in the background, it is invisible, and it is frequently taken for granted. (p.5)
When it comes to the internet, infrastructure is more than just the ‘plumbing’ — it includes ‘abstract entities’ and social organizations, and this inclusive understanding might lead us to see all sorts of traditional internet governance studies as studies of infrastructure. So let’s try to narrow the focus to what makes infrastructure distinctive, besides the fact that it is frequently invisible.
Common definitions of the term discuss infrastructure as foundations, frameworks, and whatever provides support for something. There is a lot of overlap with the definitions of a public service or utility here, and this is why we typically think of electricity, water, and roads as infrastructure — without the underlying support of these systems or networks, countless social processes would grind to a halt. The early internet supported particular and specialized kinds of activities, but today it’s easy to see our digital networks as underpinning communications and social relationships in general, and therefore functioning as a kind of public good.
By seeing the internet as infrastructure, we might ‘turn’ to look at all of the ways it contributes to our daily lives. Much of this support is effectively invisible, and only comes to our attention when it stops working. The closer we get to the future promised by the Internet of Things, the more disruptive these outages will become. This is reflected in the classification of telecom networks as “critical infrastructure” — a category that has been the focus of government concern in recent years, leading to a proliferation of partnerships, policies, frameworks, and standards.
Critical infrastructure is governed so that it does not break, or so that it continues to provide essential services with minimal interruption. This is a developing and little-publicized topic (given the overlap with national security), so this sort of ‘governance-of-infrastructure’ has actually received little attention in internet governance scholarship. In contrast, the ‘governance-by-infrastructure’ that DeNardis and Musiani identify is about more than keeping the lights on and the data packets moving, and if we’re going to take this infrastructure turn seriously, one of the most important places to look is at ISPs as points of control. The idea that society can be governed through ISP responsibilities is now an old one, but remains a common approach. ISPs have obligations to connect to each other (or other institutions), and are called upon to monitor, increase, shape, limit or filter connectivity. Google and Facebook may have become massive operators of infrastructure, but last-mile and middle-mile networks remain essential chokepoints for internet governance.
ISPs are inextricably dependent on material infrastructure, since they are fundamentally in the business of moving packets to and from customers through a physical connection. Even wireless ISPs are limited by the laws of physics, as only so much information can be carried through the air (where it is also susceptible to interference). Accordingly, wireless ‘spectrum’ is carefully divided between intermediaries and managed (in Canada) by ISED as a precious resource – with spectrum licenses auctioned to intermediaries for billions of dollars (licenses that come with public obligations). Owning a license for spectrum is quite a different matter from actually using it, and to serve millions of customers, further billions of dollars must be invested in a system of towers and their attendant links. The wired infrastructure of ‘wireline’ ISPs can be even more expensive, since cable must run to each individual customer, requiring kilometers of trench-digging, access to existing underground conduits, or the use of privately-owned utility poles. This means that the rights-of-way which secured the early development of telephone networks remain important for anyone deploying wired infrastructure, further privileging incumbents who own conduit or have access to utility poles. These rights-of-way are also one of the only ways municipal governments can control telecom infrastructure, by negotiating or referring to municipal access agreements. However, struggles between municipalities and intermediaries over access to rights-of-way can be quite contentious, and may also be adjudicated by the CRTC.
Finally, as with all things, I’m interested in the language we use to discuss these topics. Calling something infrastructure implies something different from utilities or ‘public works’, but all three indicate a relation to an underlying public interest. Since so much of it lies in private hands, infrastructure is currently the preferred expression, but even this term reminds us that we all jointly depend on these corridors, poles, pipes, electronics, and the people who keep it all running.
Last week the CRTC released an important net neutrality policy (Telecom Regulatory Policy 2017-104) that got a lot of people talking. There’s been coverage by Dwayne Winseck, Michael Geist [1 & 2], Timothy Denton, Peter Nowak [1 & 2], and foreign reporting that understandably used the FCC’s approach in the U.S. for contrast. Jean-Pierre Blais reflected on the process in a recent interview (in which he also stated that the recent basic service decision was as close as the CRTC could come to recognizing broadband as a human right).
I’ve written about differential pricing before, and feel no need to summarize the decision here, or the decision-making framework it establishes, but there are some elements that stood out for me. First, this is the CRTC’s most explicit discussion of net neutrality ever. The term net neutrality didn’t even appear once in the earlier decision on differential pricing, and there has previously been a tendency to frame these topics in the regulatory language of ITMPs. Now the CRTC has embraced common lingo, and the latest regulatory policy is expressly “part of the broader public policy discussion on net neutrality. The general concept of net neutrality is that all traffic on the Internet should be given equal treatment by ISPs”. Elaborating its definition of net neutrality, the CRTC states that “net neutrality principles have been instrumental in enabling the Internet to grow and evolve as it has”. These principles include innovation without permission, consumer choice, and low cost of innovation (low barriers to entry). Here we have the CRTC laying out some internet values — what made the internet so successful and what needs to be preserved (see Timothy Denton’s laudatory post). This document is remarkable because it lays out something approaching an ideal vision for Canadian telecom, with the internet as a central part. There were elements of this in the 2009 ITMP decision, which together with the recent differential pricing decisions (and subsection 27(2) of the Telecommunications Act) now “effectively constitute Canada’s net neutrality code”.
For the rest of this post, I’d like to take a closer look at what the CRTC imagines or desires for Canadian telecom, specifically the roles of different actors and their relations. First, ISPs are common carriers, which generally means they are prohibited from discriminating or picking favorites among content. Chairman Blais has since said he thinks this CRTC decision will “reinforce the fact” that ISPs are “mere conduits”, playing a limited role in carrying information from one place to another. Once ISPs start making decisions about content they become gatekeepers to that content, and other concerns come into play (including net neutrality and copyright). Differential pricing can be used for just such a gatekeeping function, which would have “negative long-term impacts on consumer choice” as the CRTC predicts ISPs would make deals “with only a small handful of popular, established content providers – those with strong brands and large customer base”.
The scenario that worries the CRTC is one where vertically-integrated ISPs use their control over internet traffic to direct consumers to their own content or that of their partners. Differential pricing is one way of controlling consumer behavior, but arguments in favor of the practice say that it provides consumers with choice, and allows ISPs to innovate and compete through these offerings. In response to these arguments, the CRTC was forced to lay out its vision for innovation and competition. Unsurprisingly, the CRTC’s vision is for ISPs to engage in the noblest form of competition, facilities-based competition: “when ISPs compete and differentiate their services based on their networks and the attributes of the services on those networks, such as price, speed, volume, coverage, and the quality of their networks”. The most important innovations aren’t “marketing practices” like zero-rating, but improvements to ISPs’ networks. ISPs should focus on the internet’s plumbing, and consumers will choose the superior network.
While ISPs are imagined to be competing for customers based on the quality of their networks, competition for services is best served by the “open nature of the Internet”, which allowed “today’s large, established content providers” to grow and innovate. “In the Commission’s view, new and small content providers should enjoy the same degree of Internet openness in order to innovate, compete, and grow their businesses”. Since ISPs are envisioned as pipes, innovation in content should come from the edges of the network (or at least, that possibility should remain open). Content providers need to be able to enter the market and practice ‘permissionless innovation’, by giving consumers what they want without needing to cut a deal with each ISP that controls the last mile.
If we are trying to achieve something like a level playing field for content providers, then we can’t ignore the massive advantages that established content giants currently enjoy, and I wonder what else we might do to lower barriers to entry? Perhaps the whole idea of an ‘eyeball network’ is an obstacle, where the network’s users are imagined principally as consumers watching a one-way information flow. This may be fine if it’s easy for a new content provider to compete for eyeballs, but that’s not the case today unless you’re depending on an established content service (YouTube, Netflix) as an intermediary by having them carry your stuff. If we want to develop new ‘content’ in Canada, we need to recognize that in much of the country incumbent ISPs already act as the gatekeepers. If I wanted to start a new content service from my metaphorical garage, I would only be able to reach the global internet on my incumbent’s terms. These terms might include prohibitions on uses of their network, and the ISP’s control over addressing through NAT (imagine a world where every device could have a unique IP address…). Now imagine if I could easily get fibre to an internet exchange where I could connect to various international carriers… As with facilities-based competition, I think it’s important to try to imagine what an ideal world would look like when we’re talking about innovating and accessing diverse content over those facilities. And here too, I worry that the CRTC is more concerned with preventing existing concentrations of power from getting worse than with taking active steps to realize a specific vision.
This post will offer some reflections on the Digital Futures Symposium on broadband, held March 16 & 17 in Cochrane, Alberta, and updates on Alberta’s SuperNet.
I attended the first Digital Futures Symposium in Calgary in 2013, which turned out to be a great opportunity to learn about topics that I was becoming very interested in, like the SuperNet and the work that was underway to turn Olds into a gigabit community. At the end of that event, it was evident that there was a lot of frustration in rural Alberta over inadequate connectivity, but there wasn’t much going on to address this frustration. The organizers (academics with the Van Horne Institute) and some of the participants expressed a desire to keep working on these issues through some sort of ongoing collaboration. While I had my doubts about what this would produce, three and a half years later Digital Futures has become more relevant and useful than ever, and it is now just one of numerous efforts around the province to collaborate on rural broadband.
The group of academics organizing the Symposium has seen some change of personnel (one of the original Professors from Van Horne is now CRTC Commissioner Linda Vennard, who visited and participated in that capacity), and each of the meetings sees new faces coming with their own local concerns and questions. This latest Digital Futures was attended by some of the actors that were notably missing in 2013 — TELUS was one of the sponsors, and was there to make clear that it was very interested in working to meet the needs of local communities (some people were recently doubting the incumbent’s interest in rural Alberta). Axia, an original sponsor, also came with a substantial delegation.
Digital Futures hosts an interesting mix of municipal and regional leaders, and now also gets more attention from provincial government (at the federal level, there was also an ISED policy presentation). For me, the most interesting presentation was by Stephen Bull — Service Alberta’s Assistant Deputy Minister for the SuperNet Secretariat. By the sounds of it, Bull has made a significant impression in his first year on the job, and at last week’s Digital Futures he provided some important statements about SuperNet, at a time when the future of the network is at an important juncture (see previous post).
As previously mentioned, people in charge of SuperNet tend to spend a lot of time countering misconceptions about it, and so Stephen Bull’s presentation was organized around a series of SuperNet “myths” (a very different set than those addressed by Axia). Here are some of the most interesting bits from the presentation about where things currently stand:
-The SuperNet contract will be decided before the end of the summer. Axia, TELUS and Bell are pre-approved to submit for an RFP, but it sounds like the Government of Alberta (GoA) is still figuring out what it wants. A key question is what role different actors are going to play (local champions, different levels of government, the “ISP community”). The Premier has had one briefing on the issue, but asked for a second one — so this file has her attention, and things seem pretty wide open.
-What does the GoA think about SuperNet as it currently exists? Well, according to Stephen Bull, the primary rationale for the network (connecting public facilities) was achieved, but last-mile connections for rural properties are a big outstanding issue. Service Alberta counts 36 ISPs in Alberta (others at the Symposium counted 38-40), but that doesn’t mean there is last-mile competition across the province, and we should be realistic about what market forces can achieve (“Myth #6: The private sector will solve this issue”). SuperNet 2.0 seems like it will continue to have the goal of improving connectivity beyond public institutions, but Service Alberta seems aware that this has been a key weakness of SuperNet 1.0, and wants to improve how ISPs (including new, community-owned ISPs) connect.
-Stephen Bull didn’t have very positive things to say about the existing SuperNet contract, and provided some fascinating background about how Bell and the GoA’s interests were negotiated in 2005. The result was a poorly-written contract that’s open to interpretation, provides few enforcement options for the government, and isn’t clear on the roles of the different parties. Presumably, even if the fundamentals of the relationship stay the same (with Axia maintaining its role as operator), these issues will be cleared up.
-One big question is just what the current state of the network is, and a government audit is underway to figure this out. In 2035 the GoA has the option to buy the Extended Area Network (currently run by Axia) for $1, but what exactly would they be buying? It wouldn’t be a singular network, because a lot of it is composed of leased fibre lines. Also, there are old electronics in need of repair (maintenance costs, previously covered by Bell, are actually a big part of the reason for revisiting the contract).
-The advice for communities is to “think very carefully before entering into any long-term agreement with an ISP before the future of SuperNet is known”. The worst-case scenario is a well-connected community that goes dark because something happens to the ISP (the advice being to include an option for a community to buy the infrastructure if an ISP leaves a jurisdiction).
-Finally, Stephen Bull expressed his perceived need for a provincial broadband strategy, which in his view would require finding a Ministry with the funding, capacity and will to do it (this is not the Service Alberta mandate). If no one is willing to take the lead on this at the GoA, folks at the Symposium wondered if we could produce something more bottom-up, and get the GoA’s blessing. Some of this is now being coordinated through the Van Horne Institute, and will be the next step for some Digital Futures participants.
Big-picture takeaways from the Cochrane Symposium:
In the two years or so since I attended Digital Futures 2015, broadband issues have exploded across rural Alberta. This hasn’t been uniform by any means (things are moving faster in the South than in the North), but whereas a couple of years ago it was tough to get many local governments to take the issue seriously, now councils have generally recognized how vital broadband is, and many are trying to improve connectivity. They’re working in partnerships with each other or making their own industry deals, principally with Axia. For a lot of regions and communities doing this, an early step is to get a consultant to tell them what the options are, and from the sounds of it Craig Dobson from Taylor Warwick has been winning the contracts for most of this work. A lot of rural communities are still at this research stage, but the hurdle of convincing rural governments that the internet matters has mostly been overcome.
What’s striking is the diversity of approaches to connectivity that are being discussed, although many of these exist only in potential. To paraphrase Lloyd Kearl (from Cardston County and AlbertaSW), public solutions take time: you have to engage with citizens and with various political and commercial organizations. Private industry can move quickly, and indeed TELUS and Axia have been busy putting fibre in the ground, while public bodies deliberate taking a more active role in providing connectivity (the story of Olds is ever-present in these deliberations). In the next few years, we will see what these alternate approaches to connectivity in rural Alberta will amount to. In the short term, the big question is still what will happen with SuperNet 2.0…
Alberta is home to a remarkable fibre-optic network called the SuperNet, and the provincial government is about to decide what to do with it. This post will briefly summarize how this situation came to be, and what’s at stake in the forthcoming decision about “SuperNet 2.0”.
At the end of the 1990s, Alberta was riding high on oil revenues and the promise of internet-enabled prosperity. The provincial government decided to invest in a network that would connect government and public buildings (such as schools and medical facilities) across the province. The need for public sector connectivity was combined with the need for rural internet access, and the idea was that last-mile ISPs would be able to plug into the SuperNet as a middle-mile network to reach towns and villages across the province. Economic development would be extended beyond the cities, bridging the digital divide. In those heady days, there was talk of luring Silicon Valley businesses, like Microsoft or Cisco, to rural Alberta. Entrepreneurs and knowledge workers would set up shop in small towns, rural patients could be diagnosed through telehealth, and university lectures could be beamed into remote schools.
The 2000s followed a decade of telecom liberalization and provincial privatization, including privatization of telecom assets (AGT), so the last thing the provincial government wanted was a publicly-owned network. Science and Technology Minister Lorne Taylor (credited with leading the SuperNet’s development) made clear that running telecom networks was the business of private industry, not government. The CTO of Alberta Innovation and Science emphasized that it was definitely not a government network. Government wasn’t going to build it, wouldn’t own it, and wouldn’t manage it. The private sector would be unleashed and competition would take care of the rest. All government had to do was throw in $200 million and set the terms of the deal.
As Nadine Kozak writes, the SuperNet was a contract, and not public policy. The contract was signed without public input or legislative debate. Citizens would be consumers of the network, and didn’t need to know the details of the deal, which was complicated and confidential. The contract would have to be renegotiated after construction fell behind and private sector partners Bell and Axia fought legally over failures to live up to their respective obligations. The network was eventually completed without fanfare in 2005, with Bell eating the additional costs of the delay. Following another renegotiation of the contract in 2005, Axia would run the SuperNet for thirteen years (including the three-year extension granted in 2013), and the government would have the option of assuming ownership of the rural network after thirty.
Public infrastructure in many rural communities did receive a considerable boost in connectivity thanks to SuperNet, but the province never did become Silicon Valley North, and the last mile of the network only extended to public sector clients. It was imagined that private ISPs would connect to the network and compete with each other over the last mile for residential and business customers (see below), but in much of rural Alberta this never happened. Local incumbent TELUS preferred to use its own network, even choosing to (over)build additional facilities in places where it would have been cheaper to use SuperNet.
Meanwhile, government responsibility for the network shifted or split between departments through successive reorganizations. Premier Redford stated, “We haven’t focused on it as a priority … (It) seems to have been more of a problem between government departments not wanting to take ownership, or not knowing exactly who’s the leader”. For those who don’t have to deal with it directly, SuperNet is just another piece of the invisible infrastructure that keeps our world running, and today, most Albertans have never heard of it.
Axia is a remarkable company in the Canadian telecom industry, and the SuperNet contract was key to making it what it is today. Axia has since promoted or developed similar open-access fibre networks in several countries, but seems to have recently re-focused on Alberta. When it comes to the SuperNet, its prime responsibility has been to run the network (as Axia SuperNet Ltd.). In this capacity, Axia serves public sector clients, and acts as an “operator-of-operators” for ISPs wishing to connect to SuperNet for backhaul. In line with the principles of running an open-access network, Axia is not supposed to compete with the last-mile ISPs, or offer internet access to residential and business clients through SuperNet. Axia has also helped produce lots of promotional content over the years about the SuperNet’s accomplishments and the “unlimited possibilities” offered by this totally amazing network.
On the other hand, Axia’s actions indicate that the company clearly recognizes the limitations of SuperNet, and has worked to address these through Axia Connect Ltd., a separate business endeavour from Axia SuperNet Ltd. (see this recent CRTC appearance by CEO Art Price on the distinction). What Axia SuperNet Ltd. cannot legally do (act as a last-mile ISP), Axia Connect can and does. Whereas Axia SuperNet Ltd. does not compete with private industry in the last mile, Axia Connect has been putting many millions of dollars into last-mile connections, focusing its efforts on deploying FTTP to parts of Alberta hitherto neglected by incumbents. In the process, Axia is helping resolve the digital divide in a way that the SuperNet could not, but it is also competing with other approaches to the same problem, such as those currently being pursued through the Calgary Regional Partnership.
The distinction between Axia SuperNet and Axia Connect has kept the company compliant with the terms of the SuperNet contract, but claiming that Axia Connect’s FTTP deployments are “made possible by having access to the SuperNet” doesn’t help the public draw this distinction. Axia’s brand in Alberta is intimately linked to SuperNet, and for the first time, we are forced to consider what a decoupling might look like. This is because the SuperNet contract is once again up for renewal, except this time, Axia is not being granted a simple extension. Even if the company successfully wins the contract for the next term, the government seems to be looking at a “new vision” for the deal.
In short, the situation in Alberta is as follows: The SuperNet is legacy infrastructure, largely built or acquired from existing fibre assets in the early 2000s, and for now it should still be a valuable network with a lot of potential. Observers from other parts of Canada have sometimes looked at it with envy, but the project’s history has been troubled, and SuperNet has only achieved part of its original vision. The existing (and “increasingly-out-of-date”) contract expires in June 2018, with a decision on SuperNet 2.0 expected soon, and Axia, Bell, TELUS, and Zayo competing for the contract. Will a traditional incumbent become the government’s private sector partner? How messy would a transfer of responsibilities from Axia be, should the company lose the bid? If Axia wins, how will the deal be restructured to address the shortcomings of SuperNet 1.0? These are the big questions right now.
Meanwhile, broadband is a hot topic in rural Alberta, with active regional discussions, like an upcoming Digital Futures Symposium in Cochrane, the related Alberta Broadband Toolkit, municipal collaboration through the Calgary Regional Partnership, and broadband studies being carried out by the REDAs. TELUS has also been active with fibre upgrades, and there is a “land grab” underway as rural communities examine competing models of connectivity and decide how best to meet their needs. Some communities are trying to convince Axia Connect to build them a local network (by demonstrating there are enough interested subscribers), while others are collaborating on a middle-mile backhaul option (skipping the SuperNet), or considering investing in a publicly-owned last-mile network (usually a choice between dark fibre, lit fibre, and wireless). It’s hardly a broadband gold rush out there in rural Alberta, but this is the most exciting I’ve seen it since I started paying attention several years ago.
Lots of dimensions here left to cover, and new developments expected. More Alberta explorations and updates to follow!
Should all Canadians have access to broadband? The answer these days is almost invariably yes, but the more specific questions that follow are: How do we connect those without access (whose responsibility is it, who should pay for it), and what counts as broadband anyway?
The latter question results in different definitions or ‘targets’ for connectivity, most often as upload/download speeds, which can be mandated (hard targets) or ‘aspirational’ (soft targets). These targets often lag behind how people actually use the internet, presuming some ‘basic’ form of connectivity that doesn’t involve streaming media or uploads. The CRTC just revised such a target, from 2011’s measly 1 Mbps up and 5 Mbps down, to ten times that (10 & 50 Mbps), under the rationale that this level of connectivity is currently vital for Canadians. This is also presented as a forward-looking approach for a gigabit world, since the CRTC asserts that “the network infrastructure capable of providing those speeds is generally scalable, meaning that it can support download and upload speeds of up to 1 Gbps”.
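For a sense of what that tenfold jump means in practice, here is a rough back-of-the-envelope sketch (the 2 GB file size and the assumption of a sustained full-rate connection are mine, purely for illustration):

```python
def download_seconds(size_gb: float, speed_mbps: float) -> float:
    """Transfer time for size_gb at a sustained speed_mbps (decimal units)."""
    return (size_gb * 1000 * 8) / speed_mbps  # 1 GB = 8000 megabits

# A hypothetical 2 GB video at the old (5 Mbps) and new (50 Mbps) download targets:
for speed in (5, 50):
    print(f"{speed} Mbps: {download_seconds(2.0, speed) / 60:.0f} minutes")
```

The same video that ties up an hour at the 2011 target takes about five minutes at the new one, which is the difference between streaming being theoretical and being routine.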
The CRTC’s revised broadband target was the result of the basic service hearings (see previous post), which also led to a number of other decisions within a new regulatory policy (2016-496). These include forthcoming targets for latency, jitter, and packet loss, a new funding mechanism for extending broadband networks, and accessibility requirements for Canadians with disabilities. But while the specifics of these policies are important, the broader shift that has taken place was signaled by Chairman Blais’ decision to interrupt the hearings with a statement about just how vital broadband has become for Canadian “economic, social, democratic and cultural success”. This sentiment is echoed in the newly-written policy — Canadians require broadband to participate in society, even if this society tends to be characterized as a “digital economy”, with “social, democratic and cultural” dimensions getting less emphasis. Still, around twenty years after the arrival of the public (commercial) internet in Canada, the CRTC has finally declared that broadband is a vital need for all, and not some optional luxury.
All of this has happened in the same regulatory policy that signals a movement away from what was once considered a vital need for society — universal telephone access. In today’s world, differentiating digital networks from POTS (plain old telephone service) is increasingly pointless, but the CRTC’s decision works to “shift the focus of its regulatory frameworks from wireline voice services to broadband Internet access services”, creating a new “universal service objective” for broadband.
Universal telephone service was a great twentieth-century achievement in Canada, although there seems to be some controversy among telecom policy folks whether this resulted from regulation or the initiative of private industry. Positions on the matter seem to depend on whether one wants to credit industry or public policy, because for nearly all of the twentieth century (particularly since 1905) the two are hard to distinguish. Whether it was formalized or not, universal service (achieved by using urban networks to subsidize rural ones) was a key pillar of the monopoly era. Once the telephone ceased to be a luxury good, telephone companies were expected to honor the principle of universalism, and extending twisted copper to every home became part of the great nation-building project. However, the internet arrived at the close of the monopoly era, and the old telephone network was inadequate for what we would consider to be broadband today. As with telephony, internet access was initially seen as a luxury. Now that it is basic and vital, the existence of populations without access to broadband is a problem that cannot be ignored.
And so, we have a new universal service objective for broadband in Canada, we will soon have a new pot of money that can be awarded to companies to work towards it, but on the bigger issues of connectivity and digital policy, we are still waiting for coherence.
Another federal government consultation has recently wrapped up, this time with Public Safety asking about national security. Like other ongoing consultations, this one was criticized (for example, by Christopher Parsons and Tamir Israel) as framing the policy issue in a way that the government prefers, and trying to legitimate some ideas that should have been discredited by now. I would say that the consultation framed the issue very much as Public Safety (for instance, the RCMP) would prefer, repeating old rationales, and seeing the world from a perspective where the ability to exercise sovereign will over information flows is paramount. The Green Paper provided for background reading foregrounds the concerns of law enforcement & security agencies, is peppered with the words “must” and “should”, and advances some dubious assumptions. Public Safety asked for feedback on terrorism-related provisions (including C-51), oversight, intelligence as evidence, and lawful access. The last of these has seen a number of previous consultations, but is back in the news as police make their case for the issue of “going dark” (which has become part of the RCMP’s “new public narrative” for a set of concerns that were once broadly talked about as lawful access).
I let this one get away from me, so I didn’t have anything ready for Dec. 15 when the online submission closed. Regardless, I’ve decided to complete most of the questions related to the topic of Investigative Capabilities in a Digital World as a blog post. I don’t feel particularly bad for missing the deadline, since several of these questions border on ridiculous. For a true public consultation on what has long been a very contentious issue, it would be important for the questions to be informed by the arguments on both sides. Privacy experts would have asked very different questions about privacy and state power, and on a number of topics Public Safety seems to be trying to avoid mentioning the specific policies that are at stake here.
How can the Government address challenges to law enforcement and national security investigations posed by the evolving technological landscape in a manner that is consistent with Canadian values, including respect for privacy, provision of security and the protection of economic interests?
When I think of Canadian values, “privacy, provision of security and the protection of economic interests” are not what come to mind. When I ask my students what they associate with Canada, these particular values have never come up in an answer. I think we should consider democracy as a fundamental value, and understand that state secrecy is antithetical to democracy. When it comes to the relationship between citizens and the state, Canadian values are enshrined in the Charter, and the Supreme Court is ultimately responsible for interpreting what is consistent with the Charter. Therefore, Canadians deserve to understand what is being done in their name if we are to have a meaningful democracy, and this includes the existence of an informed, independent judiciary to decide what government actions are consistent with Canadian values.
In the physical world, if the police obtain a search warrant from a judge to enter your home to conduct an investigation, they are authorized to access your home. Should investigative agencies operate any differently in the digital world?
If we accept the digital/physical distinction, the answer is a definite yes — investigations carried out today operate differently than they did in the simpler, more “physical” 1980s. But it is important to keep in mind that analogies between the digital and physical environment can be misleading and dangerous. When it comes to the “digital world”, I prefer to talk about it in digital terms. The stakes are different, as are the meaning of terms like “to enter”. If we must make these comparisons, here is what treating these two “worlds” as analogous would mean:
The police can enter my home with authorization, and seize my computer with authorization. I am not required to make my computer insecure enough for the police to easily access, just as I am not required to keep my home insecure enough for the police to easily access. I am not required to help the police with a search of my home, and so I should not be required to help police search my computer. If I have a safe with a combination lock in my home, I cannot be compelled by police to divulge the combination, so by analogy, I should not be compelled to divulge a password for an encrypted disk.
But analogies can only take us so far. A computer is not a home. Metadata is not like the address on a physical envelope. We need to understand digital information in its own terms. To that end, some of the more specific questions found further in this consultation can produce more helpful answers. Before we get to these however, this consultation requires me to answer a couple more questions based on the presumption of digital dualism.
This question is hard to answer without knowing what it means to “update these tools”, and seems to be intended to produce a “yes” response to a vague statement. Once again, digital/physical comparisons confuse more than they clarify — these are not separate worlds when we are talking about production orders and mandating the installation of hardware. We can talk about these topics in their own terms, and take up these topics one at a time (see further below).
Is your expectation of privacy different in the digital world than in the physical world?
My answer to this question has to be both yes and no.
No, because I fundamentally reject the notion that these are separate worlds. I do not somehow enter the “digital world” when I check my phone messages, or when I interact with the many digitally-networked physical devices that are part of my lived reality. Privacy law should not be based on trying to find a digital equivalent for the trunk of a car, because no such thing exists.
Yes, expectations of privacy differ when it comes to “informational privacy” (the language of Spencer), because the privacy implications of digital information need to be considered in their own terms. Governments and public servants do Canadians a disservice with phonebook analogies, license plate analogies, or when they hold up envelopes to explain how unconcerned we should be about government access to metadata (all recurring arguments in the surveillance/privacy debate). In many cases, the privacy implications of access to digital information are much more significant than anything we could imagine in a world without digital networks and databases of our digital records.
Basic Subscriber Information (BSI)
As the Green Paper states, nothing in the Spencer decision prevents access to BSI in emergencies, so throwing exigent circumstances into the question confuses the issue, and once again seems designed to elicit a particular response that would be favorable to police and security agencies. In the other examples, “timely and efficient” is the problem. Agencies understandably want quicker and easier access to personal information. The Spencer decision has made this access more difficult, but any new law would still ultimately have to contend with Spencer. Government, police, and security agencies seem to be in a state of denial over this, but barring another Supreme Court decision there is no going back to a world where the disclosure of “basic” metadata avoids section 8 of the Charter, or where private companies can voluntarily hand over various kinds of personal information to police without fear of liability.
If the process of getting a court order is more onerous than police would like, because it would be easier to carry out preliminary investigations under a lesser standard, it is not the job of government to find ways to circumvent the courts. If the process takes too long, there are ways to grant the police or the courts more resources to make it more efficient.
There are ways to improve the ability of police to access metadata without violating the Charter, but any changes to the existing disclosure regime need to be accompanied by robust accountability mechanisms. Previous lawful access legislation (Bill C-30) was flawed, but it at least included such accountability measures. In their absence, we only know that in a pre-Spencer world, police and government agencies sought access to Canadian personal information well over a million times a year without a court order, and that a single court order can lead to the secret disclosure of personal information about thousands of Canadians. Police and security agencies have consistently advocated for these powers, but failed to document and disclose how they actually use them. This needs to change, and the fear of disclosing investigative techniques cannot be used to prevent an informed discussion about the appropriateness of these techniques in a democratic society.
Do you consider your basic identifying information identified through BSI (such as name, home address, phone number and email address) to be as private as the contents of your emails? your personal diary? your financial records? your medical records? Why or why not?
The answer to this question depends on an exhaustive list of what counts as BSI. It is important to have a clear definition of what counts as BSI, because otherwise we might be back in the pre-Spencer position where police are able to gain warrantless access to somebody’s password using powers that were meant for “basic identifying information”.
The answer to this question also depends on an explanation of what is done with this “basic” information. As was recognized in Spencer, we can no longer consider the privacy impact of a piece of personal information in isolation. This is how lawful access advocates prefer to frame the question, but this is not how investigations work in practice. BSI is useful only in combination with other information, and if we are talking about metadata (a term that curiously, never appears in the Green Paper) it is now increasingly understood that metadata can be far more revealing than the content of a personal communication, when it is used to identify people in large datasets, determine relationships between individuals, and reveal patterns of life.
So in short, yes — I am very concerned about BSI disclosures, particularly when I don’t know what counts as BSI, and what is being done with this information.
Do you see a difference between the police having access to your name, home address and phone number, and the police having access to your Internet address, such as your IP address or email address?
I see an enormous difference. As previously discussed, these are not analogous. An IP address is not where you “live” on the internet — it is an identifier that marks interactions carried out through a specific device.
This is not a question… Yes, all of this is true.
Should Canada’s laws help to ensure that consistent interception capabilities are available through domestic communications service provider networks when a court order authorizing interception is granted by the courts?
The key word here is “consistent”, and the question of what standard will be required. It would be very easy for government to impose a standard that large telecom incumbents could meet, but which would be impossible for smaller intermediaries. As things are, the incumbents handle the vast majority of court orders, so I would love to see some recent statistics on problems with ‘less consistent’ intermediaries, particularly if this is a law that might put them out of business.
I think the answer to this has to be never. People cannot be forced to divulge their passwords — in our society they can only be put in prison for very long periods of time. In other cases, assisting with decryption means forcing Apple to break through their own security (which was meant to keep even Apple out), or driving companies out of business unless they make products with weak security. This does not work in a world where a single individual can create an encryption app.
How can law enforcement and national security agencies reduce the effectiveness of encryption for individuals and organizations involved in crime or threats to the security of Canada, yet not limit the beneficial uses of encryption by those not involved in illegal activities?
By doing anything other than mandating insecurity for everyone. The answer cannot be to make technology insecure enough for the state to exploit, because this makes everyone insecure, except for those who use good encryption (which has become too commonplace to stamp out).
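To underline just how commonplace the building blocks have become, here is a toy one-time pad in a few lines of standard-library Python. This is an illustration of the "single individual" point, not production cryptography (a one-time pad is only secure if the key is truly random, as long as the message, and never reused):

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple:
    """One-time pad: XOR each byte with a fresh random key byte."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """XOR again with the same key to recover the plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ct, key = otp_encrypt(b"meet at noon")
assert otp_decrypt(ct, key) == b"meet at noon"
```

If a dozen lines can implement encryption that no mandate can weaken after the fact, legislating backdoors into commercial products only penalizes the law-abiding.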
The final two questions deal with data retention, a topic I’ll leave for a later time…
The CRTC recently concluded its differential pricing (or net neutrality) hearing. If you weren’t glued to CPAC earlier this month, you can check out the transcripts while we wait on the Commission’s decision. Like any regulatory issue before the CRTC, this one has a long history. The hearings included several mentions of the Canadian Gamers Organization’s complaint against throttling of certain types of traffic associated with gaming, and the related ITMP regime that developed out of complaints that certain peer-to-peer traffic (like BitTorrent) was being throttled. These cases clearly implicated net neutrality, because they involved ISPs treating some kinds of traffic differently than others, making certain applications perform worse. The CRTC took a dim view of this sort of discrimination unless ISPs could justify its necessity. For example, blocking ‘malicious’ traffic (like DDoS) is acceptable under the ITMP regime because the reasons are deemed valid, but torrenting shouldn’t be blocked just because it is sometimes used to infringe copyright. In an alternate world of net neutrality absolutism we might have ended up with a regulatory regime under which all traffic is protected, and ISPs are legally prohibited from mitigating the sorts of DDoS attacks that have been knocking many services offline in recent years. However, most net neutrality advocates would not support such an extreme interpretation. Under the existing regulatory regime, Canadian ISPs can intervene when they can justify the need, but are not generally allowed to give some kinds of traffic preferential treatment over others.
Most recently, the question has been whether the practice of pricing certain types of traffic differently than others amounts to a similar kind of discrimination. For this, we owe a debt to Ben Klass, whose 2013 complaint (while he was an MA student in Manitoba) got the ball rolling. Klass is one of a small (but growing) number of individuals who have participated in a regulatory process that was really designed to serve the institutions that are being regulated (the ISPs). His work is a great example of how a regulatory system that depends on parties coming forward with complaints fails, when the stakeholders (ISPs) who are meant to come forward don’t want to complain, even though the issue is of public policy importance. Differential pricing is clearly an important public policy debate to have, and the CRTC has recognized as much with the recent hearings.
While an ISP may treat traffic related to Netflix, YouTube, and CraveTV the same way, if two of these services count against a subscriber’s data cap while one does not, then that is a form of differential pricing (known as zero-rating). I may be able to watch Netflix or YouTube without buffering, but if an ISP makes Netflix zero-rated, I will end up paying more at the end of the month if I watch YouTube and exceed my cap. In this example, distinctions are being made about traffic passing through these networks, and they will presumably affect the behaviour of subscribers. The ethical dimensions of these discriminations become clear in situations where ISPs start favoring services in which they have an interest, or when money starts changing hands between companies so that ISPs treat certain applications more favorably than others. Instead of blocking content, an ISP might simply make it unaffordable, with roughly the same effect.
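To make the arithmetic concrete, here is a minimal sketch of how zero-rating changes a bill. The plan, the cap, the overage rate, and the usage numbers are all hypothetical:

```python
def billable_gb(usage_gb: dict, zero_rated: set) -> float:
    """Sum only the traffic that counts against the data cap."""
    return sum(gb for app, gb in usage_gb.items() if app not in zero_rated)

def monthly_overage(usage_gb: dict, zero_rated: set,
                    cap_gb: float = 100, per_gb: float = 2.0) -> float:
    """Charge for non-zero-rated usage above the cap."""
    return max(0.0, billable_gb(usage_gb, zero_rated) - cap_gb) * per_gb

usage = {"netflix": 80, "youtube": 60, "other": 20}  # GB per month (hypothetical)
print(monthly_overage(usage, zero_rated={"netflix"}))  # 80 GB billable, under cap: 0.0
print(monthly_overage(usage, zero_rated=set()))        # 160 GB billable: 120.0
```

Identical usage, identical network treatment, but one subscriber pays $120 more at the end of the month. That price signal is the whole point of the practice.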
Many ISPs (large and small) have argued that these policies are not nefarious attempts to control subscriber behaviour, but are all about offering choice to consumers, and differentiating themselves from their competition. Some have continued to claim these discriminations are about managing network congestion (much like the old rationale for throttling BitTorrent), but this argument took a beating at the CRTC hearings and isn’t likely to be very convincing. There are good business reasons why you might want to offer customers different options, including unlimited use of a particular app. However, if an ISP is concerned about the amount of bandwidth people are using, zero-rating certain services and imposing caps on the rest seems like a silly way to address the problem.
The CRTC’s forthcoming decision has to grapple with some tough questions, and some easy ones. Vertically-integrated companies using internet pricing to discriminate against competing services has analogies with common carriage in the railway/telegraph era, and feels like the sort of unjust discrimination the CRTC is meant to prevent. But if we are going to accept the existence of data caps (and not everyone agrees we should) then should it be a matter of principle to subject all traffic to the cap? If we can discriminate against malware, maybe we can discriminate in favor of security updates, by zero-rating them, or zero-rate access to essential government services. Without data caps, these become non-issues, but a world without caps would have its own issues (which wireless and satellite providers are well aware of).
It’s times like these I don’t envy the regulator’s job.
Finally, we should remember that differential pricing, just like interventions against malicious traffic, presumes monitoring to accurately distinguish different applications and data usage. The ISP does not need to know exactly what subscribers are doing online, but it needs to be able to tell when subscribers are using a zero-rated service. Unless the ISP is somehow relying on the app to provide this information, this means using DPI technology to inspect and categorize traffic. For better or worse, differential pricing is part of the process of intermediation, in which ISPs play a growing and more refined role in governing our digital flows.
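As a crude illustration of that accounting problem (real DPI classifiers are vastly more sophisticated, and every name below is hypothetical), the zero-rating logic ultimately amounts to matching a flow against a list and metering only what falls outside it:

```python
# Hypothetical zero-rated service list, keyed by server name
# (e.g., as observed in the TLS SNI field of each flow)
ZERO_RATED_HOSTS = {"video.zerorated.example"}

def classify(server_name: str) -> str:
    """Crude DPI stand-in: match the observed hostname against a list."""
    return "zero-rated" if server_name in ZERO_RATED_HOSTS else "metered"

# Flow records as (server name, megabytes transferred)
flows = [("video.zerorated.example", 500), ("video.other.example", 300)]
metered_mb = sum(mb for host, mb in flows if classify(host) == "metered")
print(metered_mb)  # 300
```

Even this toy version shows why the ISP must look inside (or at least alongside) the traffic: billing correctly requires identifying the application, not just counting bytes.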
We’re still in the middle of public consultations on what seems like every domain of policy for the federal government, and that includes cultural policy. Canadian culture has a fascinating history, particularly as seen through various efforts over the years to shape, manage, and protect it. Before the Second World War, English Canada’s cultural identity was lodged firmly in the British Empire, and efforts to shape culture were targeted at groups who didn’t fit the mold, like First Nations. During and after the war, the state became involved in creating a national identity and a distinctly Canadian national culture, independent of Britain, and often in opposition to the cultural threat posed by the media industries of the United States. A variety of institutions were directed to this task, including the NFB, CBC, Canada Council for the Arts, CRTC, and the complex set of agencies that administer what we might call the Canadian content (CanCon) regime (though calling it a regime might suggest more coherence than is actually the case).
By the time the twentieth century ended, and the internet was opened to the public, efforts to actively shape Canadian culture into some prescribed form had largely been abandoned. Instead of creating a particular national identity, or telling a national narrative, the concern shifted to supporting CanCon creators and ‘telling Canadian stories’, whether those stories were Trailer Park Boys or Anne of Green Gables. However, this meant that justifications for government’s role in promoting Canadian culture were often on fairly thin grounds — attracting or retaining cultural industries, or making sure the characters we saw in popular culture were ones we could relate to. In the 1990s, sweeping discussions of internet policy (such as the Information Highway Advisory Council reports) were still based on the assumptions of cultural protectionism — that we should find ways to promote and protect Canadian culture on the “information highway”, since international media flows were a threat to cultural sovereignty. In the end (and unsurprisingly given its composition), IHAC ended up divided on this topic, and didn’t support an ISP tax or any radical measures like a cultural firewall in its final report. Subsequent discussions of online cultural policy have been more limited, including debates over tariffs, copyright legislation or how the CRTC should classify over-the-top services like Netflix.
Online services have largely avoided being subject to CanCon regulation, and according to Canada’s pollsters, the country’s population isn’t keen to change this (and this is especially true for people under 35). We’ve gotten quite used to our tax-free Netflix, and extending the scope of internet regulation is not an easy sell. I think many Canadians have no idea this whole world of content regulation even exists — I certainly had no clue until I started volunteering at the campus radio station, and even then it’s not like anyone sat me down to explain the purpose of this system.
Canadian culture, like telecom, is a domain of public policy that is governed by several (sometimes overlapping) federal departments. The Department of Canadian Heritage is largely responsible for culture, but the CRTC regulates telecom and broadcasting in Canada, administering their respective acts. This puts the Commission in charge of two rather different sets of priorities for what is often the same media infrastructure, one of which includes cultural promotion and protection.
In an age when telecom meant telephones, and broadcasting meant television and radio, the two really did appear to be distinct categories, but that appearance has faded, and so there have been numerous calls to somehow revise or consolidate these mandates. As former CRTC Commissioner Denton writes, there is a contradiction between the two statutes: “The Broadcasting Act says ‘go forth and discriminate in favour of Canadian programming’. The Telecommunications Act says ‘thou shalt not discriminate among signals except for very good reason'”. One ensures that intermediaries do not give preferential treatment to content without good reason, while the other sustains the privilege of a particular class of content — Canadian content (CanCon). Integrating cultural and telecom policy would be no easy feat, and would bring about some dramatic changes depending on what we wanted to prioritize.
CanCon requirements were recently lessened and focused for broadcasters, and there has been no extension of these obligations for online providers. The spectre of such a move (presented by critics as a ‘Netflix tax’) is something politicians have generally fought against in this country rather than championed. In the last federal election Stephen Harper presented himself as just a regular, Netflix-loving dude, and also the only thing standing between Canadians and higher monthly bills.
Under the Harper government, it was clear that the CRTC was powerless to go after Netflix even if they had wanted to. When the company defied the Commission in 2014, all Chairman Blais could do was act upset and offended, with Netflix effectively calling his formal authority a bluff. The Liberals have yet to fulfil the Harper prophecy of regulating Netflix, but Canadian media has recently been open to public consultation (and some quasi-public discussions), with our Heritage Minister declaring that “everything is on the table”.
In general, the government might choose to extend Canadian cultural policy online, or it might pull back the CanCon regime. We might even see a bit of both, but it definitely won’t extend to some sort of cultural firewall around a sovereign Canadian internet. The Government of Canada recognizes that “The way forward is not attempting to regulate content on the Internet”, and consultations are focused on how to support the production of Canadian culture in a “digital world”, with the real questions being who will benefit from this support, and how we are going to fund it.
Personally, I’m not opposed to public funding for culture, but I also don’t see it as a requirement in many cases. I value broadcasters like the CBC for helping me understand Canadian society, primarily through news and documentary rather than dramatic or comedy series (in that respect there isn’t much of CBC TV that I would be sad to see go, while there are still a number of good radio programs). Many kinds of art are not capital-intensive, and will be produced whether or not there are government programs in place. The fact that I came up in a thriving Canadian music culture operating largely independent of copyright or cultural funding no doubt shapes my thinking in this regard (a story I might share another time). The nightmare scenario for me is not about losing out to other cultural markets, fewer jobs in Canada’s cultural industries, or artists no longer being able to sustain careers as they once were. Culture is dynamic, and the most exciting forms come from below rather than top-down. Cultural protectionism makes the most sense if we think of culture as expensive mass media, individuals as consumers of culture, and U.S. cultural industries as a threat to Canada’s cultural sovereignty. But artists will make art everywhere, some of these artists will be Canadian, and we may or may not end up with some sense of Canadian identity as a result.
I am more worried about the possibility that living in an information-rich world will also mean being ignorant about local events, and no one being rewarded for answering the sorts of questions that powerful interests in this country would rather not hear asked. Perhaps a public broadcaster can be well-resourced and independent enough to play this role, but in an ideal world this wouldn’t just be the CBC’s responsibility. I rely on journalism and related media that tell me what is happening in the world in order to actively participate in democracy. I rely on it to do my job (which often involves classroom discussions of Canadian society). I can do without other kinds of CanCon.
I’m also in support of public funding for indigenous cultural programs. For most of Canada’s history the state has tried to eradicate indigenous culture, systematically resocializing children in residential schools, banning ceremonies, leaving behind broken communities and cultural dead-ends. The damage done by this cultural policy is hard to calculate but still ongoing. Its victims include young indigenous people who are unable to situate themselves in Canadian society because it does not speak to them, but also lack a cultural understanding of their own because it has been extinguished in previous generations. Some of this damage is irreversible, but in many cases knowledge, practices, culture can be recovered, preserved, and kept alive. The least a Canadian cultural policy could do is to try to address some of these wrongs and support efforts within First Nations communities to meet these cultural needs.
These are my opinions, so let the government know what you think before November 25. Otherwise, the voices that may be heard loudest are the ones that are most invested in the existing (and often declining) regime of cultural production.
I’ve been thinking about standards and telecom, or more specifically, the process of standardization. Recently, I read an article by Timmermans and Epstein that tries to advance a “Sociology of Standards and Standardization”. As the authors explain, there is a great deal of sociology that deals with standards in different domains, or as part of other processes, such as classification, quantification, and regulation. But “relatively few scholars analyze standards directly” (p. 74) — for instance by studying standardization as a social phenomenon.
Drawing significantly from Bowker and Star, Timmermans and Epstein define standardization as “a process of constructing uniformities across time and space, through the generation of agreed-upon rules… [making] things work together over distance or heterogeneous metrics” (p. 71). Standards can coordinate “people and things in ways that would be difficult to achieve on an ad hoc basis, they may allow communication between incompatible systems, and they may create specific kinds of mobility, uniformity, precision, objectivity, universality, and calculability” (p. 83). While we often aspire to (have) standards, the authors point out how standardization typically carries negative connotations of uniformity and “dull sameness” (p. 71). And yet, we only have to look to the internet to see the vast creativity and heterogeneity that has been enabled through standardization. Diverse networks, systems and devices can communicate with one another because they agree on basic standards and protocols. If the experiences we have through the internet are trending towards uniformity and sameness, this says more about the concentration of power in certain platforms, algorithms, and service providers than about standardization itself.
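To make this point concrete, here is a small illustrative sketch (my own, not from Timmermans and Epstein): Python’s standard-library HTTP server and client were written independently of one another, yet they interoperate because both implement the same IETF standard (HTTP/1.1). Any other client or server that honours the standard could be swapped in.

```python
# Toy demo: two independently written pieces of software interoperate
# only because both agree on a shared standard (HTTP/1.1, RFC 9112).
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello, standardized world"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def fetch_over_http():
    # Port 0 asks the OS for any free port, keeping the sketch self-contained.
    server = HTTPServer(("127.0.0.1", 0), HelloHandler)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
            return resp.read().decode()
    finally:
        server.shutdown()
```

Neither side “knows” anything about the other’s implementation; the agreed-upon rules of the protocol do all the coordinating work.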
Timmermans and Epstein’s article doesn’t discuss the internet, but scholars of internet governance have often focused on standards as the internet’s core. Inspired by Deleuze’s (1992) Postscript on the Societies of Control, Galloway’s (2004) Protocol grapples with the contradictions of internet standards being forms of power and control, while also facilitating autonomy, decentralization, and local decision-making. The protocols Galloway is interested in are the “standards governing the implementation of specific technologies” (p. 7), and he holds up DNS as the “most heroic of human projects” (p. 50). Galloway goes some way in advancing social theory on the basis of standards, painting a complex picture, but one in which the “full potential” (p. 122) of protocol is restricted and channelled by law, government, and corporate power. He ultimately envisions an even darker future in which open-source standards and TCP/IP are replaced by something more proprietary, under the control of either states or a corporation (which in 2004 was naturally assumed to be Microsoft).
While corporate and state power over internet policy has intensified in the intervening years, and old principles like end-to-end architecture sound increasingly idealistic, the internet’s established standards-making bodies continue their work, and often do so in the open. In general, internet standards are voluntary, and internet protocols work because networks agree to use them. The IETF acts as a key standards-making organization, where individuals (sometimes employees of rival companies) collaborate to develop new proposals for improving how the internet operates. Galloway (2004, p. 122) paints a picture of a “technocratic elite [that] toils away, mostly voluntarily, in an effort to hammer out solutions to advancements in technology”.
The technocrats of the IETF toiling away
Anyone can participate in this “technocratic elite”, and at the IETF your membership is defined by your participation. But because of the technical understanding required, membership tends to be limited to a particular social class. Meaningful participation also requires a time commitment, and so bodies like the IETF often see the greatest participation from individuals with employers who are willing to support their activities. Organizations can benefit from being part of the standards-making process, but participation in the IETF is on an individual basis (with individuals often disclosing their organizational affiliations).
The IETF produces RFCs, but it has no power to compel anyone to adopt these standards. Many are ignored, or (like IPv6, which Laura DeNardis wrote about in 2009) are only slowly implemented long after the problem they are meant to solve is well known. So the making of standards is just one aspect of standardization; achieving voluntary adoption is often a bigger challenge, particularly when things seem to work ‘good enough’ as they are.
While government agencies have relatively little impact on internet standards, they do produce various kinds of standards for service providers operating within their territory. These may be voluntary ‘best practices’ or backed by regulatory law. Sometimes companies ‘voluntarily’ standardize their conduct under threat of government regulation. Standardization can even be pursued through criminal law, as when previous Liberal and Conservative governments in Canada tried to pass lawful access legislation, standardizing surveillance and disclosure responsibilities for intermediaries. More recently, standardization has become a frequently suggested means of addressing cyber security, but I’ll save these topics for a subsequent post.