Blog

Canada’s cyber security and the changing threat landscape

My article, “Canada’s cyber security and the changing threat landscape”, has just been published online by Critical Studies on Security.

Broadly, it grapples with what cyber security has come to mean in the Canadian context. The article deals partly with Canada’s Cyber Security Strategy, the operations of the Canadian Cyber Incident Response Centre (CCIRC) between 2011 and 2013 (a time of great concern over hacktivism [Anonymous] and Advanced Persistent Threats [China]), and what we can say about Canada’s cyber security orientation in the “post-Snowden era”. It is based on publicly-available texts and several years of Access to Information requests (the requests were informal, for documents already released to other people, giving me several thousand pages to work with).

What is cyber security, and why should we care?

Cyber security emerged from a narrow set of concerns around safeguarding information and networks, but in recent years it has become intimately tied to foreign and domestic political objectives. This means that cyber security cannot be defined and delimited in the same way as the field of information security (as protecting the confidentiality, integrity, and availability of information). Instead, cyber security is a collective endeavor, typically tied to the larger project of national security, but also encompassing a broader set of social and ethical concerns. This is why hateful messages sent by teens are now treated as a cyber security problem, while Canada’s government fails to acknowledge the international cyber threat posed by its foreign allies.

One of the key effects of cyber security strategies and classifications is that they specify the boundaries of what is to be secured. As the line between ‘cyber’ and ‘non-cyber’ continues to blur, the scope of cyber security’s concerns can expand to cover new kinds of threats. If it is true, as the opening of Canada’s Cyber Security Strategy 2010 declares, that our “personal and professional lives have gone digital”, that we now “live, work, and play in cyberspace”, then cyberspace is not just a new domain to be secured, but a fundamental part of our lived reality. This means that it is now possible to conceive of cyber threats as existential threats of the highest order, but also that the project of cyber security will have deepening implications for our daily lives. Some of these implications can only be discussed by referencing the work of security professionals – work which typically takes place out of public view.

Operational and Technocratic Discourse

My article began as a work of discourse analysis, but over time I turned increasingly to international relations (IR) and what has been called the “Paris School” of security studies. I found that previous analyses of cyber security discourse, influenced by the Copenhagen School, focused largely on public discourse, and how political actors work to get cyber security on the political agenda (as a response to new, existential threats). The Paris School, meanwhile, emphasizes that new security issues can arise and be defined in the hidden world of security professionals and their technocratic practices. The volumes of internal threat reports, alerts, and government emails accessible through Access to Information became a rich source for this technocratic and operational discourse, providing a sense of how the moving parts of cyber security fit together in practice.

Hacktivism

Hacktivism is an interesting threat category to consider because, at least in Canada, it has never been subject to visible politicization. Unlike cyberbullying, no new laws have been proposed to deal with hacktivists, and public officials have avoided referencing the threat in their public proclamations. The Government seems more willing to deal with hacktivism quietly than to engage in a public fight against Anonymous, or to publicly condemn tactics that some see as a legitimate form of protest.

Nevertheless, hacktivism has become a major preoccupation for Canadian security agencies, as is evident from volumes of operational discourse, including detailed reports on and responses to hacktivist campaigns. Where cyberbullying can be reduced to a problem of ethical conduct, common forms of hacktivism such as distributed denial of service (DDoS) reduce to a technical problem. A DDoS attack becomes hacktivism by virtue of its political motivation, and not its methods. While DDoS actions have typically been handled by CCIRC and CSE’s Cyber Threat Evaluation Centre (CTEC) as individual incidents, the operational threat category of hacktivism makes these events legible as part of a larger and pathological social trend, and the growing concern with hacktivism since 2010 indicates cyber security’s opposition to disruptive forms of online activism and politically-motivated hacking.

Advanced Persistent Threats (APTs)

As actors define and redefine cyber security’s terminology, they produce new conceptions, repurpose old ones, and experiment with metaphors. Sometimes, a term becomes a prolific ‘buzzword’, securing regular usage in cyber security discourse, and also inevitably becoming a point of contention. One of the best recent examples is the Advanced Persistent Threat (APT). This is the threat category that best represents cyber security’s oblique treatment of international affairs and the new strategic stakes of cyber security. Where hacktivism is the intersection of cyber security and protest in operational discourse, APTs bring cyber security into opposition against state actors. The term usually refers to a well-resourced threat actor willing to devote considerable effort to compromise a particular target, and is often understood to mean a state-backed attacker – sometimes becoming simply a shorthand for “China”.

In tracing the emergence and proliferation of this new threat category, it is possible to get some sense of the multiple constituents and channels of cyber security discourse. In this case, a category emerged in the operational discourse of the US military, spread rapidly through the North American security industry, and was adopted for internal use by CCIRC in the aftermath of a major security breach in 2011. Along the way it was used to classify a growing number of intrusions and data breaches, sell security products and services, and make intelligible a world of online geopolitical contestation. APTs could be invoked to specify a threat, while eliding the attribution problem and preserving nominal ambiguity in the international political arena. For CCIRC, APTs became an operational threat category at a time when Chinese hackers were widely suspected of compromising Canadian government systems, and the term proliferated into public discourse through Mandiant’s reporting of Chinese cyber espionage in 2013. Not long after, the Snowden disclosures had a dramatic impact on how we understand and talk about cyber security.

After Snowden

One of the most important revelations of the Snowden documents has been that the project of cyber security (at least as interpreted by signals intelligence agencies like NSA, GCHQ and CSE) can include compromising the very digital infrastructure it is tasked to protect. Domestic cyber security programs can become an “advanced persistent threat” – a term once reserved for foreign hackers. Given these developments, it is worthwhile to reflect on how the governmental project of cyber security has evolved in recent years, and what cyber security has come to mean. This is particularly important in Canada, a country closely implicated in US cyber security efforts, but where post-Snowden commentary has made comparatively little impact.

The lack of visible concern by Canada’s government about the security threat posed by its closest allies (a threat that Canada has apparently facilitated) speaks to how foreign policy shapes the nation’s cyber security priorities. It also sends the dangerous message that while Canada is unable to clearly define a vision of what it is trying to secure, cyber security is somehow compatible with pervasive surveillance and widespread hacking.

State cyber security agencies work to guard us from new threats, but seem blind to the possibility that they or their partners might also threaten our security. To paraphrase Google’s chairman, an attack is an attack, whether it comes from China or the NSA. For Canada’s CSE and the other Five Eyes members, the equivalence may not be as clear. If cyber security is subordinated to national security interests and compatible with government hacking, then threats will continue to be defined very differently by those inside and outside government. In addition to a broadening scope for cyber security’s concerns, the current trend is one of growing division between government cyber security efforts and more clearly circumscribed approaches to information security by private companies and civil society.

The idea that cyber security can be compatible with hacking domestic companies and maintaining vulnerabilities in commonly-used technologies might be seen as a continuation of the exceptional measures justified by 9/11. But more fundamentally, it reflects the technocratic imperatives of agencies tasked with gaining and maintaining access to communications infrastructure. The Five Eyes’ objectives go far beyond countering terrorism, and surreptitious access to communications infrastructure is increasingly part of the larger cyber security project. This dangerous vision of cyber security has evolved in secret, establishing procedures for who can be targeted, what can be collected, and where compromising security might help to make us safer. We did not learn of these measures through visible political discourse or securitizing rhetoric (the traditional focus of the Copenhagen School), but through operational documents and presentation slides from closed meetings of security professionals.

Measuring Canada’s Internet

For most people, internet performance is a mystery. Many subscribers do not even know the level of bandwidth they are paying for, let alone how to test if they are actually receiving the sorts of speeds their ISP advertises. Canadian regulators have often been in the dark as well, which is a problem when their decisions are supposed to take the availability and geographic distribution of broadband into account.

Regulators have traditionally depended on information provided by industry as a basis for policy decisions, but this information can be inaccurate or incomplete. There are ample cases in the US and Canada where certain regions have been listed as having access to a certain level of broadband, or choice of ISPs, whereas the reality on the ground has been far less than what is supposedly available. This problem is not unknown to regulators. Network BC, working with Industry Canada and the CRTC, launched its broadband mapping initiative in 2014. This included consultations with the various ISPs spread across the province to determine what services were actually available in which locations, resulting in an interactive connectivity map. Industry Canada watched the efforts in BC closely, and is currently soliciting information from ISPs to carry out a national broadband availability mapping project. However, such efforts do not include any independent means of actually measuring internet performance in these areas.

Up until now, the go-to source for Canadian internet performance surveys that utilize a third-party service (one that doesn’t rely on ISPs for information) has been averages from Ookla’s speedtest.net (see here and here), which is the same service typically used by individuals to see how their internet connections measure up. But the results are not really meant to be a basis for policy decisions, since the averages are not pulled from a representative sample, and the (mean) speeds are often higher than what is available to a “typical” internet subscriber.
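The gap between a mean and a “typical” experience is easy to demonstrate. Here is a minimal sketch, using invented numbers: when a handful of very fast connections sit in a sample, the mean lands well above what most subscribers actually get, which is one reason medians or percentiles are a sounder basis for policy claims.

```python
# Toy illustration (invented numbers): a few fast fibre connections
# pull the mean well above the typical subscriber's speed.
import statistics

# Hypothetical measured download speeds in Mbps: mostly mid-tier
# plans, plus two very fast connections.
speeds_mbps = [5, 6, 6, 7, 8, 10, 12, 15, 150, 300]

print(f"mean:   {statistics.mean(speeds_mbps):.1f} Mbps")    # 51.9 Mbps
print(f"median: {statistics.median(speeds_mbps):.1f} Mbps")  # 9.0 Mbps
```

In this toy sample, the mean suggests a 50 Mbps country while eight of the ten subscribers get 15 Mbps or less.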

The big news in recent weeks has been the entry of new players in the internet metrics game. First, CIRA kicked off its own broadband mapping effort, which anyone can participate in and provide information to (an appropriate browser/OS combo may be required to participate). The map is very much a work-in-progress, which will fill out as individuals add more data points, and as new features and methods are added. Not long after, the CRTC announced its own internet measuring initiative. This is new territory for the CRTC, which has never had much of an ability to independently investigate or collect data about the telecom industry it regulates. However, the plan has been in the works since at least 2013, and may be based on the FCC’s Measuring Broadband America project, which has been underway since 2011. As in the US (along with Europe, Brazil, Singapore, and other nations), the CRTC’s program depends on the use of the SamKnows “whiteboxes” deployed at participating locations (the CRTC is currently looking for volunteers to receive and set up the devices). These devices measure connectivity between the subscriber’s premises and major connection points between ISPs.

There are a number of concerns (see here and here) with the CRTC’s efforts. ISPs could try to “game” the metrics to make their networks’ performance appear better (ISPs know which of their subscribers have the boxes, since they use this information to make sure the testing doesn’t contribute to a subscriber’s data cap). SamKnows might only measure internet performance in off-peak hours, when connectivity is less likely to be a problem, since the boxes are intended to operate when subscribers aren’t making full use of their bandwidth (on another page, the CRTC has gone even further, saying the information will be gathered “when users are not connected”). Not all ISPs are participating in the program, raising the concern that the smaller players and rural areas most disadvantaged in terms of connectivity are being left out. This last point relates to the importance of having a representative sample, which is a fundamental precondition for any survey that attempts to calculate meaningful (or generalizable) statistics. All of the above can be addressed with a properly designed methodology, full transparency about these methods, and careful qualification of the results. Here, the CRTC has plenty of international examples to draw from, and SamKnows has built its business around such openness, but we will have to wait for more details to weigh in on whether this particular partnership has done a good job.
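To see why the off-peak concern matters, consider a rough sketch of the kind of idle-detection logic at issue. Everything here (function names, the threshold, the polling interval) is hypothetical, not SamKnows’ actual implementation: a tester that defers measurement until the line looks idle will, by construction, tend to report conditions when the network is under the least strain.

```python
# Hypothetical sketch of whitebox-style scheduling: defer the test
# until the subscriber's own traffic drops below a threshold.
import time

CROSS_TRAFFIC_THRESHOLD_MBPS = 0.5  # assumed cut-off for "idle"


def current_cross_traffic_mbps() -> float:
    """Placeholder: a real box would sample the router's byte
    counters twice and convert the delta to Mbps."""
    return 0.1  # stubbed out for illustration


def run_speed_test() -> None:
    """Placeholder for the actual download/upload measurement."""
    print("running measurement...")


def measure_when_idle(max_wait_s: int = 3600) -> None:
    # Poll until the line looks idle, then measure. The side effect,
    # as noted above, is that results describe quiet-hour conditions.
    deadline = time.monotonic() + max_wait_s
    while time.monotonic() < deadline:
        if current_cross_traffic_mbps() < CROSS_TRAFFIC_THRESHOLD_MBPS:
            run_speed_test()
            return
        time.sleep(60)  # line busy; try again in a minute


measure_when_idle()
```

The design choice is defensible (testing over a busy line would understate the connection’s capacity), but it means the published numbers need careful qualification about when they were gathered.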

Finally, it is important to realize that no test can ever truly gauge the speed of “the internet” from a given location. Typically, the best that can be achieved is a measurement from a subscriber’s home to a “major internet gateway”, where an ISP connects to the rest of the world. The ISP has no control over how fast the rest of the world’s internet is, and limited control over the performance of services that aren’t hosted on its network. Even the fastest gigabit networks are no faster than their connections to services “upstream,” like Netflix – a problem the FCC had to contend with as it tried to measure the performance of ISPs that were engaged in peering disputes that limited their connections to the streaming service.
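To make concrete what such a measurement involves, here is a minimal sketch of the basic operation behind any speed test: download a payload of known size from a single server and divide by the elapsed time. The URL is a placeholder, and real tests add parallel connections, warm-up periods, and latency measurements; the point is that the result describes the path to that one server, not “the internet”.

```python
# Minimal throughput sketch: fetch a test file and compute Mbps.
# The URL below is a placeholder; substitute any large test payload.
import time
import urllib.request

TEST_URL = "https://example.com/100MB.bin"  # hypothetical test file


def download_mbps(url: str) -> float:
    start = time.monotonic()
    total_bytes = 0
    with urllib.request.urlopen(url) as resp:
        # Read in 64 KiB chunks so we count bytes as they arrive.
        while chunk := resp.read(64 * 1024):
            total_bytes += len(chunk)
    elapsed = time.monotonic() - start
    return (total_bytes * 8) / (elapsed * 1_000_000)  # bits -> Mbps


print(f"{download_mbps(TEST_URL):.1f} Mbps to {TEST_URL}")
```

Run against a server inside the ISP’s network, this measures the access link; run against a distant or congested server, it measures the whole path, peering disputes included.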

Ultimately, all of this indicates a broader trend towards data gathering to erase some of the mystery about how the internet actually “performs”. For individuals, these are welcome steps towards becoming better informed about what one’s ISP actually provides, but also about what goes into determining internet speed or performance in the first place. For regulators, accurate and comprehensive information is a precondition for effective public policy, and it’s great to see Industry Canada and the CRTC taking steps to refine the picture they have of Canadian connectivity as they come to decide important questions about the future of Canada’s internet.

Positive and Negative Responsibilities for Internet Intermediaries

I’m interested in the responsibilities of various “internet intermediaries”. These might be internet service providers (ISPs), online service providers (like Google or Netflix), or increasingly, some combination of the two functions under the same organizational umbrella.

Regulations require these intermediaries to do certain things and avoid doing others. Child pornography or material that infringes copyright must be taken down, but personal communications or online behaviours cannot be tracked without consent and a valid reason. Certain protocols might be throttled where necessary for “network management”, but otherwise ISPs should not discriminate between packets. It strikes me that these responsibilities – duties to intervene and duties not to intervene – can be likened to the idea of positive and negative rights or duties in philosophy, where positive rights oblige action, and negative rights oblige inaction.

If notified of the presence of illicit content, a host must take action or face sanctions. This is a positive responsibility to intervene given certain conditions. Privacy protections and net-neutrality regulations are often negative responsibilities, in that they prevent the intermediary from monitoring, collecting, or discriminating between data flows.

However, as with positive and negative rights, it is not always easy to tease the two apart. Negative responsibilities can have a positive component, and the two are often bundled together. For example, the positive duty to install a court-ordered wiretap is typically tied to the negative duty of not informing the wiretap’s target. Non-discrimination is a negative responsibility, but US ISPs have been accused of discriminating against Netflix by not upgrading links to handle the traffic coming from the video service. Under this logic, an ISP has a positive responsibility to ensure its customers have adequate access to Netflix. Anything less amounts to discrimination against Netflix. In Canada, ISPs also have a negative responsibility not to discriminate against video services like Netflix, particularly since Netflix competes with incumbent ISPs’ own video offerings. However, the Canadian regulatory regime seems to be headed towards imposing the positive responsibility on these ISPs to make their own video services available through other providers under equal terms, under the reasoning that equal treatment and exclusivity cannot coexist.

I think the distinction between positive and negative responsibilities can be useful, particularly since the majority of the academic literature about internet intermediaries has emphasized their positive responsibilities. There has been less discussion of all the things that intermediaries could be doing with our traffic and data, but which they choose not to, or are constrained from doing.

On Cyberspace

When William Gibson coined “cyberspace” in the early 1980s, he was primarily interested in coming up with an exciting setting for science fiction, and one with a cool-sounding name. As he has told the story in numerous interviews, Gibson came across a Vancouver arcade one day and was struck by the intensity with which the gamers engaged with the screen, leaning ever closer as if they were trying to push through it to a world on the other side. He wanted to imagine what that world was like – to explore the “notional space” inside the computer. These days, Gibson has mixed feelings about the term he coined. In 2007 he was reported to have announced the demise of ‘cyber’ talk, and he has joined many others in pointing out how unhelpful it was to think about cyberspace as some separate, virtual realm.

And yet, cyber talk keeps proliferating. Cyberspace has become a bloated, rudderless place-holder of a word. It means less and less every day, as it expands to encompass more and more. As the world fills up with networked computers, cyberspace is suddenly everywhere. Militaries have started slapping the ‘cyber’ label onto practices that fifty years ago had other names, like signals intelligence and electronic warfare. Now these are all ‘cyber operations’ and the domain of operations is cyberspace. In 2010 Canada’s government put forward a rather 1980s Gibsonian definition of cyberspace, and went about trying to secure it.

William Gibson is not the only one trying to helpfully remind people that cyberspace does not actually exist – that this is a word he invented to fill a storytelling need, which then took on a life of its own. Other writers have also been trying to get past the virtual, and point to the material. In the 1990s and 2000s, ‘cyber-utopians’ imagined they would have the freedom to build a new world in cyberspace. Some still do, but a realist backlash (of which Evgeny Morozov is the prime example) has reminded us that utopias can be dangerous, and that cyberspace is not somewhere we can go to escape power and exploitation. Our networks are material; they exist in governed territories; they must contend with states and other sovereigns.

My ongoing work is certainly an attempt to help ground internet studies in a material dimension, but I am struck by a vision similar to what Gibson saw in those kids in that arcade. These days, if you want to see someone getting immersed in a screen, you can likely just look across the room or out the window. Few of us imagine that we are somehow ‘in cyberspace’ when we hold the screen up to our face, and yet there is a world behind that screen. This world is largely invisible, sometimes secret, and usually hard to understand. It is a world of cables and switches, companies handing packets to one another on privately-agreed terms, while regulators and assorted security agents work to produce some sort of order.

Like a bad hangover from the 1980s and 90s, cyberspace persists in jargon and a great deal of government and academic discourse. One of the reasons is the difficulty of finding an adequate catch-all replacement. ‘The internet’ can be even more nebulous than cyberspace, and ‘online’ tends to be used as an adjective. At the present moment, it is more helpful to turn away from talk of virtual worlds, and focus on the material one we all have to contend with.