Blocking Bad Traffic

Canadian internet policy is currently dominated by discussions of blocking – what to block and how. Blocking copyright infringement has been debated in a number of forms in recent years, was implemented in a limited sense to deal with GoldTV, and is now the subject of a new Government consultation. Cabinet is also promising new measures to deal with illegal pornography, which may include blocking access (or building on existing measures, such as Cleanfeed). And then there is the CRTC’s proposal to mandate the blocking of cyber security threats – specifically botnets – though as a number of interveners (including the RCMP, CSE, and M3AAWG) have argued, the focus on botnets is too narrow and should be expanded, or at least kept flexible enough to adapt to future threats. Internationally, the focus has been on how social media companies moderate their platforms, including blocking mis/disinformation and taking down accounts.

The common thread here is the understanding that some categories of internet traffic generate significant harm, and that there are centralized points of control where this traffic can be filtered out. As documented in Telecom Tensions (forthcoming May 2021), many of these debates are more than twenty years old (copyright owners have been arguing that ISPs should be responsible for infringing traffic since the mid-1990s), but they regularly re-emerge with new methods, intermediaries, and targets. ISPs are often at the center of these controversies and participate as interested parties. Because they can exercise centralized control over their networks (through DPI, DNS, BGP, etc.), ISPs are where filtering is deemed most effective (mandating end-user controls is generally off-limits), which also means they are the most likely to be saddled with new responsibilities and liabilities. Canada already has many processes for filtering at the ISP level (particularly when it comes to cyber security), and the question is often how to better coordinate and standardize what ISPs are doing on their own. Incumbents and government agencies have pointed to the work of CSTAC and CTCP with regard to the botnet-blocking proposal, and the work of the CCAICE goes back to the mid-2000s.
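To make the DNS control point concrete: one common ISP-level technique (used in regimes like the GoldTV order) is for the ISP's resolver to check each queried domain against a blocklist before answering. The sketch below is purely illustrative – the domain names, blocklist entries, and records are hypothetical, not drawn from any actual order.

```python
# Hypothetical sketch of DNS-level blocking at an ISP resolver.
# All names and addresses are illustrative (RFC 5737 example ranges).

BLOCKLIST = {"pirate-streams.example", "botnet-c2.example"}  # assumed entries

def resolve(domain, dns_records):
    """Return the IP for `domain`, or None if it is blocked or unknown."""
    # Check the exact domain and every parent domain, since a blocklist
    # entry is typically meant to cover subdomains as well.
    labels = domain.lower().rstrip(".").split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return None  # blocked: the resolver refuses to answer
    return dns_records.get(domain.lower())

records = {"news.example": "192.0.2.10",
           "pirate-streams.example": "198.51.100.7"}
print(resolve("news.example", records))                # resolves normally
print(resolve("cdn.pirate-streams.example", records))  # caught via parent domain
```

Note how cheap this is for the ISP – a set lookup per query – which is part of why DNS blocking is the mechanism courts and regulators reach for first, even though it is trivially circumvented by switching to a third-party resolver.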

Network neutrality or the “open internet” is often implicated in these discussions, since blocking traffic generally contradicts the principle that users should be able to connect to whatever they want (or that ISPs should not discriminate among traffic). Those in favor of blocking typically point to the illegality of specific kinds of traffic as justification, even where they have significant financial interests in blocking it (intellectual property, gambling revenues, bandwidth recovery). Opposition to blocking diminishes the more universally the traffic is recognized to be ‘bad’, which is why the earliest blocking regimes focused on child abuse imagery and spam. Botnets are an example where the users of ‘infected’ devices may not be aware that they are sending and receiving such traffic, which raises questions of personal autonomy and responsibility for our devices (the Australian iCode system involved ISPs notifying users who were infected, placing the responsibility on users to take action). Even when there is broad agreement that a category of traffic is undesirable and should be stopped, there are always questions about effectiveness and overblocking (some bad traffic will get through, and some legitimate traffic will be blocked inadvertently). Finally, debates over internet filtering are inherently also debates over internet surveillance – to discriminate between categories of traffic, ISPs and other actors need to monitor flows and have some sense of their contents.
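The overblocking problem has a simple structural cause worth spelling out: many unrelated websites commonly share a single IP address (shared hosting, CDNs), so blocking at the IP level to stop one bad site takes down its neighbours too. The sketch below uses entirely hypothetical domains and addresses to illustrate the mechanics.

```python
# Illustrative sketch (hypothetical data) of IP-level overblocking:
# unrelated sites on a shared host are caught when one tenant's IP is filtered.

hosting = {
    "bad-site.example": "203.0.113.5",
    "local-charity.example": "203.0.113.5",   # same shared host
    "small-business.example": "203.0.113.5",  # same shared host
    "news.example": "192.0.2.10",
}

def collateral_damage(target, hosting_map):
    """Domains inadvertently blocked when the target's IP is filtered."""
    ip = hosting_map[target]
    return {d for d, addr in hosting_map.items() if addr == ip} - {target}

print(sorted(collateral_damage("bad-site.example", hosting)))
```

This is one reason blocking orders tend to specify domains rather than bare IPs where possible, and why effectiveness and collateral damage are assessed together rather than separately.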

Underlying all of this are two different views of what an ISP is and should be: whether our networks should be “dumb pipes”, limited to being just a conduit for packets as imagined in the end-to-end internet, or whether the pipes should be “smart” and exercise agency over traffic. The smart pipe’s “intelligence is rooted in its ability to inspect and discriminate between traffic, to decide what should be permitted, prioritized, or cleaned”. Clearly, the current round of proposals envisions even broader roles for intermediaries in governing traffic – what I describe as the process of intermediation. However, the specifics in each case matter, involving different business interests and public consequences (including unintended consequences). Any action needs to be informed by an understanding of the actors and processes already in place to deal with bad traffic, previous experience with filtering elsewhere, and the harms and risks of traffic discrimination. We should be mindful that some actors will always try to mandate blocking to safeguard their narrow interests. Political actors will also recurrently feel the need to ‘do something’ about internet threats, and imposing additional surveillance and filtering responsibilities on intermediaries often seems like the easiest solution. Policies for copyright filtering, pornography, and child abuse imagery have a long history in internet governance, but cyber security remains in many ways a frontier of internet policy – one worth watching closely given the expansive possibilities for what might be considered a cyber threat in the future.