Lawful Access Consultation 2016

Another federal government consultation has recently wrapped up, this time with Public Safety asking about national security. Like other ongoing consultations, this one was criticized (for example, by Christopher Parsons and Tamir Israel) for framing the policy issue in a way that the government prefers, and for trying to legitimate ideas that should have been discredited by now. I would say that the consultation framed the issue very much as Public Safety (for instance, the RCMP) would prefer, repeating old rationales and seeing the world from a perspective where the ability to exercise sovereign will over information flows is paramount. The Green Paper provided as background reading foregrounds the concerns of law enforcement and security agencies, is peppered with the words “must” and “should”, and advances some dubious assumptions. Public Safety asked for feedback on terrorism-related provisions (including C-51), oversight, intelligence as evidence, and lawful access. The last of these has seen a number of previous consultations, but is back in the news as police make their case about “going dark” (which has become part of the RCMP’s “new public narrative” for a set of concerns that were once broadly discussed as lawful access).

I let this one get away from me, so I didn’t have anything ready for Dec. 15 when the online submission closed. Regardless, I’ve decided to answer most of the questions related to the topic of Investigative Capabilities in a Digital World as a blog post. I don’t feel particularly bad for missing the deadline, since several of these questions border on the ridiculous. For a true public consultation on what has long been a very contentious issue, it would be important for the questions to be informed by the arguments on both sides. Privacy experts would have asked very different questions about privacy and state power, and on a number of topics Public Safety seems to be trying to avoid mentioning the specific policies that are at stake here.

How can the Government address challenges to law enforcement and national security investigations posed by the evolving technological landscape in a manner that is consistent with Canadian values, including respect for privacy, provision of security and the protection of economic interests?

When I think of Canadian values, “privacy, provision of security and the protection of economic interests” are not what come to mind. When I ask my students what they associate with Canada, these particular values have never come up in an answer. I think we should consider democracy as a fundamental value, and understand that state secrecy is antithetical to democracy. When it comes to the relationship between citizens and the state, Canadian values are enshrined in the Charter, and the Supreme Court is ultimately responsible for interpreting what is consistent with the Charter. Therefore, Canadians deserve to understand what is being done in their name if we are to have a meaningful democracy, and this includes the existence of an informed, independent judiciary to decide what government actions are consistent with Canadian values.

In the physical world, if the police obtain a search warrant from a judge to enter your home to conduct an investigation, they are authorized to access your home. Should investigative agencies operate any differently in the digital world?

If we accept the digital/physical distinction, the answer is a definite yes — investigations carried out today operate differently than they did in the simpler, more “physical” 1980s. But it is important to keep in mind that analogies between the digital and physical environment can be misleading and dangerous. When it comes to the “digital world”, I prefer to talk about it in digital terms. The stakes are different, as are the meaning of terms like “to enter”. If we must make these comparisons, here is what treating these two “worlds” as analogous would mean:
The police can enter my home with authorization, and seize my computer with authorization. I am not required to make my computer insecure enough for the police to easily access, just as I am not required to keep my home insecure enough for the police to easily access. I am not required to help the police with a search of my home, and so I should not be required to help police search my computer. If I have a safe with a combination lock in my home, I cannot be compelled by police to divulge the combination, so by analogy, I should not be compelled to divulge a password for an encrypted disk.

But analogies can only take us so far. A computer is not a home. Metadata is not like the address on a physical envelope. We need to understand digital information in its own terms. To that end, some of the more specific questions found further in this consultation can produce more helpful answers. Before we get to these however, this consultation requires me to answer a couple more questions based on the presumption of digital dualism.

This question is hard to answer without knowing what it means to “update these tools”, and seems to be intended to produce a “yes” response to a vague statement. Once again, digital/physical comparisons confuse more than they clarify — these are not separate worlds when we are talking about production orders and mandating the installation of hardware. We can talk about these topics in their own terms, and take up these topics one at a time (see further below).

If we could only get at the bad guys in the digital world, but there’s all this code in the way!

Is your expectation of privacy different in the digital world than in the physical world?

My answer to this question has to be both yes and no.

No, because I fundamentally reject the notion that these are separate worlds. I do not somehow enter the “digital world” when I check my phone messages, or when I interact with the many digitally-networked physical devices that are part of my lived reality. Privacy law should not be based on trying to find a digital equivalent for the trunk of a car, because no such thing exists.

Yes, expectations of privacy differ when it comes to “informational privacy” (the language of Spencer), because the privacy implications of digital information need to be considered in their own terms. Governments and public servants do Canadians a disservice with phonebook analogies, license plate analogies, or when they hold up envelopes to explain how unconcerned we should be about government access to metadata (all recurring arguments in the surveillance/privacy debate). In many cases, the privacy implications of access to digital information are much more significant than anything we could imagine in a world without digital networks and databases of our digital records.

Basic Subscriber Information (BSI)

As the Green Paper states, nothing in the Spencer decision prevents access to BSI in emergencies, so throwing exigent circumstances into the question confuses the issue, and once again seems designed to elicit a particular response that would be favorable to police and security agencies. In the other examples, “timely and efficient” is the problem. Agencies understandably want quicker and easier access to personal information. The Spencer decision has made this access more difficult, but any new law would still ultimately have to contend with Spencer. Government, police, and security agencies seem to be in a state of denial over this, but barring another Supreme Court decision there is no going back to a world where the disclosure of “basic” metadata avoids section 8 of the Charter, or where private companies can voluntarily hand over various kinds of personal information to police without fear of liability.

If the process of getting a court order is more onerous than police would like, because it would be easier to carry out preliminary investigations under a lesser standard, it is not the job of government to find ways to circumvent the courts. If the process takes too long, there are ways to grant the police or the courts more resources to make it more efficient.

There are ways to improve the ability of police to access metadata without violating the Charter, but any changes to the existing disclosure regime need to be accompanied by robust accountability mechanisms. Previous lawful access legislation (Bill C-30) was flawed, but it at least included such accountability measures. In their absence, we only know that in a pre-Spencer world, police and government agencies sought access to Canadian personal information well over a million times a year without a court order, and that a single court order can lead to the secret disclosure of personal information about thousands of Canadians. Police and security agencies have consistently advocated for these powers, but failed to document and disclose how they actually use them. This needs to change, and the fear of disclosing investigative techniques cannot be used to prevent an informed discussion about the appropriateness of these techniques in a democratic society.
Do you consider your basic identifying information identified through BSI (such as name, home address, phone number and email address) to be as private as the contents of your emails? your personal diary? your financial records? your medical records? Why or why not?

The answer to this question depends on an exhaustive list of what counts as BSI. It is important to have a clear definition of what counts as BSI, because otherwise we might be back in the pre-Spencer position where police are able to gain warrantless access to somebody’s password using powers that were meant for “basic identifying information”.

The answer also depends on an explanation of what is done with this “basic” information. As was recognized in Spencer, we can no longer consider the privacy impact of a piece of personal information in isolation. This is how lawful access advocates prefer to frame the question, but it is not how investigations work in practice. BSI is useful only in combination with other information, and if we are talking about metadata (a term that, curiously, never appears in the Green Paper), it is now increasingly understood that metadata can be far more revealing than the content of a personal communication when it is used to identify people in large datasets, determine relationships between individuals, and reveal patterns of life.

So in short, yes — I am very concerned about BSI disclosures, particularly when I don’t know what counts as BSI, and what is being done with this information.

Do you see a difference between the police having access to your name, home address and phone number, and the police having access to your Internet address, such as your IP address or email address?

I see an enormous difference. As previously discussed, these are not analogous. An IP address is not where you “live” on the internet — it is an identifier that marks interactions carried out through a specific device.

Interception Capability

This is not a question… but yes, all of this is true.

Should Canada’s laws help to ensure that consistent interception capabilities are available through domestic communications service provider networks when a court order authorizing interception is granted by the courts?

The key word here is “consistent”, and the question of what standard will be required. It would be very easy for government to impose a standard that large telecom incumbents could meet, but which would be impossible for smaller intermediaries. As things stand, the incumbents handle the vast majority of court orders, so I would love to see some recent statistics on problems with ‘less consistent’ intermediaries, particularly if this is a law that might put them out of business.

Encryption

I think the answer to this has to be never. People cannot be forced to divulge their passwords — in our society they can only be put in prison for very long periods of time. In other cases, assisting with decryption means forcing Apple to break through their own security (which was meant to keep even Apple out), or driving companies out of business unless they make products with weak security. This does not work in a world where a single individual can create an encryption app.

How can law enforcement and national security agencies reduce the effectiveness of encryption for individuals and organizations involved in crime or threats to the security of Canada, yet not limit the beneficial uses of encryption by those not involved in illegal activities?

By doing anything other than mandating insecurity for everyone. The answer cannot be to make technology insecure enough for the state to exploit, because this makes everyone insecure, except for those who use good encryption (which has become too commonplace to stamp out).

The final two questions deal with data retention, a topic I’ll leave for a later time…

Telecom Companies as Privacy Custodians (Rogers and Telus tower dumps)

Yesterday, Justice Sproat of the Ontario Superior Court released a decision in a case involving Rogers, TELUS, and the Peel Regional Police. Back in 2014, the police force had requested “tower dump” data from these companies in order to identify some robbery suspects. The orders were so broad (the broadest ever, to the knowledge of the TELUS deponent) that the telecom companies opposed them in court. Despite the fact that the production orders were then withdrawn by police, the judge heard the case anyhow, and was able to offer guidance for police and telecom companies dealing with similar cases in the future.

David Fraser has provided a legal analysis of the decision, which found that “the Production Orders were overly broad and that they infringed s. 8 of the Charter” [42]. For me the most interesting aspects are what this decision tells us about the roles and responsibilities of intermediaries as privacy custodians. The decision states (on the issue of whether the companies have standing in the case) that Rogers and TELUS “are contractually obligated” to “assert the privacy interests of their subscribers” [38]. That is to say, the relationship these companies have with their customers creates obligations to protect subscriber information, and this protection includes defending subscribers against unconstitutional court orders. It is not reasonable to expect individual subscribers to defend their privacy interests in such cases — the intermediary should stand between the individual and the state as a privacy custodian (and this means making determinations about which police requests and court orders are unconstitutional).

Also of particular interest is the judge’s recommendation that police should request “a report based on specified data instead of a request for the underlying data itself”, unless this “underlying data” is required for some reason [65]. This means that instead of asking companies such as Rogers and TELUS for the personal information of tens of thousands of subscribers, so that the police can determine which subscribers to investigate further (presumably those in the proximity of more than one crime scene), the telecom companies could do this work themselves, and disclose only the information of subscribers that meet particular criteria. In effect, this type of practice would require and entrust intermediaries to do as much of the initial investigatory work as possible, handing over only the information that police need to proceed further. This particular guideline is meant to limit the privacy impact of such disclosures, since the judge notes that personal information in the hands of police can be vulnerable to being “hacked” [20], and that police in possession of such data are not subject to conditions on data retention [59-60].
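To make the judge’s suggestion concrete, here is a minimal sketch of the kind of “report based on specified data” a telecom company might produce internally: given several tower dumps, it discloses only the subscribers seen near more than one crime scene, rather than every record. The subscriber IDs, the data structures, and the two-scene threshold are all hypothetical assumptions for illustration, not anything specified in the decision.

```python
from collections import Counter

def subscribers_meeting_criteria(tower_dumps, min_scenes=2):
    """Return only the subscriber IDs appearing in at least `min_scenes`
    of the tower dumps, instead of disclosing every underlying record."""
    counts = Counter()
    for dump in tower_dumps:
        # Count each subscriber once per dump, even if they generated
        # multiple records at that tower.
        counts.update(set(dump))
    return sorted(s for s, n in counts.items() if n >= min_scenes)

# Hypothetical dumps: subscriber IDs seen near each of three robberies.
dumps = [
    {"sub_1001", "sub_1002", "sub_1003"},
    {"sub_1002", "sub_1004"},
    {"sub_1002", "sub_1003", "sub_1005"},
]
print(subscribers_meeting_criteria(dumps))  # → ['sub_1002', 'sub_1003']
```

The privacy point is in the return value: the tens of thousands of one-scene records never leave the company’s custody.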

For me, the unanswered question is: why Rogers and TELUS? There are larger players than TELUS in Ontario, but this is a company that has pushed back before against such overreach. If the police had no idea who the suspects or their mobile providers were, did they obtain production orders for all mobile providers, and only Rogers and TELUS pushed back? If so, did other companies fail their customers as privacy custodians by not opposing such orders?

Copyright trolls and online identification

My previous post dealt with copyright surveillance and algorithmic judgement, and here I want to focus on a particular kind of copyright surveillance and enforcement that has achieved a special sort of notoriety in recent years: copyright trolling.

Some of this is based on my most recent article, The Copyright Surveillance Industry, which appears in the open-access journal Media and Communication. I’m also working on a future piece that deals with copyright enforcement, privacy, and how IP addresses and persons become linked.

Why this matters

First, copyright trolling is having an enormous impact, with hundreds of thousands of defendants named in US and German lawsuits in just a few years. Precedent-setting cases in other countries (such as Australia and Canada) have been determining whether this practice (sometimes called “speculative invoicing”) can spread into new jurisdictions. Some legal scholars have described copyright trolling as a “blight”, an abuse of the legal system, or a kind of “legal ransom”. Defendants must choose whether to pay what the troll demands, or face the prospect of an expensive (and sometimes embarrassing) legal fight. Balganesh makes a strong argument that this exploitative, profit-based use of the legal system disrupts the traditional “equilibrium” of copyright’s underenforcement.

Studying copyright trolling cases can also help us come to terms with the question of personal identification and attribution on the internet – what it means to connect traces of online activity to human bodies and the devices with which they interact. The thorny question of how to link persons to digital flows has been a topic of intense interest for a variety of surveillance institutions, including advertisers and intelligence agencies. Legal institutions around the world have been struggling with related questions in trying to assign responsibility for data communicated over the internet. Copyright trolling is just one example of this problem, but it’s one that is currently playing out in a number of countries on a massive scale.

What is a copyright troll?

Copyright trolls are the products of contemporary copyright regimes, internet technologies, and creative legal entrepreneurs. No one self-identifies as a troll; the label is pejorative, used to criticize certain kinds of copyright plaintiffs.

The term is derived from “patent trolls”: patent-owning entities that demand payments from companies allegedly infringing their patents. Like patent trolls, copyright trolls demand payments following alleged infringement of copyright. The difference is that a typical patent troll does not produce anything of value, simply generating income through settlements and lawsuits, whereas the law firms that typically drive copyright trolling represent copyright owners that do produce creative work for sale. Some reserve the term “troll” strictly for those law firms that acquire the ability to sue from copyright owners under certain terms (namely, passing along a percentage of any settlements received to the copyright owner), and can then exercise that copyright enforcement power autonomously.

The line between what is and is not a troll is more difficult to draw in copyright than patent law, since the law firms involved can point to a legitimate business that they are protecting and particular works being “pirated”. This has not stopped a number of authors from trying to come up with a workable way of delineating trolls from other plaintiffs, but these definitions end up encompassing only a particular slice of trolling operations (given their variability and opportunistic adaptability). There are varying degrees of autonomy that trolling law firms exercise: while some have a free hand in pursuing their legal strategies, others take direction from copyright owners. Because of this, I avoid labelling any specific companies as copyright trolls. Instead (and largely in agreement with Sag, 2014), I refer to copyright trolling as a practice – one that threatens large numbers of individuals with copyright infringement claims, with the primary goal of profiting from settlements rather than proceeding to trial on the merits of a case (see Curran, 2013, p. 172).

How copyright trolling works

In theory, copyright trolling can develop wherever a copyright owner stands to profit from initiating lawsuits against alleged infringers. The now-infamous Righthaven attempted to build its business model around suing people who were sharing news articles. Currently, Canadian government lawyers are accusing Blacklock’s Reporter of being a copyright troll, after the site filed suit against several departments and agencies for unauthorized sharing of the site’s articles. My focus here will be on the most common form of copyright trolling — suing people accused of file-sharing copyrighted works. Because the defendants in these cases are listed as “Does” until identified, and plaintiffs typically file suit against multiple defendants (sometimes hundreds or thousands) at once, these cases can be called Multi-defendant John/Jane Doe Lawsuits. They begin with the collection of IP addresses tied to alleged infringement, proceed to the identification of internet subscribers assigned those IP addresses (discovery), and conclude with claims made against these subscribers in the hope of reaching settlements or (if defendants do not respond) default judgements.

A copyright surveillance company is used to monitor file-sharing networks (principally BitTorrent), where IP addresses of those engaged in file-sharing can be recorded. Just as the activities and IP addresses of downloaders and uploaders are largely visible on BitTorrent, so are the activities of copyright surveillance companies. This is because collecting information on file-sharing cannot be achieved without some level of interaction: connections need to be established with file-sharers so that their IP addresses can be recorded. Once a copyright surveillance company has collected the IP addresses involved in sharing a particular file, it hands them over to a law firm. While there are allegations that a particular German-based copyright surveillance company has been the driving force behind many US copyright trolling cases, typically the surveillance company exits the picture once IP addresses have been collected.

The next step is to identify the persons “behind” these IP addresses, and the only way to make this link is through the cooperation or forced compliance of an ISP. Since blocks of IP addresses are assigned to particular ISPs, a law firm can determine which ISPs’ customers to pursue by checking their list of recorded IP addresses. Copyright trolls have to be selective, targeting particular ISPs on the basis of geography (jurisdiction) or other factors. ISPs vary in their levels of cooperation with copyright owners that seek to identify allegedly infringing subscribers. In some cases it has been possible to get an ISP to forward a settlement letter without disclosing the identity of the subscriber (for instance, by abusing Canada’s notice-and-notice system), but in general the troll must obtain a court order for the ISP to identify its subscribers. In the UK and Canada, a court order used in a lawsuit to compel information from a third party like an ISP is known as a Norwich order. In the US, courts can issue subpoenas for ISP records.
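The sorting step described above — determining which ISP to pursue for each recorded IP address — can be sketched with Python’s standard `ipaddress` module. The ISP names and address blocks below are hypothetical placeholders (drawn from the reserved documentation ranges); in practice this mapping comes from public WHOIS and regional internet registry records, not a hard-coded table.

```python
import ipaddress

# Hypothetical allocations for illustration only; real block assignments
# are published by regional internet registries (ARIN, RIPE, etc.).
ISP_BLOCKS = {
    "ISP-A": [ipaddress.ip_network("203.0.113.0/24")],
    "ISP-B": [ipaddress.ip_network("198.51.100.0/24")],
}

def group_by_isp(recorded_ips):
    """Group a list of recorded IP addresses by the ISP whose allocated
    block contains each address — the basis for choosing which ISPs'
    subscribers to pursue in discovery."""
    grouped = {}
    for raw in recorded_ips:
        addr = ipaddress.ip_address(raw)
        for isp, blocks in ISP_BLOCKS.items():
            if any(addr in block for block in blocks):
                grouped.setdefault(isp, []).append(raw)
                break
    return grouped

print(group_by_isp(["203.0.113.7", "198.51.100.20", "203.0.113.99"]))
```

A list like this, filtered to ISPs in a favourable jurisdiction, is what a Norwich order or subpoena then converts into subscriber names and addresses.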

It is this “discovery phase” of a lawsuit that has generated the most public information about how copyright trolling operates, since as previously mentioned, the plaintiffs in these cases generally avoid proceeding to trial. Instead, they use the legal system to identify individuals who can credibly be threatened by a large penalty if they do not settle an infringement claim. ISPs are effectively caught between the plaintiff and the alleged infringers during the discovery phase, and can behave in a number of different ways. In the US, Verizon has recently opposed a particularly burdensome subpoena from Malibu Media. In Australia, a group of ISPs have jointly opposed efforts to identify thousands of their subscribers in a precedent-setting case that continues to unfold. In Canada, Bell, Videotron and Cogeco complied with a court order to identify subscribers in 2012, but TekSavvy took a different approach in a subsequent case involving the same copyright owner — Voltage Pictures. TekSavvy claimed it could not oppose the motion to identify its subscribers (an argument disputed by Knopf), but it did go further than the Canadian incumbents in the previous case, and CIPPIC was granted intervenor status to argue against disclosure and for the privacy interests of subscribers.

Once IP addresses have been linked to subscriber names and addresses, the trolling operation can begin collecting settlements from defendants. Subscribers who ignore the copyright owner’s demands may end up subject to a default judgement, and those who protest their innocence may end up in a lengthy back-and-forth with lawyers, which in the US has included forensic examination of computers and polygraph tests.

IP addresses

In copyright trolling, the main challenge is linking IP addresses to corresponding subscriber information, which often requires a court order. But once this link is made, what does it mean? Is it evidence that the subscriber infringed copyright?

In criminal internet investigations (such as child pornography), IP addresses are only ever used as supporting evidence. IP addresses do not identify people, but they do become a crucial piece of information in tying people to digital flows and fragments. In a criminal case, the knowledge provided by this association can open the door to a further search of a property and computer hardware, ultimately leading to a conviction. In a copyright trolling lawsuit, an IP address leads to the disclosure of subscriber information, which leads to the subscriber receiving a settlement offer/demand (unless the copyright owner chooses not to send one, after discovering the subscriber’s identity). It is all well and good to argue that an IP address does not identify a person, until you are a person at the receiving end of one of these letters. At that point, you, as an identified person, have some decisions to make.

I will spend more time talking about IP addresses specifically in a subsequent post, as these digital identifiers are important in a variety of contexts besides copyright trolling. In the meantime, I’ll be paying attention to the drawn-out saga of the TekSavvy – Voltage case and how courts around the world learn from each other in dealing with copyright trolling.