Michael Chertoff, "Security and Privacy in an Age of Exploding Data"

With commentary by Professor Adam Chilton

As Secretary of the U.S. Department of Homeland Security from 2005 to 2009, Michael Chertoff led the country in blocking would-be terrorists from crossing our borders or implementing their plans if they were already in the country. He also transformed FEMA into an effective organization following Hurricane Katrina. His greatest successes have earned few headlines – because the important news is what didn’t happen.

At The Chertoff Group, Mr. Chertoff provides high-level strategic counsel to corporate and government leaders on a broad range of security issues, from risk identification and prevention to preparedness, response and recovery. "Risk management has become the CEO's concern," he says. "We help our clients develop comprehensive strategies to manage risk without building barriers that get in the way of carrying on their business."

Before heading up the Department of Homeland Security, Mr. Chertoff served as a federal judge on the U.S. Court of Appeals for the Third Circuit. Earlier, during more than a decade as a federal prosecutor, he investigated and prosecuted cases of political corruption, organized crime, corporate fraud and terrorism – including the investigation of the 9/11 terrorist attacks.

Mr. Chertoff is a magna cum laude graduate of Harvard College (1975) and Harvard Law School (1978). From 1979-1980 he served as a clerk to Supreme Court Justice William Brennan, Jr.

In addition to his role at The Chertoff Group, Mr. Chertoff is also senior of counsel at Covington & Burling LLP, and a member of the firm’s White Collar Defense and Investigations practice group.

Adam Chilton's research interests lie at the intersection of empirical legal studies and international law. His current research projects examine the ways that political considerations affect the United States' international trade and investment policy; whether enshrining rights within constitutions improves the protection of those rights; the comparative competency of the executive and judicial branches in foreign relations law; and how experimental methods can be used to study whether domestic politics influence compliance with international law.

Adam received a BA and MA in Political Science from Yale University. After college, Adam worked as a management consultant for BCG. He then went to Harvard University, where he earned a JD as well as a PhD in Political Science. Before joining the faculty, Adam taught at the Law School as a Bigelow Fellow and Lecturer in Law.

Presented by the Federalist Society on March 28, 2016.

Transcript

Announcer:          This audio file is a production of the University of Chicago Law School. Visit us on the web at www.law.uchicago.edu.

Host:               The Federalist Society is very excited for this talk today, "Security and Privacy in the Age of Exploding Data." We're very lucky to have former Secretary of Homeland Security, Michael Chertoff, here to expound on that topic. Secretary Chertoff graduated from Harvard undergrad and Harvard Law School, going on to clerk for Judge, and I'm gonna pronounce this wrong, Murray Gurfein of the Second Circuit, and then went on to clerk for Justice Brennan of the Supreme Court. He was a Third Circuit judge from 2003 to 2005 before being asked to be Secretary of Homeland Security from 2005 to 2009. After he left, he started the Chertoff Group, which provides counsel to both private companies and governmental entities on risk assessment. And he's also senior counsel at Covington & Burling in Washington DC. To provide a few comments on his remarks, we have our very own Professor Chilton. Professor Chilton received his undergraduate degree from Yale, as well as a master's in political science. He then went on to get his PhD and law degree from Harvard, and he now works on empirical legal studies and international law. So with that, I'm going to hand it over to Secretary Chertoff.

Chertoff:           It's good to be here. I see you all get a lunch, but you have to listen to me, so that fits the adage that there is no such thing as a free lunch. I'm going to be talking about exploding data, but not literally exploding data, just exploding in the sense of its volume and ubiquity, though we will probably come to explosions later. There's a saying in Silicon Valley about disruptive technology. Actually, when I was younger and in grade school, being disruptive was considered a bad thing. But now, disruption is viewed as a compliment. You're disrupting or disordering to create great new technologies, and part of what I'm going to talk about today is that I think we are entering an age of legal disruption, when the set of legal rules about the way we conduct ourselves in a whole variety of ways is getting disrupted by the explosion of technology. Sometimes people talk about this as the internet, but as I'll explain, I think it goes further than the internet. It involves a whole series of things that have arisen in the last 10 years that transform the way we operate in our business lives and our personal lives and among each other as social animals.

Chertoff:           Since I am in a law school, I will indulge in a legal discussion, and I actually am in the process of writing a book about this topic. So this is kind of road testing it with all of you, and I hope when we have question time, you guys can, you know, poke holes in what I'm saying or raise issues I haven't thought about. If you go back through the history of privacy, until relatively recently, privacy was about physical space. You can go back a couple hundred years to the famous English case of Entick v. Carrington, about whether you could go in and search a person's home and look into their papers. And it was about the old English saying that, and I know it's anachronistic in terms of being sexist, every man's home is his castle.

Chertoff:           And that is really the way privacy was thought about, as a matter of physical space. But technology began to challenge the underlying assumptions behind that model. First you had the invention of photography, which enables you to record someone's presence in a location and preserve it. Then you had the telephone, and with the telephone the ability to wiretap and overhear conversations. And when those issues first came before the courts, they were analyzed in terms of the traditional physical space. The issue of whether I could take your photograph and put it on an advertisement was analyzed in terms of, well, is it a copyright violation, or am I intruding into your personal home when I take the picture? The issue of wiretapping was discussed in terms of, am I physically penetrating space which is your private physical space? But as the years went by in the early part of the 20th century, the courts kind of stood back and said, well, the technology really disrupts the traditional legal framework and we have to create a new one.

Chertoff:           So for example, as it relates to photography, after a seminal article by future Justice Brandeis and his partner Warren, there was a case called Roberson v. Rochester Folding Box Company. You probably learned this case in Contracts or Torts, and there the Court said, you know, we do need to look at the question of whether you have a proprietary right in your image, and whether somebody can simply take your photograph, print it on a box of soap suds, and imply that you are endorsing the product. Likewise, there was a series of developments in the Fourth Amendment that took the discussion about wiretapping out of the realm of having physically penetrated into space, into what in the Katz case in 1967 was described as the reasonable expectation of privacy, the Court there saying it's not about protecting physical space, it's about protecting the value of privacy.

Chertoff:           That's what the Constitution is concerned about and that's what we're going to look to. And there were some intellectual flaws with that formulation, but it's an illustration of how the technology disrupted the settled legal order. And I'm going to suggest throughout the course of my talk here that you're going to see similar disruptions in the way we currently look at the Fourth Amendment and similar types of privacy protection laws because of dramatic changes in technology across the board. So what do I mean by changes in technology? It's pretty common to say that the Internet transforms everything. But the Internet itself is actually only a part of it. As many of you may know, the Internet began as DARPANET. It was supposed to be a mechanism to allow people in the Defense Department-related academic community to communicate with each other in the event there was a major degradation of our normal communication systems, by relying on the fact that you could break a message down into digital packets, send it over a variety of different transmission media, and then reassemble it at the end point.

Chertoff:           And what that required was a transfer protocol, which was ultimately put together by some folks including Bob Kahn. That was simply a way of taking the data and moving it efficiently over long distances and to a lot of different people. And it was meant originally to be simply a Defense Department and academic program. In fact, it was against the rules originally to use it for commercial purposes, but of course it spun way out of the original conception of the people who put it together, and now it has become a major element of the way we transact business globally. But if it was only about the Internet, this would not be a revolution or an explosion of data, because the Internet gave rise to other developments in technology which are, in my mind, at least as transformative if not more transformative. One was storage of data.

Chertoff:           If you couldn't store data indefinitely in large volume, the Internet would be a transitory phenomenon. But Moore's Law, which says that the amount of data you can hold on a chip increases dramatically every couple of years, led to a point at which we can literally hold whole libraries of data on a single device, in a way that was unimaginable 20 years ago. And so the Internet spurred the development of the capability to store data, which revolutionized the way in which information could be held, disseminated, and preserved in a way that would make it accessible for people years from now, well beyond the lives of the people who generated the data. But even that wouldn't be enough to create an explosion of data if you didn't get the processing power, because if you have terabytes or petabytes of data, no human being has the time to sit and go through it or correlate it. So what was required was the ability to process the data and to analyze it, and to do it at machine speed, using algorithms and computer processing that now enable you to sort through the information and draw analytical conclusions from it at enormous speed. And that's what created the incentive in turn to collect more data, which meant bringing more over the Internet, and to store more data, because you could now make use of the data for a whole host of things.

Chertoff:           And we've seen that in everything from the ability to use it for security issues to, even more dramatically, commercial purposes. I mean, it is now possible to relentlessly market to you based on the accumulation of data about what you do and where you are and where you've been and what you read. That would have been unimaginable even 10 years ago. The old days, when the principal way people spent money on marketing was to buy billboards and television ads in the hopes that interested people would pay attention, have now been superseded by an age in which, if you walk into a store, you're going to start getting messages by SMS or in your email telling you about a deal for a particular product that you could be interested in, which they're going to know about because you have searched for it previously using another device, but which is correlated with your device.

Chertoff:           So that is an element of microtargeting, again, which required all of these things. Yet a fourth revolution is now developing in the area of data, which I think is going to make it even more significant. And that is the use of data to actually operate and control systems. So it's not just about what you read or what you see or what you're sensing about where somebody is or what might be available to you, but it is literally running the lights. You know, we talk about the smart grid. The smart grid is about being able to connect up all of your energy-using devices in a way that can be monitored by a company, ideally to promote energy efficiency. But if it can be monitored, that means you can turn it on and turn it off. And those of you who follow, for example, some of the presentations that come out of Las Vegas when they do various kinds of conferences like Black Hat will know that they demonstrated that you can remotely take control of an automobile and affect its steering and its braking and its operation.

Chertoff:           Or you can hack into somebody's insulin pump. If it's wireless or connected to the outside, you can affect that. There was a story just in today's Wall Street Journal about Iranians who have now been charged with remotely penetrating a small dam in upstate New York and actually trying to manipulate the industrial control system. And by the way, they were evidently able to discover the vulnerability by going on Google and using a widely available program that would identify the fact that this control system was hooked up to a website and was unprotected. So that is the fourth dimension of the way data is now working in the real world, one that may actually yield explosions. And finally, it would be imprudent of me if I did not remind you, though most of you probably know this, that what many people think of as the web, or the Internet, is only a very small fraction of what it is.

Chertoff:           If you ask the average person, they think that the Internet is what you can find on Google. That's about three percent of the data out there. The deep web is all the data that's not easily searchable. Some of it is on private networks and isn't particularly nefarious, but there's a species of what is on the deep web called the dark web, and it's essentially a marketplace for all kinds of activity: child pornography, weapons, drugs, even the ability to hire people to commit murders. All of this is accessible over the internet. You're not going to find it if you look up particular sites on Google or Yahoo, and you may need a password to get to the site so you can be vouched for, but it is something you could connect to from the other side of the world if you know the right pathway and the right IP address.

Chertoff:           So that's the landscape in which data really has exploded, and it's not just the transmission of data, it's the collection of data, the storage of data, and the use of data which in my mind have really created a massive disruption of the landscape against which our legal principles operate. So what's the first element of this we need to focus on? Well, the first is: what is the threat that this poses? I mean, there are many good things the internet does, but there are many threats as well. Crime, of course, is what comes to mind first. And just like Willie Sutton said about bank robbery, "I rob banks because that's where the money is," not surprisingly, the money's on the Internet, and that's why massive criminal attacks take place literally on a daily basis against the internet, and a lot of the stuff that you've seen or perhaps experienced: identity theft, efforts to download or transfer money from accounts. A couple of years ago, in a single day,

Chertoff:           over $10,000,000 was withdrawn from ATMs around the world because an organized criminal group was able to hack into a couple of companies that manage debit cards. They were able to remove the software that creates withdrawal limits on the cards, and then they were able to generate additional cards that keyed into that flaw, and in a single day they sent people around the world to bank ATMs where the only limit on the amount of money they could take out was the total amount of cash in the machine. It was over $10,000,000 in a single day. You know, whenever I give this talk I have to keep current, because there's always something that tops what I just talked about in the last 24 hours. So you probably read in the last week about the over $80,000,000 that was stolen from the Bank of Bangladesh account at the Federal Reserve Bank of New York using the Internet.

Chertoff:           And the only reason it wasn't a billion was because the crooks made a mistake when they did some of the typing for some of the destinations, and somebody noticed that there was a typo, and that caused an alert to go up. But over $80 million went out the door and is somewhere in the world now because of that theft. Now that is criminality on a massive scale. We've seen ideological attacks, efforts to expose and batter people who are politically unpopular or who have someone who dislikes them because of their political agenda. We've seen denial of service attacks, which are designed to interfere with the ability of ordinary people to, let's say, get to their bank website, and there were a series of these over recent years against some of the most sophisticated banks in the United States. They were dealt with, but they created a lot of stress on the ability of the system to continue to function in a way that was convenient for customers.

Chertoff:           But more alarmingly perhaps, and I think what we're going to see more of in the future, are destructive attacks, attacks that, again taking advantage of that element of the data revolution with respect to industrial control systems, are designed to actually degrade the ability to operate an industrial control system. We saw that in 2012 when Iranian sympathizers hacked into Saudi Aramco and destroyed 30,000 desktops and other machines, including the data on them, which was destroyed forever, thereby impacting the Saudi oil company. In the last couple of months, right before Christmas of last year, in Ukraine, a couple of power stations were put out of action, and power was lost in some portions of eastern Ukraine, by Russian sympathizers who hacked into those substations and actually were able to damage and destroy the control systems that operated the substations. And as I mentioned a few minutes ago, it appears that an Iranian group, which has now been charged,

Chertoff:           tried to get into a dam in New York and actually operate it in a way that would destroy its ability to function properly. Now, it wasn't a very big dam and it wouldn't have been catastrophic, but it's not hard to imagine what would happen if it was a big dam or another major element of infrastructure. And finally, there's one other dimension worth reporting in terms of new threats, or newly manifested threats, in a somewhat different order of magnitude, and that is the ability of the Internet to become a tool for recruiting and for operationally planning and executing criminal activity and terrorist activity. And of course the example of that is what we're living through now with respect to what's going on in Belgium and France. Terrorism has been with us for hundreds of years. The ability to scale it, the ability to operate over distances, the ability to recruit people in place so they don't have to come face to face with you.

Chertoff:           That has all been enabled by the Internet, and that creates another dimension of threats in the modern world that this explosion of data and the Internet have now created. And where's this going to go next? Well, I think we're going to see an increase in the ability of people to recruit and train over the Internet, and training includes, by the way, educating people on how to build bombs. There are sites you can go to that show you how to build a bomb in your mother's kitchen using ordinary household ingredients to build a destructive device. And as we multiply the Internet of Things, that's to say the smart phones, the smart refrigerators, the smart watches, those things are again going to become entry points, not only for attacking the device itself, but for attacking anything connected to that device. And that means the attack surface across which crime or destructive attacks can occur is going to increase exponentially, and that's going to create even greater risk.

Chertoff:           So what I'd like to do in the balance of my talk, now that I've created this really dark picture of where we're headed, is talk about how this is going to affect three sets of relationships and three sets of institutions. One is the government. How is the government going to operate, and what constraints is the government going to be under, in this new world of the data explosion? There are both new obligations, because the government has to protect us against new threats, but also new dangers, because the government itself can become more threatening. Second, and maybe a little less obvious, what's this going to mean for the role of the corporations that are operating the data sets and the data transmission platforms through which all of this occurs? Because they are now going to start to pose issues with respect to protecting the data, as well as potentially creating issues for the citizens who willingly or even unwillingly are participating in accumulating the data these companies have.

Chertoff:           And finally, how are the government and these corporations going to relate to each other? Because many of the controversies that we see now relate to the fact that, in large respect, the greatest intelligence collection platforms in the world today are not government platforms. They are private platforms. Not surprisingly, wholly apart from the question of whether those companies need to be controlled and restrained in what they do with the data, there is the obvious temptation governments will have to say, wait a second, you've got more data than I do, I'm going to deputize you, and you're going to collect the data and furnish it to me, so you're going to expand my reach as a government. So these are the kinds of issues I think we're going to be facing, among others, with the explosion of data. So first, what is the role of government?

Chertoff:           And I come back to the original notion that there was a revolution when we came to the invention of the telephone and wiretapping and photography. Well, given the amount of data that is generated now, I think we're on the verge of another revolution in the way the law looks at the government's ability to obtain data, and I'll give you two examples from two recent cases. The rule has traditionally been that if you're in a public space, you don't have a privacy right. If you do something, you can be observed, you can be photographed. The government doesn't need a warrant. If they want to assemble that together and make a case against you, they're perfectly free to do that without getting a judge's permission, as long as they don't actually wiretap or literally intrude on your physical space. And that's how we made cases for years. But in the Jones case, they took a tracking device and they put it on the bumper of a car, and they were able to follow that car.

Chertoff:           It was a drug dealer's car, obviously. For four weeks they followed it using the locational data generated by that device. And the question was whether at that point you had a search that required a warrant. Now the Court in Jones said you did, but the late Justice Scalia rested it on the physical notion that you had attached something onto the car. But if you read the opinions, what you'll see is most of the Justices saying, you know, that's not really what's at issue here, because you could simply capture the locational data off your OnStar or off your GPS or off your Sirius radio and do the same thing. Now, do you need a warrant to do that? And I think if you read the opinions, most of the Justices seem to say that at some point the amount of locational data that is generated by devices is so comprehensive, and it reveals so much, that you can't simply say, well, you're in public.

Chertoff:           You've got to start to view this as potentially implicating a more serious right that needs to get a level of protection, let's say by warrant. In fact, I think it was Justice Sotomayor who said, you know, even taking public data and subjecting it to an algorithm that analyzes it using big data analytics may take us to the point where the data, again, even if it is public, requires an additional level of protection, because it could so impinge on what somebody does every day of their daily life. Maybe we need to have a warrant requirement. So I think this issue of the scale of data that is collected, the ability to keep it forever, and then, as I said, the ability to analyze it, may transform the old rule that what's public is public and what's private is private into one where what's public maybe needs to be treated, if

Chertoff:           it gets to a certain scale, with some of the protections we normally associate with privacy. Another example of that is the Riley case, where an arrest leads to a search: they want to get the cell phone and they want to open it up and see everything that's inside of it without getting a warrant. And the rule traditionally is that incident to arrest I can search your body, I can search anything you have with you, the rationale being that you want to preserve the evidence and you want to prevent somebody from reaching for a weapon. The question is, does that change when it's not only what's on the device, but the device is merely a key to getting into a data storage capability, perhaps somewhere else in the world, which has all of your data? Is that truly a search incident to arrest? And the Court said, no, it's not.

Chertoff:           It goes beyond the rationale. But again, if you get outside the narrow frame of the case and you look more broadly, what does that mean for a whole host of situations where the government gets hold of a phone and wants to search it? Is that going to be treated as if it's just a physical device, like a suitcase or a briefcase? Or are there special rules? For example, when you cross a border, the general rule is you can be searched, and anything you bring into the country can be searched, and that's been true of laptops as well. Anything on the laptop can be searched because you're bringing it into the country. But what if what you have on the laptop or the phone is really the key to data that is stored in the cloud, and the cloud may be in Cupertino or it may be in Seattle or maybe in Phoenix or someplace else?

Chertoff:           At that point, is searching the data that's accessible on your phone or your laptop searching something you're bringing into the country, or is it searching something that's already in the country? Is it akin to searching your briefcase, or is it like finding a house key on your person and then going into your house and searching your house because you crossed the border? I suggest, again, this is going to be a paradigm shift in the law. So those are some of the issues I think the government is going to have to deal with, at the same time that the ability to collect metadata and huge volumes of data, and analyze them with respect to potential terrorists, becomes really the critical tool in stopping terrorists. Because the terrorists that you don't know about are the ones you have to worry about, and that means you're searching for a needle in a haystack, and you can't search for the needle if you don't have the haystack.

Chertoff:           And that's why there's been a big debate about metadata, which resulted in Congress passing a law last year that at least requires metadata to be held by the Internet service providers and not by the government, even before any search. So that's one area where I think we're going to see a revolution. What about companies themselves? The companies like to say, we're only in the business of providing a lot of free stuff, and you know, we're here to do good. But the reality is there's a lot of data that they have. There's an old saying that if what you're getting is free, you're the product. And you are: your location, your desires, your interests, your transactions, your health are a great way to define markets. What is the limit on the ability to use that data? Right now, we're operating on what we call a consent-based model, where you read through 900 pages of fine print before you can log onto the site to buy a pair of pants, in which you've waived all your rights in perpetuity, and your immortal soul, as you click yes, and then you go forward. And I'm guilty of this too. And the question is whether that is really an appropriate way to deal with this anymore, or whether we're at the point now where the amount of data that's collected and the uses to which it is put are so massive that really most people can't understand what they're agreeing to. And one way to look at it is this:

Chertoff:           You can separate three cases where data is used by private companies, and argue that there should be different rules for each case. One is when you use data literally to improve the actual application or website that is being interacted with. So if you're on Google Maps and they need data and are using it only for purposes of locating where you are so they can give you directions, by definition you've kind of implicitly consented to that, and you shouldn't be required to do anything more, and they shouldn't have to make you do anything. But if they use it to sell to a third party to market things, then perhaps there needs to be a more explicit message you get, saying, we're going to use this to have retailers send you ads; do you want us to do that? And then what about the following situation, where people in this room, you know, record this or take photographs, and they may individually think that what they have is their individual photograph of me or their recording of me.

Chertoff:           But if you're all using one of the major cloud service providers, all that data is uploaded to the cloud. So now the cloud provider, who has the ability in almost every case to look across all of your accounts and look at the data in the accounts, could decide that they want to assemble a picture of what I do and like by going around and taking any photograph or any recording of anything I've said anywhere that anybody's recorded that's in the cloud, aggregating it together, and creating a profile of me, and I've never clicked on "I consent to this." So at that point, should the law be: you can't do that, you've got to give a notification and you've got to say, we've collected a lot of data about you; do you consent to our using it or not? And if the answer is no, they have to get rid of it.

Chertoff:           So that's going to be the next set of issues that, again, I think the law is going to deal with. And at the same time, you've got to look at the other side of the coin. Having collected this data, companies are going to be more and more obliged to protect the data, and that means it's not just a question of them agreeing to hold it private; they have to have the capability to do it, which means they have to be able to build and operate their own cybersecurity. And more and more, that's going to mean dealing with the government, because the government has much of the insight about where the threats are coming from, and the government also would like to share what it has and see what the companies have in terms of cyber threats, so we could all raise the level of security. And that leads

Chertoff:           me to the third and final part of what I want to talk about, which is: how does the relationship between government and corporations change in this new world of exploding data? And as I indicated earlier, um, you know, we see this already in the case of the government looking at the private sector and saying: you have a lot of data; we'd like to get that data. And it comes in a variety of different ways. For example, social media. There is, as I said earlier, a lot of recruitment that goes on using social media, and people who, for whatever reason, will often advertise that they're on the verge of carrying out a terrorist attack. What is the obligation of companies in this business to shut down sites that recruit or incite violence, or to give a tip-off that someone's walking around taking videos of themselves making bombs in their kitchen and uploading it to their social media site? 

Chertoff:           Well, I think the companies are now talking to the government about what that is. It's always been the policy of most companies, for example, not to allow child pornography and things of that sort to be uploaded and transmitted. How does that apply in the area of terrorism? At the same time, there are some countries in the world that will go further and say: anytime somebody criticizes the president of the country, we'd like to know that, and we would like to have their IP address, and then we'll just shut it down. And of course we, under the First Amendment, would consider that out of bounds. So that's one set of issues we're going to be dealing with. A second set of issues is when data is held in the cloud. It's not really in the cloud, obviously; it's a bunch of servers somewhere in the world, often in a place that has a pretty temperate climate, low energy costs, and is physically located near enough to the customer base that you can actually eliminate the latency in the way you transmit the data. 

Chertoff:           What do you do when the data's in one place, and the requesting legal authority in another place serves a warrant or a subpoena and says, we want the data, and the company says: well, wait a second, you want the data in the US, but it's in Switzerland; I can't give you that data, you have to go to the Swiss authorities. Who wins that fight? Right now there's litigation going on in New York over that, involving Microsoft, where the data is being held in Ireland. And it's easy for us to say, as Americans: well, look, obviously if you're doing business in the US, the company should have to comply with the US court. But that really begs the question: is the information the company's information, like billing records, or is it information that really belongs to the person? In which case it's like a safe deposit box, where the bank needs to get the person's permission to get into the safe deposit box. 

Chertoff:           And what if the situation is reversed? What if North Korea says, we'd like to get information about certain people who were writing bad things in the US, and they try to target a company that's doing business in North Korea (not that there are many of those)? What would we say to that? So I think this is again a second set of issues: whose law controls in a world in which data knows no boundaries, but we still live in a physical world? And finally, I'll conclude by talking about another manifestation, which is the issue of encryption. You all read about the Apple case, which is now on hold, but it begs a larger question: should the government have the ability to require companies to essentially create back doors, or limit their ability to develop encryption techniques, because otherwise the government may not be able to get encrypted data? And that's, to my mind, not even so much a question of trading off security for privacy as trading off one kind of security for another kind of security, because when you limit encryption, and particularly innovation in encryption, you are creating vulnerabilities that are accessible to people like the ones who robbed Bangladesh Bank. 

Chertoff:           So in many ways, if we want individuals and companies to secure data, we have to give them the tools to do it, and it's very difficult, and probably impossible, to create a tool that is only going to be available to the lawfully constituted government and is not going to be found by somebody who's a criminal or a terrorist or, you know, some other kind of adversary or bad guy. So again, the very nature and configuration of the way in which we secure data is creating a tension between two types of security, and these issues, which I think are going to be embedded in legal developments over the next 10 years, are going to be your problem when you get out of law school. So it's actually a really exciting time, because you're living in the middle of a disruptive legal revolution, and it's one in which knowledge of technology and knowledge of the law are going to be tools you use as we try to work our way forward. So with that, I know there are some comments, and I'm happy to take questions.

Chilton:            I want us to leave as much time as possible for questions, and I think that the summary of both how this issue has developed and where we currently stand was fantastic, and I can't think of much to add to it. But I do want to say something about each of the relationships that was highlighted: that is, the relationship with the government, with corporations, and then between the two. So, on the government, I have two thoughts on how we should think about this. The first is that I think this is really a unique policy area, in that it defies analysis from people outside of the government. So what do I mean by this? Normally, when a large policy initiative is passed, the Affordable Care Act, the immigration deferred-action executive actions, or potentially a large trade bill like the Trans-Pacific Partnership, academics, think tanks, et cetera, are able to produce estimates of the effects of that policy, the costs and benefits,

Chilton:            so we can then have an informed public debate about whether or not this is something that we're willing to pay the cost for, and exactly where those costs are going to fall. Now, in the privacy area, this is not the way things typically play out, for a few reasons. First, the policy interventions that governments take are typically confidential. As a result, even if we know that the government might have some pieces of information, we don't know exactly what, or how they get it. The second problem is that the application is usually incredibly broad. What do I mean by this? Uh, it's the case that we all use the internet, we all use Gmail and Google, we all have cell phones, smartphones, et cetera. The result is that whenever you try to study a policy intervention, you need a control group, and we don't have one anymore, because the internet and data collection are so ubiquitous; this also makes it nearly impossible to study. 

Chilton:            Now, the third problem is that the events that we're concerned about are exceedingly rare. These are black swan events, and they come in two kinds. One is big terrorist attacks or other kinds of crime, uh, that we're trying to deter. Now, it is the case that these do happen, whether it's stealing money in Bangladesh or the attacks in Belgium, but it's difficult to know, when the government tells us that they deterred X number of terrorist attacks, what the counterfactual is, right? These are rare events, unlike, you know, providing people with healthcare, where we can measure millions of people having healthcare and the cost to the economy. But also, the data breaches that we worry about are exceedingly rare, right? Part of the reason that people are concerned about the government having access to their email account is that at some point a major data breach may occur, like what happened with Ashley Madison, or what happened through the 4chan and Reddit leaks 

Chilton:            of celebrity nude photos, whatever it is. These don't happen that often, but that doesn't mean that they're not concerns that we actually have. So the consequence of the fact that these are, um, classified policy interventions, broadly applied, trying to prevent rare events, is that this is really just the most difficult kind of question to try to form opinions on. As a result, the policy debate that we have is frequently impoverished, where people are taking stands based on prior ideological commitments, or their love of Apple, whatever it is; that's what's informing what they're saying, while the facts, I think, are pretty few and far between, for those of us in the academic community at least. The other thing that I'd say about the government is that we need to think anew about how we can scrutinize the government's actions. 

Chilton:            So, so far the way that we've scrutinized the surveillance state that's emerged, uh, since 9/11 has been really through journalists analyzing Wikileaks documents. Academics have stayed away from it. Most think tanks have stayed away from it, but that's the kind of scrutiny that's taken place. What the traditional methods of scrutiny, either Congress or issuing warrants, it's not clear whether or not they can be effective when, what part of what the government is trying to do is have confidential security programs to try to monitor communications. And so it does seem to be the case that this is necessary to prevent terrorism, recruitment, planning, etc. But some form of scrutiny needs to exist and it's not clear what model works right now. And I don't think that we have a particularly good one. So it is true that there's new technology that's changing the role of the government. But right now we don't have a particularly rich debate on what the government's doing and how we should think about those trade offs. 

Chilton:            All right, so that's my thought on the first relationship. Um, the second thing, about corporations: I completely agree with this sentence, uh, "if you're not paying, you're the product," and I think, uh, that it highlights something incredibly important. That is, a disconnect has emerged in the way that we think about technology, between security and advertising. So what do I mean by this? The most notable example is the recent Apple/FBI dispute over whether or not they should unlock the iPhone from the San Bernardino attack. Uh, in this case, privacy advocates have flocked to say Apple needs to encrypt the data; what people do on their own phones is important, and we don't want any risk that this information can get out. At the same time, in Apple's user agreements, they make it perfectly clear that they have the right to the data of everything that you do on your iPhone. 

Chilton:            This has been happening for years: the data that you transmit, when you download an app, use an app, etc., is being sold to advertisers. So on the one hand we have this situation where we're all excited about new Google products, new Apple products, Twitter, Facebook, whatever it is, where we're providing data that is being sold to advertisers, but then we rally around, uh, these companies when they try to defend that data against the government. So I think there's a disconnect in how we think about what it is that we're getting. Now, in the case of Gmail, it's clear what you get in exchange for your use, right? You get free, essentially unlimited access to email, and in exchange, on the side of your email, it'll try to sell you a bar study course or something like this for the entire time you're in law school, uh, or for years after, as the case may be. But maybe the problem with the government's use of data is that, as I said previously, we don't know what we're buying by giving the government data, and it makes us unsure, right? 

Chilton:            We don't know what that trade off is, but I think that this is an important, an important disconnect. Now finally, how will the government and corporatIons relate to each other and how are we going to think about, uh, what obligations can the government impose? So the first thing I would say here is that I think the current track of regulation that we've had to regulate data security and data use has largely failed. So that path is mandated disclosure, right? We pass laws or regulations saying companies are obligated to say what it is that they're doing with our information. Alright? So this is how you end up with an Apple iTunes user agreements that is 60 pages long for any song that you download and just impossibly detailed and complex. The result is that none of us can read any of these, and there's one estimate that if you were to read all of the user payments that you come across in a single day, it will take 100 days. 

Chilton:            And so, uh, it's just not possible, but this mandated disclosure regime can actually be used in this way. But beyond that, information is impossibly vague and there's often an indirect correlation between the quality of the disclosure and the security of the company. So if you were to find a company that's actually pretty good at data security, you'd see 100 page long disclosure and if is bad at data security, they'll have a two page long disclosure. It might seem safer, but it's not. All of this is just to say this is what we've been doing so far, is trying to rely on consumers making informed decisions. But we've created an environment where it's literally impossible to make informed choices. As a result, this line of regulation probably isn't going to work going forward and we're going to perhaps need more direct regulation as well. 

Chilton:            Uh, but the risk of course is that we kill what's great about the internet, the innovation, the free technology, the ability it has to improve our lives. So our policy now isn't working, but I think that a lot of us are afraid of afraid of the, the other side of the coin. Now, the final point on this I think is that all of these concerns really do highlight the coordination problem that we have in this area, which is to say that different governments can have different policy priorities, but since the government is transnational, excuse me, the internet is transnational. It's regulation of one state impacts the regulation of another. So this is playing out most clearly in Europe where France and a few other countries have increasingly been more open to a surveillance without warrants, extending the capacity of their police, etc. especially since the last November 

Chilton:            attacks in Paris. At the same time, Germany stood a lot stronger on trying to avoid this sort of surveillance, but yet whatever policies are made that affect Google or these sites that we all use impact everyone. So France will do things like extending its rights to forgotten, right? To be forgotten to every country in the world and Google has to comply. Now I think uh, you know, uh, as former Secretary said, we're all sympathetic to that and we think of our own government or the government acting in good faith, but the idea of any one government being able to set the policies that regulate the entire internet is potentially problematic, but yet we have no framework right now for how to regulate the internet transnationally in a way that is comprehensive, compelling, effective, etc. And so, uh, what format's going to take though I think is unclear. It's going to have to be figured out how to handle the coordination. Anyway, I'm excited for the Q&A. 

Chertoff:           Two quick little follow-ups, and then we'll go right to questions. One is on the issue of encryption. I think one line is pretty clear, and it's this: if the company that is the platform for holding the data has access to the data, they do not have the right to refuse the government that access. The price of being encrypted with respect to the government is that you're encrypted with respect to the company too; you can't have it both ways. And some companies are, are pretty upfront with the fact that they're going to look at your data, and they're not making the argument, uh, that, you know, we're going to deny this to the US government. It's where a company says, our business model is not to look at your data, we want to host it encrypted and only you will have the key, that this issue comes up. Also, just in terms of the last point that was made about governance, um, you're going to see some of this coming out. 

Chertoff:           The US government has indicated that it's going to, ICANN, which is the International Corporation to Assign Names and Numbers, which it really sets up the schema for how we route data over the internet with IP addresses. That's been operated under a contract with the US, but that function, the US government has indicated it is prepared to surrender, provided that the international community can come up with a governance structure that will deal with this. Some countries want it to be the UN, but the US has really been strongly against that. We prefer a much more civic society oriented model. The concern with the UN is that you went as you've rapidly politicize the issue, and this is not just a technical question because without the director that tells you how to get to a site, you've essentially erased the site. So if someone can actually control the way in which names and numbers are distributed so that they can use that capability for political purposes, they've really achieved the ability to censor the internet. 

Chertoff:           So this may sound like a dry technical issue, but you ought to watch this because how this plays out will have an enormous impact about the way the Internet is in the future. With that, if you to tell me who you are and raise your hand, I'll call on you. Yes. 

Question One:       Thank you. Joe, 2L. You briefly mentioned today's issues with social media and how the government is sort of prying into that sphere. This might seem like a counter-law-school question, but what do you think the answer is? In the sense, where do you think that line should be drawn, at least at the level at which the government is looking at your social media and cooperating with these corporations, or how much these corporations should give in terms of these profiles, like Facebook? 

Chertoff:           My understanding is that most of the companies actually have terms of use. When you sign up, you can't use them for certain things, and so when you violate the terms of use, and you've got to be clear about this, you can be removed or it can be reported. That's been the traditional rule that works with child pornography, and I think frankly it's probably the rule that ought to work with terrorism. The tricky thing is that you want to make sure you're narrowly defining what it is that you're restricting. What you wouldn't want is for everybody who disagrees with the existing government to be considered a terrorist and therefore taken off, and even heated expression might be something that you don't want to overreact to. So I think this is more a case of where you draw the line than whether there should be a line. Yes? 

Question Two:       In terms of the case with geolocation, I always have had a problem with the rationale of what exactly makes that an invasion of privacy, because, as you said, if you're out in public, anyone can observe you and that's fine. And if I were like Sherlock Holmes and I could tell everything I needed to know about you from seeing you in public, or if I had a very set routine and you'd see the same thing, that wouldn't be an invasion of privacy. So what exactly about the technology, other than that it makes surveillance more accurate for more people? I don't see why it's a problem.

Chertoff:           Sure, that's a great question, because it goes right to the heart of the paradigm shift issue. I think what the Justices are concerned about is, um, in the old days, yes, you could surveil somebody, but there was a kind of, what I call, friction in real life: there's a practical limit to the amount of surveillance you can do of any one person. Even if you took the entire FBI and said, spend a year following one person, there's a limit to what you can do. And then, of course, there's a limit to how long data can be stored and how easily you can access the data. So in a sense there's a natural barrier to overdoing it. And the question is, when the technology eliminates that friction and makes it possible literally to watch every single person in public, every minute of the day, and store it in perpetuity, is it just a matter of degree, or have we actually shifted to a point where it's a, a change in kind? 

Chertoff:           And now under the current paradigm, you're right, it's your public. Anyone could watch you do it. I think what the Court's asking is there's the rescale of this changing so that we would wind up in being so constantly under the, under the microscope, given modern technology, that we should create some higher threshold. You know, you can't just randomly do it. You've got to get a warrant if you're gonna go beyond a certain amount of time or certain expenses. And like with any other paradigm shift, you can argue it both ways, but that's exactly what the issue is for that case. Yes? 

Question Three:     So thanks again for coming. As I told you before, I'm very flattered to have you here. My question is a follow-up on the public/private partnership for cybersecurity. I'm thinking back to when Sony Pictures was hacked by North Korea, and there were a lot of, like, recriminations at the time: did Sony Pictures mess up, or did the government mess up? How could this have been prevented? But I'm having trouble envisioning structurally what this partnership might look like. Would it be, you know, dare I say, like an Obamacare-type structure, where there's a government exchange and it incentivizes private companies to opt into the exchange? Like, what would it look like, basically? 

Chertoff:           Well, there are different pieces to it. I think where most people would agree is there should be an exchange of information. Like, if you see a particular type of malware and you're attacked and you detect it immediately, you are able to share it with the government and with other companies so they can stop it before it attacks again. And vice versa: if the government sees something coming, it will be able to share that with the private sector and warn you. It's a little bit like getting vaccinated; you know, if you can identify the strain of the flu virus, you want to vaccinate everybody before they get the flu. But should the government actually be involved in defending the internet? There is a view that with critical infrastructure, like dams or something of that sort, the government should actually be able to stop an attack if it sees it. The challenge

Chertoff:           there is twofold. One is typically the government's ability to repel an attack, which is what we consider the Department of Defense's responsibility if you'd be, presumes that foreign actors have been coming in from overseas. But with the internet, you can't tell where an attack is coming so it required to think domestically. Also as a practical matter, you can't really defend a network if you're not on the network. So the government have to actually be into the networks of the major critical infrastructure and the answer may be with things like air traffic control and dams that the government could do that or would be on that. A lot of people are uncomfortable with that. So I think that that issue is likely to be one that's a harder one to solve for the information sharing generation. Yeah?

Question Four:      Taking on this idea of cybersecurity, you also mentioned in your opening remarks that new technology is changing a lot of first principles, and so, um, some advocates are pushing this idea of hack-backs, where private entities have a right to go hack their attackers in return. What's your take on that, and does the change in first principles that you mentioned give us the ability to do that? 

Chertoff:           So this is what's called "active defense," and people read different things into it. The most dramatic suggestion is that if someone steals your stuff, you should be able to go into their server and destroy it or steal it back. But if someone burglarized your house, could you go to their house and steal your stuff back? No. And because rarely do people bring the stuff back directly to their own server, they jump around, so what happens when you interfere with a server and take it down, and then a hospital goes dark? Are you going to pay the cost of that? As a shorthand way of saying it: I wouldn't try this at home. I think it's a bad idea for nongovernment actors to try to take the law into their own hands. Now, you get into some gray areas in the following case: if somebody wants to steal something of yours, uh, can you embed something in what they steal so that it blows up when they take it back? 

Chertoff:           That seems a little more appealing. Or it could have the same effect of hurting an innocent person. What about this, as I'm moving along, getting closer cases, what about if someone wants to steal your intellectual property and you put something in the intellectual property that's simply bad information so you're not going to actually, excuse me, destroy the server, but when they build the airplane, it is not going to work. That's a little more tempting. And then probably the easiest cases, you have what they call a honeypot. You put something there that someone's going to want to steal. They, they penetrate your network. You essentially watched them. You lead... It's going to be harmless, but you use this as an opportunity to basically analyze the malware and figure out how to defeat it and that's probably okay. You know, I think you're seeing a lot of frustration about this, but it's a little bit like. I remember when I started out in law school, one of the high points you get is someone comes into your house and you've got a shotgun rigged up. So as soon as they opened the door, it blows their brains out. And most at least most kids in my class when we started were like, yeah, why not let's do that. But once you through it and say maybe that's an extreme reaction because it's capital punishment for burglary. So the intuition is here that you have to think them through a little more. Yes. 

Question Five:      I know we're short on time, but I was wondering if you could speak a little bit to how the social media screening by the government intersects with the Syrian refugee crisis. The current Secretary was quoted as saying something like, you can't possibly efficiently screen people's social media right now. I was wondering if you could tell me a little bit more about why that is, and how that can change so that we're able to open our borders to people who are innocent and need our help.

Chertoff:           Well, first, what you would do from a legal standpoint is get their consent: if you want to come to the US as a refugee, you have to consent to our review of your social media. Then the only really practical limitation would be making sure that we are able to access the social media and know where you've been. The metadata is really quite good. I'm not telling you that nobody can evade it; you know, a really highly trained operative can probably leave very few digital footprints, but very few people actually have that level of training. So I think it's actually pretty easy from a legal standpoint, and of course, once you're asking for the consent, if someone's done something on social media, they would probably abandon their application. So, you know, there's no risk-free thing, but I think the refugee stuff is pretty low-risk compared to other things. Well, we have time for one more question. Yeah?

Question Six:       I have a question about how relevant our privacy concerns really should be in the upcoming 10 to 20 years. At least for me, I have an insecure kind of connection, so I have to use a VPN, for example. So if we get to a point where people are so paranoid that they always access the internet over Tor and VPNs and they use aliases, and you don't know what their IP address is, does any of this really matter at that point? 

Chertoff:           This raises a couple of interesting issues. One is: to what extent does privacy cover identity, or should you be required to identify yourself at certain times? And I actually think there is a slight difference. If you want to search a site, you know, as a member of the general public, maybe having anonymity is a reasonable thing. But if you want to communicate with me, if you want to transact with me, I have a right to know who you are, and if you're not prepared to identify yourself accurately, I should decline the transaction. So that's one set of issues. 

Chertoff:           The other thing you mentioned raising questions were if the people at some point will begin to opt out of the system or refuse to transact because they get nervous enough about their security or their privacy that they don't want to. You know, my general, and if I could leave you with a piece of advice is, to be mindful about your digital footprint. I mean, I'm not at the extreme of like totally off the grid, but every time I'm asked for an email address or I'm asked to, you know, part with some of my data. I always ask myself, how does this benefit me versus how does it benefit somebody else? And because I think the more you minimize your footprint, the more control you have over your data. And in the end that's what it's really about. You're controlling aspects of your personality, your interests, your behavior, and those are things you produce. 

Chertoff:           You should have a pretty fair amount of ownership over. Although it can't be perfectly. Finally, I will leave you guys with this tour, which is as you, as you guys know is the routers they add an anonymizer which is actually used in large part by the dark web for nefarious purposes. But interestingly it was originally funded and developed by the US Navy research wing because it was viewed as a good way to help people protect themselves. So this is a great area where, you know, there's always two sides to this and, and the question is not to create a law that's so clumsy that it destroys the benefit simply to prevent the downside or reverse, you know, allows the benefit but creates a huge downside. So you guys are going to have a lot of fun with this and good luck! 

Announcer:          This audio file is a production of the University of Chicago Law School. Visit us on the web at www.law.uchicago.edu.