Can the Law Keep Up with Robots?

Preliminary tests of the Google self-driving car suggest that it drives more safely than the average person. But even with the latest safety measures, some accidents will be unavoidable, forcing the robotic car to choose exactly what to strike. According to Edward Walters, ’96, the law, which often struggles to keep up with the rapid pace of advancing technology, currently offers no guidance in that situation.

Walters, the CEO and co-founder of Fastcase and an adjunct professor of law at Georgetown University Law Center, where he teaches The Law of Robots, visited the Law School to discuss how the law evolves as new technology emerges, using robots as his central example. The talk, sponsored by the Law and Technology Society, examined why new technology often poses questions that the law isn’t equipped to answer.

“The car has to make a decision whether to strike a pedestrian, hit a school bus full of kids, or swerve off the road and kill you,” Walters said. Even though these cars are already allowed on the road in many states, Google alone decides what path the self-driving car will take.

The absence of legislation written for the self-driving car illustrates a crucial characteristic of the relationship between technology and the law. With each technological revolution, be it industrial, informational, or robotic, the law usually trails behind; in the case of the Industrial Revolution, it lagged by about 75 to 100 years.

Walters explained this phenomenon using Moore’s Law, the observation that the number of transistors on an integrated circuit doubles roughly every two years, making the growth in computing power exponential rather than linear. It follows that the capabilities of most electronic devices are increasing so quickly that “the law is constantly chasing technology as it advances.”
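In rough terms (a back-of-the-envelope sketch, not figures from the talk), that doubling relation can be written as

\[
N(t) = N_0 \cdot 2^{t/2}
\]

where N_0 is the transistor count today and t is measured in years. Twenty years of doubling every two years multiplies the count by 2^{10}, roughly a thousandfold, while a body of law revised on a timescale of decades barely moves.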

Some believe that we shouldn’t create new laws for technology at all and should instead rely on the common law. Walters recalled that Senior Lecturer Frank Easterbrook, a judge on the Seventh Circuit Court of Appeals, argued at the University of Chicago’s 1995 Law of Cyberspace conference that there shouldn’t be a “law of cyberspace” any more than there is a “law of the horse”: a common law system must apply existing law to new facts. At the same conference, Lawrence Lessig, then a professor at the Law School, argued the opposite. His view was that many technological advances change the underlying assumptions of the law, and that applying existing common law would upset important values of the legal system.

Walters believes that it is “not all one or the other” when it comes to regulating new technology. In his Law of Robots course, he teaches that there are some cases where the common law can be applied and others where it would yield results that society would find objectionable. There is no formula for determining whether the common law will work, but it helps to ask: “When you apply the existing common law, does the outcome comport with our notions of fairness?”

While the discussion of law and robots may appear to be the stuff of the far future, Walters reminded his listeners that we need to consider regulating this technology right now.

“Law of robots sounds like fiction,” he said, “but law of robots is not our future, it is our present.”

Examples are everywhere. Walters pointed out that robots already help manufacture cars, perform surgeries, trade stocks, and police borders. Machines do this work almost invisibly, yet little thought has been given to how, and whether, existing law can be applied to increasingly autonomous machines.

At the end of the talk, students asked Walters questions, many of which revolved around the law of self-driving cars. One student wondered whether excessive regulation of these vehicles could reduce their efficiency. Efficiency is relative, Walters suggested, answering with a counter-question:

“If you knew that Google programmed your self-driving car to save the most lives over saving yours, would you buy that car?”