Emeritus Luncheon 2022: Omri Ben-Shahar, “Personalized Law”

Omri Ben-Shahar spoke about his recent co-authored book, Personalized Law. The book presents a vision of a brave new world, where each person is bound by their own personally tailored law. “Reasonable person” standards would be replaced by a multitude of personalized commands, each individual with their own “reasonable you” rule. Skilled doctors would be held to higher standards of care, the most vulnerable consumers and employees would receive stronger protections, age restrictions for driving or for the consumption of alcohol would vary according to the recklessness risk that each person poses, and borrowers would be entitled to personalized loan disclosures tailored to their unique needs and delivered in a format fitting their mental capacity. Should we welcome this transformation of the law? Does personalized law harbor a utopian promise, or would it produce alienation, demoralization, and discrimination? The lecture asks how personalized law can be designed to deliver precision and justice, and what pitfalls the regime would have to prudently avoid.

This event was recorded on June 8, 2022.

Transcript

Thomas J. Miles:

Welcome to our 2022 emeritus luncheon. It’s wonderful to welcome you. It’s terrific that we can gather. This is our first in-person emeritus luncheon since 2019. So, it’s very special to be with you.

And I send you all my best greetings from the Law School, from our faculty. This past Saturday, we held our graduation ceremony, our hooding ceremony. Also, for the first time since 2019, we held that ceremony in Rockefeller Chapel, and you would be enormously proud of the class we graduated, the Class of 2022. They accomplished a great deal, of course, during their time in Law School. They continued our tradition of rigorous and challenging education, despite all that they experienced over the past few years with the pandemic. And now they’re already back at work studying for the bar, preparing to follow in your footsteps as graduates of the Law School. And our faculty continue to thrive.

We have a number of new faculty members who are going to join us this summer, and you would be very proud of how dedicated our faculty have been to teaching our students during this period—and, of course, of their great scholarship, their ideas that really light up the world and give us new ways of understanding law and legal institutions. And the best way for me to convince you of the excellence of our faculty is to have a faculty member speak to you about their ideas, their pushing of the boundaries of knowledge, their willingness to challenge conventional wisdom. And I’m very pleased to welcome today’s emeritus luncheon speaker, Omri Ben-Shahar.

Professor Ben-Shahar is the Leo and Eileen Herzel Professor of Law. He is also the Kearney Director of the Coase-Sandor Institute for Law and Economics. Now, as you know, in an introduction like this, it’s very standard for a dean to recite the accomplishments, the credentials, the CV of a faculty member. So, I could go on and tell you about how Professor Ben-Shahar received his BA and his LLB from Hebrew University, about his SJD and his PhD in economics from Harvard. I could tell you about his time teaching at the University of Michigan before he joined our faculty.

But what I’d really like you to know about Professor Ben-Shahar is three things. One is that he is an exquisite, prolific, and creative scholar. Two, that he is a superb teacher, both of our students and of other scholars around the world. And third, he is a scholar whose ideas are influencing the practice of law. Professor Ben-Shahar is a leading scholar of contract law. He has written numerous books and scores of articles on contracts and related topics: fault in contract law, gap filling, the right to withdraw, remedies, boilerplate, and so much more. And in recent years, he has focused his attention on consumer contracts. One of his recent books is entitled More Than You Wanted to Know: The Failure of Mandated Disclosure. Today he’s going to speak about Personalized Law: Different Rules for Different People. This book is co-authored with our friend Ariel Porat, who is the former Fischel-Neil Distinguished Visiting Professor at the Law School and is currently the President of Tel Aviv University.

Now, as you will see, Professor Ben-Shahar’s work is deeply interdisciplinary. In addition to being a top-flight legal theorist, he has been a pioneer in bringing experimental methods into our understanding of law, specifically contract law. Professor Ben-Shahar is also a much-admired teacher. His courses on contract law, trademark law, food regulation, and more are highly popular. I regularly hear students rave about his teaching, and his teaching is not limited to our students. As Director of the Coase-Sandor Institute, he has for many years led our faculty in teaching other law professors from around the world who come to the Law School to study economic analysis of law.

And he has also taken our faculty all over the globe, from Paris to Brazil to China, to bring the ideas of our Law School to the world. Finally, Professor Ben-Shahar engages directly with the legal profession. He has been the co-reporter for the American Law Institute’s first Restatement of Consumer Contracts. As we all know, these restatements are enormously influential, both on courts and on the legal profession. Professor Ben-Shahar led this effort, and after a decade of work, the restatement was approved just last month. So there you know something about him that’s more than just his CV and where he went to college. It’s a great pleasure to welcome Professor Omri Ben-Shahar.

Omri Ben-Shahar:

Pardon me. I know I need to turn this on. Thank you. Thank you for more than you wanted to know about me. Don’t believe it all. Don’t believe anything. But it’s nice to hear such an introduction and to acquire some capital before I destroy it all with the ideas that I’m going to bounce off you. This is a novel thought experiment. I hope that, if anything, it will elicit some reactions, and I’d love to hear your comments and questions. One of the big things happening in our society in the past decade is the emergence of artificial intelligence in every sector of society. We see it on the internet. We see it with big tech companies. We see it with insurance companies and elsewhere. And one of the main challenges for law today is how to tame artificial intelligence so that it will do the good that it does without the harms that we fear.

It’s an enormous area of legal study, practice, and judicial development. I’m not going to talk about that. I want to reverse that direction. I want to ask how law can join the party, how law can be fueled by AI. Why does it have to be just medicine, education, marketing, insurance? They all revolutionized the way they do business with big data, with algorithms that find patterns in the data and show how to do the business better. Can the law do the same? I’m going to briefly suggest two areas that I have not studied myself but have been inspired by, and then suggest where I have taken it. One thing that artificial intelligence can do is, pardon me, replace lawyers. It can tell clients what is going to happen if they go to court. What will be the likely outcome of the dispute? You were injured, you were fired, you were discriminated against. Based on an analysis of the facts you show us—and by the way, this can be done in a matter of seconds: just fill in a few lines in a software program—we will give you a prediction of the likely outcome of the case with 80 to 95% accuracy. Now take that and settle.

This, by the way, is already happening. This is not science fiction. There are services that have trained algorithms. One of my students, a graduate student from Korea, trained an algorithm to predict how the Trademark Trial and Appeal Board will decide similarity cases: when there’s a registered mark and an applicant with a similar name—say, Honda and Hyundai—is it too similar to deny registration? He trained an algorithm, believe it or not, to measure similarity and to make predictions about what the board will decide, and it reaches close to 90% accuracy. These are usually very expensive suits. And imagine—when I showed the demonstration to the class—here’s a new case, filed this week; press a button, and within a split second you get a 90%-accurate prediction of the outcome of the dispute. It’s mind-boggling.
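
To make that concrete, here is a minimal, hypothetical sketch of how such a classifier might be built. The features, training cases, and marks below are all invented; a real system would use far richer features (phonetics, visual impression, overlap of goods and services) and thousands of actual board decisions.

```python
from difflib import SequenceMatcher

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def mark_features(mark_a: str, mark_b: str) -> list[float]:
    """Crude similarity features between two marks."""
    a, b = mark_a.lower(), mark_b.lower()
    return [
        SequenceMatcher(None, a, b).ratio(),          # edit-distance similarity
        float(a[0] == b[0]),                          # same first letter
        abs(len(a) - len(b)) / max(len(a), len(b)),   # relative length gap
    ]

# Invented examples: (registered mark, applied-for mark, board found similar?)
past_cases = [
    ("Adidas", "Adidos", 1), ("Nikee", "Nike", 1),
    ("Starbux", "Starbucks", 1), ("Googol", "Google", 1),
    ("Honda", "Hyundai", 0), ("Delta", "Zebra", 0),
    ("Apple", "Orange", 0), ("Canon", "Xerox", 0),
]
X = [mark_features(a, b) for a, b, _ in past_cases]
y = [label for *_, label in past_cases]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# A new dispute, predicted "in a split second":
print("P(similar):", model.predict_proba([mark_features("Koka-Kola", "Coca-Cola")])[0, 1])
```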

So that’s one thing that I’m not going to talk about. The other thing I’m not going to talk about is, pardon me even more, replacing judges. Not entirely—but at least in several tasks that are particularly suited for algorithms, namely the part of the decision that requires prediction. So far, the examples that have been shown occur primarily in criminal justice: say the bail decision, the parole decision, the sentencing decision. They always require some kind of assessment of the risk that this particular defendant poses. Should they be released on bail? The Constitution says they must be, unless there is a predicted risk of reoffending or of flight. Well, judges make their best guesses. Sometimes they’re right and sometimes they’re wrong.

So, take a million past cases and show them to the algorithm. The algorithm will see the features of each case, and it will begin to see connections that the human mind cannot see between the features of the case, what we know about the defendant, and the outcome. Then it can start making predictions on cases it has not seen. In a study done by colleagues at the University of Chicago’s business school, they trained an algorithm and ran a tournament between the algorithm and the judges—the judges did not realize they were part of a tournament. They took 25,000 cases in which the court had made the decisions, and they asked the algorithm: how would you decide these cases if you had to reach, let’s say, the same rate of arrests?

Well, it turns out that the algorithm would release more people than judges, and despite that fact, there would be less crime by those released, because the algorithm finds the less risky people. And maybe most important, at least to me, it would release more members of minority groups, especially Black defendants. It’s one of these mind-boggling reforms that is a win-win-win: less jail, less crime, and less discrimination, without any cost—other than, I would say, maybe to the self-esteem of judges. When I speak about that with judges—just two weeks ago I gave a Zoom lecture to judges on this—it didn’t go so well. That’s something, by the way, I’ve noticed reading about the history of technological development: people who are told that their particular expertise can be done by some technology are usually on the front line of rejecting it or finding reasons not to adopt it.
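
The tournament logic can be shown in a stylized way. The sketch below uses entirely simulated data—risks, outcomes, and judge behavior are all made up, and the judges here decide at random, which exaggerates the gap—but it captures the mechanics: release the same number of people the judges released, just pick the lowest-risk ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
risk = rng.uniform(0, 1, n)                 # model's predicted re-offense risk
reoffend = rng.uniform(0, 1, n) < risk      # simulated true outcomes

judge_release = rng.uniform(0, 1, n) < 0.7  # judges release ~70%, at random here

# The algorithm releases the same number of people, but picks the lowest risks.
k = int(judge_release.sum())
algo_release = np.argsort(risk)[:k]

print("judges' re-offense rate:", reoffend[judge_release].mean())
print("algorithm re-offense rate:", reoffend[algo_release].mean())
```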

So now I’m talking to lawyers—let’s call them the way they call them in China: law workers—about the fact that their work can be done by artificial intelligence. And now I come to my own work with Ariel Porat, some of which, by the way, was inspired by work that another colleague of ours, Lior Strahilevitz, has also done. We call it Personalized Law. It’s a book that we published just this past year. And the idea here is to use artificial intelligence to tailor the actual legal rules.

What do I mean by that? This is a bit more ambitious, so I’ll try to give a few examples. Imagine duties of care. How safely should you drive when you are on a country highway or on a city road? Currently the law provides a sign—45 miles an hour, or something like that. It’s a uniform obligation, one size fits all. But does it make sense? Do all drivers create the same risk? Do they have the same eyesight, the same ability to react? People have different skills, physical and mental, and they create different risks. Wouldn’t it make more sense to demand that riskier people drive slower, and to give people who are better at controlling cars a little more freedom? They don’t need to sit at 25 miles an hour on an open street.

That’s one way to think about what we’re trying to do here: eliminate the one-size-fits-all speed limit and make it individualized, based on what we know about you. Now, this is not science fiction. Insurance companies already know this, especially if you are one of the many people who have opted in to having a recording device in your car that records how you drive: how you accelerate, how many sharp turns you take, hard braking, nighttime driving—things that are risky. Insurers use it for pricing.
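
Here is a toy sketch of what such a score, and a personalized limit derived from it, might look like. The event weights and the score-to-limit mapping are invented for illustration; they are not any insurer’s or regulator’s actual formula.

```python
# Toy telematics-style safety score and a personalized speed cap derived from it.
def safety_score(hard_brakes: int, rapid_accels: int,
                 night_miles: float, total_miles: float) -> float:
    """Return a 0-100 score; higher means safer driving."""
    if total_miles == 0:
        return 100.0
    events_per_100mi = (hard_brakes * 2.0 + rapid_accels * 1.5) * 100.0 / total_miles
    night_share_penalty = 10.0 * night_miles / total_miles
    return max(0.0, 100.0 - events_per_100mi - night_share_penalty)

def personalized_limit(score: float, posted_limit: int) -> int:
    """Nudge the posted limit up or down based on the driver's score."""
    if score >= 90:
        return posted_limit + 5    # safer drivers get a little more freedom
    if score < 60:
        return posted_limit - 10   # riskier drivers must drive slower
    return posted_limit

score = safety_score(hard_brakes=12, rapid_accels=8,
                     night_miles=150, total_miles=1_000)
print(round(score, 1), personalized_limit(score, posted_limit=45))  # 94.9 50
```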

And by the way, it has an enormous effect on how people drive. Recent estimates are that it reduces fatal accidents by 30%. People drive better because they want to qualify for a lower premium; they want to get a better score. But the point is that the score is personalized. My wife and I have a recording device in our car, built in by the manufacturer, and we have our safety score. At some point Sarah said, I want to have my own safety score—you bring it down, I bring it up, you know? And so we decoupled it. This is just to begin to give a little intuition about what it means to have a personalized duty of care. Now shift from duties to rights. Think about consumer protection rights: the right to withdraw from a transaction, the right to get a more extended warranty.

These are the various protections that we give people because we worry that they will have regrets or make bad decisions, especially on expensive things—taking out a mortgage, making a retail installment purchase, things like that. Our protective rights, which in progressive states are fairly generous, exist all over the world, and in every jurisdiction, at every time in history, there is one thing that unites them: they are uniform. Everybody gets the same right. It is this uniformity that our book challenges: why should they be uniform? Uniformity is, by the way, a myth. It’s what the law says, but we know that people access their protections at different rates. At this point, in almost every talk of mine, I like to quote a sentence from the great French scholar and writer Anatole France that has really shaped my thinking about many things.

What Anatole France said, sardonically or sarcastically, is that the law, in its majestic equality, forbids the rich as well as the poor to sleep under bridges, to beg in the streets, or to steal bread. Uniform law, in its majestic equality. So how about making the law less uniform? How about giving more protections to those who need them most and fewer to those who need them least? It’s another thought experiment. Maybe the right to withdraw from a door-to-door sale should not be a uniform 72 hours for everyone, because those who are the targets of these sometimes abusive and predatory techniques may need more than 72 hours to realize that they’ve made a really bad mistake buying a $2,000 vacuum cleaner for a little trailer home that doesn’t even have a rug.

Whatever it is, there are protections that need to be dialed up, but maybe that can be done in an individualized way. One more example—an example that shaped a lot of American equal protection constitutional doctrine: the age of capacity. When do young people reach a level of maturity to, let’s say, drive or purchase alcohol? Once again, the rule is uniform: let’s say 17 in many states for driving, 21 for alcohol. You may recall that 50 years ago the state of Oklahoma experimented with decoupling it: men could buy alcohol at 21, women at 18. Why? Because all the data showed that the vast majority of drunk driving and dangerous driving under the effects of alcohol—something like 93%—involved males, not females.

So why apply the same restriction to women? Well, the case reached the Supreme Court—the famous case was Craig v. Boren—and the Court struck down this rule on Fourteenth Amendment equal protection grounds: you can’t treat people differently on the basis of sex. It’s a kind of suspect classification. Now move forward to the future and think about an age of capacity that is not one for men and one for women, but different for each person, based on the data that the algorithm finds predictive of dangerous driving or dangerous behavior.

Some people could purchase alcohol at the age of 16 or 17. Why not? They sit and play on their computers all day anyway, or they study because they want to go to Princeton. Others should wait much longer, because even at 21 they are not ready. Some people might be allowed at, let’s say, 19—I’m just giving an example—but then the data will show that this needs to be adjusted upward, and their right can be denied until they reach that level of maturity.

The level of maturity that’s required—the threshold that underlies the algorithm’s code—might be uniform, but it will play out differently for different people. Now, would that satisfy Supreme Court scrutiny? I think it would. It would no longer treat people just on the basis of who you are—your race, gender, or religion. There might be some correlation. It might be that women, on average, will have a lower age of capacity for purchasing alcohol—but not because they’re women; because of other things, let’s say the behavior we see them exhibit on social media. You see the point. Begin to think about various examples.
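
In schematic form, the “uniform threshold, personalized outcome” structure might look like the sketch below. The maturity model is a pure stand-in: the cutoff, the coefficient, and the risk signal are all invented.

```python
# A uniform maturity cutoff written into the code, crossed at different ages.
MATURITY_CUTOFF = 16.0   # the uniform part of the rule

def predicted_maturity(age: int, risk_signal: float) -> float:
    """Hypothetical algorithmic maturity estimate (higher = more mature).
    `risk_signal` (0-1) stands in for whatever behavioral data a real
    model would use, e.g. the social-media patterns mentioned in the talk."""
    return age - 10.0 * risk_signal

def may_purchase_alcohol(age: int, risk_signal: float) -> bool:
    return predicted_maturity(age, risk_signal) >= MATURITY_CUTOFF

print(may_purchase_alcohol(age=17, risk_signal=0.05))  # True: low-risk 17-year-old
print(may_purchase_alcohol(age=21, risk_signal=0.70))  # False: not ready even at 21
```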

The example that inspired Ariel in our early conversations about this was intestate succession rules. What happens if you die without a will? State law determines the allocation, and it is, again, one-size-fits-all. Fair? I don’t know. I think in Illinois it’s 50% to the surviving spouse and 50% divided among the children. But we know that people have different preferences. How do we know? Look at their wills; do a survey. We know, for example, that in their wills, males leave about 80% to the surviving spouse, whereas females leave only about 40%. The fascinating question is why. And you can see how this plays out. We know that age, level of wealth, whether it’s a first or second marriage, how many children—many sociodemographic factors, as well as others like education and cognition—affect people’s choices. The law could fairly easily absorb an algorithm that predicts what you would want, what you would write if you were to write a will.

And therefore the rule can be, instead of one size fits all, your personalized allocation. It could be posted for you to see on an imaginary, hypothetical inheritance.gov website: just put in your Social Security number and it pops out. And if you don’t like it, you can change it—you can opt out of it or write a will. You could also see it change: another child is born to the family, and maybe the allocation changes automatically, reflecting how people’s preferences change. So that’s the idea. The book marches down one area of the law after another to illustrate how this would happen—again, as a thought experiment. There are a lot of problems to work out. Very big questions.
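
A hypothetical sketch of such a default, with invented coefficients (the 80% and 40% figures from the talk are averages observed in wills, used here only as toy base rates):

```python
# Personalized intestacy: predict the split the decedent would likely have
# chosen, and apply it only in the absence of an actual will.
def predicted_spouse_share(sex: str, n_children: int, second_marriage: bool) -> float:
    """Predicted fraction left to the surviving spouse (toy linear model)."""
    share = 0.80 if sex == "M" else 0.40   # toy base rates from the talk
    share -= 0.05 * n_children             # more children, larger child share
    share -= 0.15 if second_marriage else 0.0
    return round(min(1.0, max(0.0, share)), 2)

def intestate_allocation(decedent: dict, will: dict | None = None) -> dict:
    """The personalized default yields to an actual will; it could also be
    recomputed automatically when circumstances (e.g. children) change."""
    if will is not None:
        return will
    s = predicted_spouse_share(**decedent)
    return {"spouse": s, "children": 1.0 - s}

print(intestate_allocation({"sex": "F", "n_children": 2, "second_marriage": False}))
# {'spouse': 0.3, 'children': 0.7}
```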

Not the least of them is: how does this align with our notions of equality, equal protection, and justice? Is it okay to treat people differently? We know that people are treated differently—some people go to jail and others don’t—because there is a relevant factor that requires putting some people in jail and not others. Our book suggests that we should look at more relevant factors rather than fewer. That’s the gist of it. But there is something strongly counterintuitive here. Anyone who has traveled around the world, or around the US: if you go to Santiago de Chile and stand outside—I’m just guessing here—the old courthouse, you will see a statue of Justitia, the goddess of justice. And what will the statue show? Exactly. She will be holding the scales of justice in one hand, but her eyes will be covered. She does not see the people. Personalized law requires the removal of this blindfold—not just that, but giving Justitia a laser-sharp ability to see into each person.

Is that okay? I was always puzzled by the blindfold, especially in light of what I read to you from Anatole France. What does it mean not to see? I’m sure it was, in part, an expression of the sentiment that the law should not give preferential treatment to the elites. You don’t see whether a person is poor or whether a person is connected, because the worry is that in the application of the rules there would be some kind of arbitrary or preferential treatment.

I’m not even talking here about the application of the rules; I’m talking about the rules themselves. The judge would be handcuffed to apply rules dictated by regulation that is fueled by this data, so that when one party comes, they get a different treatment—a different set of consumer protections, or duties or standards of care—than other parties. The other big question that I’ll just flag, and maybe you’ll want to talk about it more, is: where will all this data come from, and would you really want the government to know all this about us in order to issue such rules? I don’t have an answer to that. I mean, I have all sorts of speculation about where the data could come from, but not to the question of whether you would really want that. It worries me more than the phenomenon of big tech knowing everything about us. Big tech, big brother—I’d go with big tech.

On the other hand, consider the value that can be generated by personalized rules: for example, the reduction of accidents. Forty thousand people die every year in road accidents in the US, more than a million around the world. We already know that the numbers can be significantly reduced by personalized treatments like those applied by insurance companies. If we bolster that—if we put that kind of personalization on steroids and also do it through accident law and road safety law—what if we could reduce that number by half? Twenty thousand lives saved. Would that make us think twice before saying, no, I don’t trust the government with the data? Of course, it’s a long way to proper implementation of these models, and on the way there it can be distorted in so many ways.

So this is not a proposal that is ready for reform. If you ask me what to do tomorrow morning if I were in charge—and by the way, I was invited to Berlin to sit with a German consumer protection agency that wanted new ideas; the Germans are very much at the forefront of trying to innovate in law—I said, let’s do personalization, but start with two things. One is default rules: not everybody should get the same implied warranties. Give people different defaults so that they fit their needs better. The other suggestion is to use personalized rules in the warnings and disclosures that people receive.

Now, Dean Miles already mentioned that I’m the world’s greatest skeptic on warnings and disclosures. I wrote the book he mentioned, More Than You Wanted to Know, arguing that when you try to help consumers, patients, people, by giving them forms with warnings and information to help them make a better decision—about their loan, their investment, their medical treatment, their privacy, their insurance, whatever it is—they don’t use it. They don’t read it. They don’t know what to do with it. They can’t understand it. It doesn’t change anything. And indeed, many places that hand these things out under legal mandate have a big recycling bin right next to the counter for people to put them straight in. Next time you check into a clinic to see a doctor and get something to sign, notice those full bins. It doesn’t work. But wait—maybe it can be done a little better. Rather than giving everyone the same long disclosures, give different people different warnings.

When you open a drug package, instead of unfolding the insert—fold after fold—and seeing 10,000 words in tiny little font, all you need is the two or three things that are relevant to you. And every person has something different: different contraindications, different things to worry about. A personalized label. Maybe that would have a bit more of an effect. So if I were to start with personalized law, I would go to areas of “law lite,” where law does not really impose rules but gives all sorts of soft interventions like disclosures and default rules. But otherwise, I can see here an area of growth, at least in one direction, in which artificial intelligence can replace human intuition in ways that could—at least in theory; we haven’t yet tried them in practice—increase our thriving and wellbeing.
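
The personalized-label idea is easy to sketch. The warnings and the patient profile below are invented; a real label would be driven by a clinical knowledge base, not a hand-written dictionary.

```python
# Filter a long list of warnings down to the few that apply to this patient.
WARNINGS = {
    "pregnancy":      "Do not take if pregnant or nursing.",
    "blood_thinners": "May interact with anticoagulants; consult your doctor.",
    "kidney_disease": "Reduce dose if you have impaired kidney function.",
    "alcohol":        "Avoid alcohol while taking this medication.",
}

def personalized_label(patient: dict) -> list[str]:
    """Return only the warnings relevant to this patient's profile."""
    return [text for condition, text in WARNINGS.items() if patient.get(condition)]

patient = {"pregnancy": False, "blood_thinners": True, "kidney_disease": True}
for line in personalized_label(patient):
    print("WARNING:", line)
```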

Thank you. I left some extra time for questions. I hope there are some; if not, I’m always happy to continue talking about this. But please, let’s hear from you.

Question:

Could you please comment on the likelihood of engendering a powerful political backlash if people begin to perceive that they’re being treated unequally on the basis of criteria that they either don’t understand or don’t accept as legitimate? I’m thinking, for example, of when you referenced capacity: people may not fully understand or accept the validity of those sorts of distinctions. So how likely is it that these kinds of reforms would gain acceptance?

Omri Ben-Shahar:

Yeah, that’s a fantastic question. I think that early on there will be widespread rejection, as we see in almost every other area. I’m working right now on a different project, trying to understand the popular rejection of innovations that use AI in other areas. I mentioned the recording devices in cars that give you a safety score, change the way you drive, and reduce your premium—and think how much more safety we get. California prohibits this. Most people say, we are not going to have the insurer know so much about us—as if they don’t already know. There is this sense that we live in a surveillance society, and the New York Times writes a lot about that.

People fear this future. I fear it. Nevertheless, when things become part of life, we get used to them. My little girl is still afraid of toilets that auto-flush—is there somebody there? Eventually we get used to things. Now I’ll say one thing, a bit more systematically, about the kind of sentiments people will have seeing that they’re treated differently. This is more than just technology and data governing our lives; this is different treatment. The thing is that in some areas your neighbor, or your friend, or your spouse, or that other person is treated better, more leniently, and in other areas you are treated more leniently. You might be told that you cannot drive as fast as that other person; on the other hand, you’re getting better consumer protection, or your age of capacity for something is higher relative to theirs. There is a sense in which, eventually, it will be hard to know whether you are overall a winner or a loser.

And to the extent that thriving in society improves because we give people treatments that really fit them, rather than restraining them in ways that are not suitable—we will have more liberties and fewer duties, tailored more narrowly to fit. People could end up celebrating this when we get to the end of the road. My vision is fueled primarily by thinking not about efficiency—oh, I want a great technocratic, German kind of well-run system—but about individual liberty. I want us to be ready to abandon what I regard as tired myths of formal equality, in which we all have to be treated like tin soldiers, exactly the same, and instead be treated as individuals—not as a population, but as individuals.

Question:

If I understand your theory correctly, it probably renders my entire first-year torts class with Professor Epstein irrelevant. Have you had a chance to review your theory with Professor Epstein? I would pay money to see you two debate it.

Omri Ben-Shahar:

Yeah. Our debates at the faculty roundtable have recently been less and less friendly. But I’m a huge admirer of Professor Epstein. Much of what brought me from Israel—as a law student who was never exposed to these kinds of ideas—to eventually be here at the University of Chicago were the writings of Richard Epstein and Dick Posner on tort law. They blew my mind; they were so different from how I was taught tort law. And while I have not discussed this particular scheme with Richard Epstein, I think there’s something in it that would be highly appealing to a libertarian: it allows us to dial legal interventions down or up—especially mandates on our conduct—based on rational, data-fueled planning.

It is no longer based on ideology. You say: if the goal of tort law is to prevent accidents at reasonable cost, let’s look at what it costs you versus what it costs me, and how great the risk you pose is relative to mine. Why should two people be subject to the same limit if one of them creates a risk and the other does not? I just can’t imagine that Epstein would say, oh no, no, let’s go back to one-size-fits-all. Now, I think that part of what happens when you talk with people who have a strong ideological background—I don’t have a strong ideological background—is that they want to know how it will come down on the bottom line: bigger government or smaller government? I think Richard Epstein would be chilled by the government becoming so big in terms of its database, its knowledge. And I would tell him: but the government can become so small in the actual rules that it implements. They don’t have to be so draconian or so stiff. So you have a tension here that I’m not sure how I would resolve.

Question:

Just a quick follow-up: in the insurance world, on the regulatory side, there’s an enormous amount of concern about the transparency of models that predict price, and about their social effect. Have you thought at all about the transparency of your algorithms, and—if you get into the consent of the governed—what the implications of that are?

Omri Ben-Shahar:

Yeah. A big issue in all uses of algorithms is transparency. There are laws now being passed around the world—the US will probably join that movement—mandating some kind of transparency, of explainability. At least what I see is the world surrendering to AI and saying: but at least we want to understand it. I’m not a computer science person. I don’t understand the code behind that bail example I gave you. But what I know is that when we look at how judges decide bail, we don’t know exactly what’s in their minds—why this person, once they look them up and down, seems dangerous. With the algorithm, you know exactly: there will be 17 factors, and it puts a weight of 3.7% on this and 11.2% on that. You know exactly what it is.

And if we don’t like one of the factors—for example, education: the algorithm realizes that people with lower education are maybe riskier—you can delete it from the code. It’s not that simple; the algorithm will find proxies for it, it will try to trick you out of tricking it, but there are statistical ways to resolve that. I think the nice thing about these models that use AI to replace law workers is that they’re so transparent and so easily modifiable. Try to change the way people work, the way judges decide. Tell judges, we’ve seen that there is some discrimination going on in this court, and the judge will say: yeah, I realize there is—it’s not me, it’s the others. It’s very hard for us to change, very hard for me to change the things people tell me about—oh, you know, students feel that when you say this, they are a little--. How do I change? Well, with an algorithm, you just change the code.
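
Here is a sketch of that inspect-and-modify transparency, on synthetic data with invented factor names. The caveat above applies: simply deleting a factor is not enough, because the remaining ones can proxy for it, which is why real audits lean on statistical tests rather than deletion alone.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
FACTORS = ["prior_arrests", "age", "education_years", "employment"]
X = rng.normal(size=(500, len(FACTORS)))
# Synthetic outcomes driven mostly by the first two factors.
y = (1.2 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(size=500)) > 0

model = LogisticRegression().fit(X, y)
for name, weight in zip(FACTORS, model.coef_[0]):
    print(f"{name:>16}: {weight:+.2f}")   # every factor's weight is inspectable

# "Delete the code": retrain without the contested factor.
keep = [i for i, f in enumerate(FACTORS) if f != "education_years"]
model_without = LogisticRegression().fit(X[:, keep], y)
```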

Question:

So I guess my question is: why isn’t this a call for radical deregulation—let the private sector, in various forms of interaction, govern these interactions any way it wants? If they want to put in algorithms, they have the freedom to do that. The focus on government involvement seems like a classic example of Demsetz’s nirvana fallacy: that only if the government is doing it will the algorithms be great, but if you leave it to the private sector, they’ll be terrible—when in fact, I think, probably the opposite is true.

Omri Ben-Shahar:

So I don’t particularly want to take a stand on that. I have these disputes with Richard Epstein. I think, for example, that road safety needs to be regulated by governments—unless all roads are privatized, and then the private owners of the roads can find the optimal rules. I feel that there are some public goods that the government has to generate, and therefore also regulate. I think that criminal law has to be done by the government. So I’m willing to be agnostic on the set of laws that the government feels it needs to provide. All I’m saying is: once you do that, then—maybe for the first time in the history of law—stop thinking about law as having to be uniform, and start using the data revolution to separate people. Now, if we don’t have a lot of data, then create just crude separations. For the duty of care in tort law, maybe not every person should have their own personalized speed limit; maybe there should be three steps—low, medium, high—into which we can more crudely sort people. But that’s the idea: wherever you see law as necessary for social order, here is a different way to do it.
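
The crude-separation fallback is about as simple as personalization gets; a toy version, with invented thresholds and limits:

```python
# Sort drivers into three risk tiers instead of fully individual limits.
def tiered_speed_limit(risk_score: float) -> int:
    """Map a 0-1 risk score to one of three speed limits (mph)."""
    if risk_score < 0.33:
        return 55    # low risk
    if risk_score < 0.66:
        return 45    # medium risk
    return 35        # high risk

print([tiered_speed_limit(r) for r in (0.1, 0.5, 0.9)])  # [55, 45, 35]
```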

Question:

Well, thank you, Professor, for your very thought-provoking remarks. You used the term proxy a moment ago, and it occurred to me when you were speaking earlier about substituting a different consideration, such as maturity—which sounds like a proxy—for a rigid age of capacity. And I believe it was in the context of the prohibition on employers questioning female applicants about their family plans that Professor Posner made what was perhaps an elementary observation to our torts class: certain types of discrimination may be rational, but not lawful. I can imagine a human resources manager listening to your remarks and saying, that’s a great idea—I’ll develop an algorithm that just takes into account the applicant’s commitment to long-term employment. That sounds like a plaintiff lawyer’s bonanza to me.

Omri Ben-Shahar:

Put it this way: there are factors that, if you ask the algorithms to take them into account, would get the company into trouble, right? In the same way that if the law were to use them for tailoring a legal command: a company that wants to make a hiring decision, or a promotion decision, or anything else, and uses a factor that is in some way suspect or prohibited, might be subject to legal action. So they can try to trick the law by saying: no, no, I’m not going to tell you that—just find me the people who are most likely to stay for a long time. Scratch that: find me the people who are most likely to make me the most money. Scratch that: just maximize my profits.

Now, the algorithm will be able to do that. It will figure out over time which decisions are correlated with more success. And one of the things algorithms will do is collude with other companies to fix prices—we already see that happening. They circumvent antitrust law because there is no collusion, no agreement, no cartel, no smoke-filled rooms: just four gas stations in a town, each one setting the prices that an algorithm, told only to maximize profits, decided on. Over time they figure out how not to engage in price wars and how to keep prices at a high, monopolistic level. They do that. They can do that in hiring. They can do that in other things. The thing is, unlike in antitrust law, where everybody is scratching their head about how to solve this problem—

Why is it, for example, that when you want to fly from Chicago to anywhere served by two or three or four airlines, it’s always the same price? There is no collusion. The algorithms are just comparing, looking at each other, kind of elbowing each other, and figuring out how to do that. So in antitrust law we are in trouble; we don’t know how to solve it. In discrimination law, we do have the ability to resolve it: by seeing how these practices have a disparate effect on different people. To the extent that the law is willing to make actionable a hiring policy that, even though it does not treat people differently on its face, affects people differently, you’re able to do that. And it’s much easier to prove this differential impact statistically when you have a statistical model of the decision maker. You don’t have to collect data and figure out what happened; you just look at what the algorithm does, and you know the answer.
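
The disparate-impact check is direct once the model’s decisions are in hand. A sketch on simulated applicants (the groups, the rates, and the use of the EEOC’s four-fifths rule of thumb are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
group = rng.choice(["A", "B"], size=2_000)          # simulated applicant groups
# Pretend these are the model's hire decisions, with group-skewed rates:
hired = rng.uniform(size=2_000) < np.where(group == "A", 0.30, 0.18)

rates = {g: hired[group == g].mean() for g in ("A", "B")}
impact_ratio = min(rates.values()) / max(rates.values())
print(rates, "ratio:", round(impact_ratio, 2), "flagged:", impact_ratio < 0.8)
```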

Question:

So, putting aside the political hurdles and the ideological hurdles that you mentioned, as well as distrust of the government and the private sector: one of the things you mentioned early on in your remarks was the example in Oklahoma, and you ran into the Equal Protection Clause pretty quickly. I wondered: to what extent have you looked at the constitutional aspects of this, and how do you get over that huge hurdle?

Omri Ben-Shahar:

Yeah. So, I’m not a constitutional lawyer. It was a difficult but fascinating part of this project to study Supreme Court equal protection doctrine and figure out what is permitted and what is not. When I teach insurance law, I show that under state law insurance companies can charge people different premiums based on the risk that they pose. And state supreme courts in many places—Ohio, Massachusetts—allow them to take gender, sex, into account as a factor. Of course, it’s an important factor: it doesn’t explain, but it is correlated with, the risk. Yet the Supreme Court, in federal cases, says no.

So I realized that there is something to unravel here, to deconstruct and understand: when is it allowed and when is it not? I think the most interesting guidance was received in a case that is probably going to get overturned in the coming days, if not in today’s news: affirmative action in higher education. If you read Justice O’Connor’s opinion—the five-to-four decision in the Michigan higher education cases—she says, talking about race, the most suspect of all suspect classifications: can you take race into account? And what the Court says is that at the heart of the Constitution’s guarantee of equal protection—I’m reading—lies the simple command that the government must treat citizens, not “equally,” but as individuals. You can’t put all persons of one color in one bin and treat them not as individuals. They are not the same; they’re not all just white or just black. Treat them as individuals. And the Court says the formula must be flexible enough to consider all pertinent elements and place them on the same footing for consideration, although not necessarily according them the same weight. Look at every person, look at the aspects that are relevant for the decision—do this holistic assessment of the candidate, not just their sex or race but everything, including sex or race—and then make the determination based on that. This is exactly personalized law.

So to the extent that the Court believes it is permitted to use factors that define you as an individual and to treat people differently based on factors that are relevant, then I think I’m on solid footing. And I’ll just mention that it may not even be necessary—for a person like me, who would like to see affirmative action practiced to some extent in higher education—to use race, or membership in an underprivileged group. You can find the factors that distinguish a particular person and render them deserving of somewhat elevated treatment.

Question:

Thank you very much—really interesting observations and thoughts. It seems to me there’s a kind of implicit assumption here about the accuracy of this procedure. And I say procedure because there are two aspects: one is the algorithm and the other is the training, and both of these are not transparent. You can read the algorithm afterwards, but you suggested some examples where there was very careful gradation of the results. Are you thinking that’s going to be a necessary stage of experimentation before anything can be implemented? How else do you deal with this kind of issue?

Omri Ben-Shahar:

Yeah, so it’s a good question. Algorithms are primarily used for things where their accuracy can be tested. For example, with the insurance companies, or the trademark example that I gave you: the algorithm was trained to predict what the TTAB—the Trademark Trial and Appeal Board—would decide. Then it was tested: it took a hundred cases where it didn’t know what the board had decided and made its predictions, and it was 90% correct. That’s great. And by the way, for the 10% that the algorithm got incorrect, when you look at the actual cases you begin to scratch your head and say: you know something, the algorithm got it right—the board made a decision inconsistent with its prior jurisprudence. Set that aside.

The point is, you can test them. And when you get good testing data, good testing ratios, it’s comforting. With the kind of stuff that I talked about today—the rules that will be personalized—it’s a little harder to test whether it works better or not. We want to ask whether we see the consequences we want the law to create: fewer accidents, more informed consumers, less DUI, things like that. Do we see a reduction? The reason I feel comfortable enough to recommend trying this is to look around—I already gave the example of insurers—and see how other private enterprises have managed to improve performance and outcome metrics by using algorithms, and then to see where it gets them.

If insurance companies like Progressive, or automakers like Tesla, put these recording devices in cars, and the data then shows that, as a result, driving improves—there are fewer accidents, less hard cornering, less sharp braking, less rapid acceleration, all the things associated with risk—then I think we’ve gotten a kind of real-world experimentation and insight into the effect of the particular algorithm. And I long for the process to have a self-improving mechanism built into it, rather than a corrupting one. A process like this would reduce the effect of special interests on law. When banks go to the FTC and lobby for particular rules—well, the rules now will be written in part as code. You can’t just wrap an industry-friendly rule in some kind of language; you will have to implement it. And that, I think, will have some kind of restraining effect.

Thomas J. Miles:

Thank you. Thank you very much, Professor Ben-Shahar. I think you’ve given us a lot to think about, and as I drive home today, I’m going to be wondering whether I should install one of those devices in my car, and whether this future you see coming for us is going to be a positive one or a negative one. But I think all of us can appreciate that my algorithm as dean in picking our speaker today was a very easy one: I just had to invite Professor Ben-Shahar, and I knew I got the right outcome. So please join me in thanking him again. And thank you all for coming.