Q&A for Panel 1: Algorithmic Law, moderated by Laure Lavorel

Presented at the Legal Challenges of the Data Economy conference, March 22, 2019.

Transcript

LAURE LAVOREL: We can take a few questions from the audience, if you have any.

I will start with one for both of you. My experience as an international lawyer is that in almost every place in the world, judges have a power of interpretation, if I may say. With this personalized-law program, isn't there a risk that judges would see their power of interpretation reduced, because the principle would no longer be the principle? The data screening would have done the job, and the judge would have no room left for interpretation.

ARIEL PORAT: No more than two minutes to respond, also to some of the earlier questions. Very quickly, because I want to hear your questions, but some--

So, first of all, thank you very much for these excellent comments. They are, of course, very thoughtful, and we will certainly think more about them. But-- so, a very quick response.

The first thing is about equality. It depends on what you mean by equality-- take the negligence example. Suppose we have two potential injurers, right? Under current law, they have the same standard of care. Everything else being equal, they have the same standard of care. Now suppose that for one of them, who is very unskillful, it is a huge burden to meet the standard of care, while for the other it is very easy, because he is very, very skillful. That is the law. Is it equal or not? I doubt it.

With personalization it might be, so equality might actually be advanced here. That doesn't mean you might not be right in other contexts, but we should be very careful with this word.

The second point that you mentioned is about values. I completely agree. Sometimes we have rules-- certainly mandatory rules, but even default rules-- that have some value embedded in them. It's not just about saving transaction costs, to put it in economic terms. It's more than that, right?

So, for example, take the intestacy example from inheritance law. You could imagine that in some countries-- I'm not sure, but is it true that in France there is a kind of mandatory rule saying that at least a certain percentage of the estate should go to the children?

LAURE LAVOREL: Absolutely. Yes.

ARIEL PORAT: I'm proud of knowing that.

[LAUGHTER]

I was right. OK, good. Now suppose-- I think it's one third? Am I right, at least one third? Anyway, surely I shouldn't push my luck.

[LAUGHTER]

But suppose we think that the default rule is not just a matter of fact, that there are some values here. So imagine a jurisdiction with a default rule from which you can opt out and allocate the estate any way you like. But still, there might be a value in half going to the children. Again, people might opt out and change it, but there is still a value in the default saying so. That's fine with me.

As long as the will of the parties has some relevance, more than zero, then you can think of personalization. Suppose both values and the parties' preferences are at issue. You could think that the values call for 50% in general, but that the wills or preferences of the parties also matter. So take both into account: as long as the parties' wills or desires are relevant, personalization should matter even if values matter. If it's only values, and the parties' preferences are completely irrelevant, then personalization is irrelevant too.

The third point is maybe the last one, because I don't want to spend too long-- privacy, of course, is an issue; we can maybe discuss it later. The third point is about certainty and uncertainty. I think this is a very good point, so let me respond briefly.

Think again about negligence law. Suppose there is a standard rule, a rule saying that you should drive your car as the reasonable person does. Suppose I drive a car here in Paris. What exactly do I know about the reasonable person in Paris? I think it's much easier for me to do what is reasonable for me. It's much more informative for me to know what is reasonable for me, knowing my own skills, knowing the inherent risks that I create with my driving.

You know, there is a kind of-- how to say it-- pretense in thinking that if there is a uniform rule, everybody knows the uniform rule, and so on and so forth. But this is wrong, because, especially in negligence, you need so much information to know how the reasonable person would drive a car. You know much more about yourself. So this is just the beginning of a response to this question, but, of course, sometimes you might be right.

So I think your question is about the role of judges, right? Suppose that in a future world there were algorithms, and that in contractual cases courts would not have to interpret anymore because everything would be known. That could indeed be one of the consequences.

Whether it's bad or good-- I don't say that it's necessarily good; it could be bad or good-- that's a different question. I think that, in a way, it might be more accurate in certain areas, right? Where the judge tries to interpret what exactly the parties wanted to do, what their preferences were when they entered into the contract, what the subjective intention was, and so on, maybe it would just be more accurate to have more solid data derived from big data, or an algorithm to do it much more accurately.

LAURE LAVOREL: Thank you very much. Any questions from the audience? We are running a little late, but I think we can take two questions. Yeah, two, three.

AUDIENCE: I have a question about whether the whole idea of personalization of law is not unnecessarily provocative, because I think it's not as revolutionary as it looks. It sounds as if the law would then be different for different persons. But I think that is not what you mean. It's more about a personalized application of a general law, where you use information from the context and from each person in the application of the law.

But this is what we are already doing all the time. Take competition law, as I'm a competition policy person: we have the old debate in the US about per se rules versus the rule of reason. The rule of reason, which is 100 years old in US antitrust law, is really about taking into account the specific context in which firms are, for example, applying a certain business practice. And it depends on the effects in that context whether the business practice is allowed or not. This means that firms are treated differently: one firm might be allowed to do something while another firm is not allowed to do the same thing.

But you need information for this. That's exactly what the more economic approach in European competition policy was about. So in that respect, big data only gives us more information for doing exactly this. Is this the right interpretation, or is what you're doing more revolutionary?

LAURE LAVOREL: I think you're very right. Yeah. Absolutely.

ARIEL PORAT: Yeah. I agree with, well, maybe 99% of what you have said-- that's just to be cautious and not agree completely with you. I think this is a possible way to put it. But again, of course, if personalization is done in the way that we suggest, the law would look different. Think again of the example I started with. As far as I know-- well, I don't know all the jurisdictions, of course, but I would guess that there is no jurisdiction in the world in which intestacy law would really make a difference according to the identity, the characteristics, and so on of--

So you could say, in a broad sense, that once we have more information, the concept of the law is still the same; it is just implemented differently. I completely agree with that. Another way to put it: as you said, it's about information. Once, we didn't have so much information. Now we are starting to have a lot of information, so personalization might be feasible. As long as it's not a matter of principle not to personalize, but just a matter of feasibility, then why not personalize if we can? Again, I understand that there are normative issues, and we should take them into account.

LAURE LAVOREL: So two questions, and then we have to break. One is here. Yeah.

AUDIENCE: So thank you for this presentation. I understand it's in a very liberal context. Could you comment on your view of Chinese social scoring, which is personalization in a very different context?

ARIEL PORAT: Would you say a bit more? In what way is Chinese law personalized?

AUDIENCE: They use data about behavioral patterns and comments from other people to grant rights to individuals or take rights away from them.

ARIEL PORAT: Yeah. More generally, let me try it a bit differently, because I don't want to speak about a legal system that I don't really know, but I hope this will be a kind of response. I think when it comes to people's constitutional rights, we should be very, very careful.

It's not impossible to think about personalization of constitutional law. Again, I am not sure that this is something we want to do, but imagine a legal system that allocates constitutional rights according to people's preferences. Think about it as a kind of limited budget. Suppose that your country is willing to invest x amount of money, x dollars or euros or whatever, to protect your constitutional rights. One way to do it is uniformly, across the board-- everybody would have exactly the same level of constitutional rights.

But suppose some people care a great deal about free speech, others care a great deal about privacy, and maybe others care about, you name it, any kind of freedom. So maybe offer different bundles of constitutional rights to different people, and maybe everybody would be better off.

Of course, this is strange to you, and even to me, and I'm sure to [INAUDIBLE] too. So there are normative issues here that we understand. It doesn't mean that-- I think it's thought-provoking. Maybe in some spheres it makes some sense. Maybe in certain areas it makes more sense than in others, but I agree that there are limits to what we can do with personalization.

LAURE LAVOREL: Thank you. One last question. At the front, I think.

AUDIENCE: Yes, thank you very much. I'm just a bit concerned about the discussion, because as I understand it, we are not talking about personalization of the application of the law, but about personalization of the law itself.

We have 500 years of history in which we tried to objectivize the law in order to avoid subjective, arbitrary power. Now, if we are talking about data, we can have very granular data. We can segment the population and know people based on sex, age, social profile, and so on. If we are going that way, personalizing the law itself-- the rule-- then who is making the decision, what is the limit in terms of segmentation, and, basically, based on what?

ARIEL PORAT: OK, I'll do this briefly. Segmentation of the law is an issue. Even from a legal perspective, think about the role of precedent. A court decides a certain issue, and in those countries in which there is stare decisis, so that the decision applies also to other cases, with personalization maybe you cannot use precedent anymore. That's also part of the segmentation of the law. I'm not sure that this is a good enough reason not to do it, but it certainly might be-- this is the kind of additional cost that we might incur.

But about algorithm-drivenness and so on, again, it's a question of whether the algorithm is good enough. That's, I think, a point we also heard in a previous lecture. Algorithms might be terrible sometimes, and then I believe they should be improved. And in the end, I'm not sure whether that is more or less arbitrary than the decision of a court or a judge. Think about it. We are, I think, all lawyers here, or most of us, I believe. Am I right or not?

LAURE LAVOREL: Yeah. You're right.

ARIEL PORAT: OK, most of us are lawyers, so we are used to it. But think about what a very strange idea it is for any outsider: two people have a dispute between themselves about anything, and then-- some jurisdictions have jury systems, but think about those legal systems, like in Israel, and I think also here, in which sometimes one judge, one person after all, has to decide who is right and who is wrong. I'm not sure that this is necessarily better than having a very sophisticated algorithm.

So think about criminal law. Suppose you are accused of an offense, and suppose you know-- suppose there are data saying that even if you are innocent, there is a 10% chance that you would go to jail because the judge might be wrong. Suppose now that with an algorithm, the error rate is only 1%. 1% versus 10%.

Still, it's an algorithm. There is a transparency issue; we don't know exactly what happened. 1% here, or a judge who would argue and give a-- although I know that in France court decisions are very short, I know. But in other places they are longer. I'm not sure that I would prefer the judge under those circumstances. Do you? OK. So it's a dumb question.

LAURE LAVOREL: So I think we would need the coffee and the break to think about that. Thank you very much.
