Shining a Light on Data

From Dark Patterns to Intergovernmental Data Markets, Law School Scholars Are Changing How We Think About Technology, Privacy, and Society

Binary and programming code overlaid with outlines of a brain

Professor Lior Strahilevitz was tired of dodging dark patterns, the manipulative online sales tactics aimed at bamboozling consumers into saying yes to purchases, mailing lists, and services they would later regret. He wondered how many people were being duped.

The Internet had evolved into a minefield of hoodwink and hustle: The options to say “yes” or “maybe later” but never “no.” The confusing language: does a “cancel” button beneath “Are you sure you want to cancel?” allow you to proceed with your intended cancellation, or does it cancel the cancellation? The ploy known as confirmshaming (“No thanks, I hate saving money!”), not to mention the hidden costs, preselected responses, sneaky subscription sign-ups, claims of scarcity, and goading urgency—all of it designed to exploit our psychological vulnerabilities with exacting precision.

Dark patterns, Strahilevitz thought, were annoying at best and deeply disturbing at worst: marketers now have the ability to collect massive amounts of data on which methods work best on which consumers by repeatedly deploying “A” and “B” versions of their tactics—a high-speed fine-tuning process that, so far, outpaces what the law is equipped to regulate.

“It’s a game of Whac-A-Mole—the people developing the dark patterns are always ahead of the people trying to stop them,” said Strahilevitz, who has studied data and privacy since he joined the Law School faculty 20 years ago. “It’s incredibly cheap and easy for programmers to change the design of a website or an app in order to do this A/B testing. And there’s a danger [because] it takes advantage of cognitive biases and exploits human weakness.”
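The mechanics he describes are worth making concrete, because they require so little machinery. Below is a minimal sketch in Python of the kind of A/B comparison a site operator might run on a confirmshaming prompt; the variant wording, visitor counts, and sign-up tallies are all invented for illustration, not drawn from any real company’s code or data.

import hashlib

# Two hypothetical decline-button wordings: a neutral one ("A") and a
# confirmshaming one ("B"). Each visitor is assigned one of the two.
VARIANTS = {
    "A": "No thanks.",
    "B": "No thanks, I hate saving money!",
}

def assign_variant(visitor_id: str) -> str:
    # Hash-based split, so a returning visitor always sees the same wording.
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Invented tallies: (visitors shown the prompt, visitors who signed up).
results = {"A": (5000, 150), "B": (5000, 390)}

for variant, (shown, signed_up) in results.items():
    print(f"Variant {variant} ({VARIANTS[variant]}): {signed_up / shown:.1%} sign-up rate")

With these made-up numbers, variant B converts 2.6 times as often as variant A, so the operator ships B to everyone and moves on to testing the next tweak. Repeated continuously at platform scale, that loop is the Whac-A-Mole machine Strahilevitz describes.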

Which is why, in early 2019, Strahilevitz and his then-student, Jamie Luguri, ’19, who earned her PhD in psychology from Yale in 2015, employed some dark patterns of their own, running two experiments in which they essentially tricked hundreds of survey respondents into signing up for fake identity theft protection. (The respondents, who believed their own money was at stake but never made an actual payment, were debriefed at the end.)

The resulting paper, published in 2021, was the first in the world to prove the shocking effectiveness of dark patterns, marking an important step in efforts to mitigate the harms and serving as a striking testament to the power of interdisciplinary collaboration and empirical research. Within months, Strahilevitz, the Sidley Austin Professor of Law, was a go-to source for state, federal, and international governments seeking to understand and regulate the fast-evolving practice.

“The paper ended up being more impactful, both in the United States and around the world, than anything else I’ve ever written,” said Strahilevitz, who has advised the Federal Trade Commission, California privacy regulators, and the United Kingdom Competition and Markets Authority, among others. “Sometimes the best scholarship comes from feeling like there’s a problem in the world.”

Jamie Luguri and Lior Strahilevitz pose in front of the Pevsner statue in the Law School Reflecting Pool
Jamie Luguri, ’19, and Professor Lior Strahilevitz

Personal data—and the ability to aggregate it, trade it, and code algorithms with it—is power, and how that power is used and regulated has become a defining issue of our time. Law School scholars, long known for their willingness to take on complex challenges, are at the forefront of efforts to understand the implications and address the tangle of questions facing regulators and policymakers. Through books, papers, and conferences, they are identifying emerging issues, laying the groundwork for future debates, and helping shape new laws and regulations—often drawing on multiple disciplines, innovative research, and the signature rigor of UChicago thinking.

Professor Omri Ben-Shahar, for instance, has convened leading experts at international symposia and examined data and privacy in his own papers and books. He reimagined data regulation as a project that focuses on societal rather than personal impacts, proposing a “data pollution” paradigm that would mirror environmental regulations—a contribution aimed at spurring new, and he hopes more productive, ways of thinking about data markets. In a 2021 book, Personalized Law: Different Rules for Different People (Oxford University Press, coauthored with Ariel Porat), he explored a potential upside of personal data collection, writing about the ways in which prediction algorithms might be used to apply the law more equitably.

Assistant Professor Bridget Fahey offered new insights about how the federal government, states, and cities collect, share, and jointly manage the data they gather about their constituents. Earlier this year, she uncovered the existence of a rapidly growing and largely unregulated intergovernmental data market—a deep dive that not only explored concerns about accountability, transparency, and individual privacy but also raised questions about the market’s potential to affect our federalist structure. Although many Americans might assume that the data a government collects for a given purpose will be used only by that entity and for that purpose, Fahey’s article, “Data Federalism” (Harvard Law Review), showed that our governments trade, sell, and barter the data they collect about their constituents.

And the Luguri and Strahilevitz paper “Shining a Light on Dark Patterns” (Journal of Legal Analysis) contributed key information just as efforts to address the sneaky tactics were ramping up worldwide. Their findings were revelatory: users exposed to aggressive dark patterns were nearly four times as likely to subscribe to the (fake) service as those in the control group, a startling success rate. Those exposed to mild dark patterns were more than twice as likely to sign up. But the mild-tactic group was also less likely to react with anger and backlash—an insidious combination that makes less overt techniques a sweet spot for those looking to manipulate consumers. (This was among the insights Strahilevitz shared when advising California regulators as they prepared to implement a sweeping privacy law that takes effect in the state next year.)

In each case, the research enabled scholars—and, in some cases, their student collaborators—to help society adapt to a changing world in ways that are not always possible in other fields.

“I left psychology and came to law school because I wanted to do work that would get translated to the real world,” Luguri said. “This project really showed the way legal scholarship [can accomplish that]. We ran a study and suddenly people were paying attention to the issue, including people who have the ability to figure out how to regulate it.”

Groundbreaking Research Fueled by UChicago Values

Law School scholars are particularly well suited to tackle the complex issues related to the data economy, topics that touch many areas of law and raise compelling questions about American society, said Dean Thomas J. Miles, the Clifton R. Musser Professor of Law and Economics.

“Our colleagues’ pathbreaking work on the data economy is an inspiring example of our faculty’s ability to define and shape entire fields,” Miles said. “Our faculty is accomplishing this in a distinctively Chicago way: focusing on important new social and legal developments, discovering and disseminating new evidence, drawing on ideas and methods from other disciplines, and offering a wide range of perspectives. It is no surprise that our faculty’s ideas on the data economy are having an immediate impact across the academy as well as on policy and the law.”

“It was a moment of joy as a scholar and horror as a citizen—consumers have been taken to the cleaners for so long, and we’d discovered something previously known only within the firms that created these dark patterns.”

Lior Strahilevitz
 

Strahilevitz, Ben-Shahar, and Fahey, of course, are not alone; other Law School scholars have trained their expertise on issues related to the data economy in recent years. In his 2018 book, Radical Markets: Uprooting Capitalism and Democracy for a Just Society (Princeton University Press, coauthored with E. Glen Weyl), Professor Eric Posner, the Kirkland & Ellis Distinguished Service Professor of Law and an expert on market systems, pitched the creation of data labor unions that would enable people to be paid for sharing the personal data that collectively powers the digital economy.

Professor Randal C. Picker, the James Parker Hall Distinguished Service Professor of Law and an expert on the regulation of platforms and networks, has written and spoken on competition and digital markets. For instance, at a 2019 conference in Paris, he discussed how traditional ideas about competition and pricing might not fit online markets in which users essentially “pay” for a service by giving up personal data. (“If you’re paying in data, how do you know how much you’re paying?” Picker told the audience. “I just handed €2 to someone for this Diet Coke, and I knew exactly how much I was paying. But how much did I ‘pay’ this morning to Twitter in data when I was posting a bunch of tweets about this conference? I just don’t know.”)

And in a 2021 paper, “The Public Trust in Data” (Georgetown Law Journal), Professor Aziz Huq, a scholar of US and comparative constitutional law who has written extensively on issues related to equality and democratic backsliding, proposed the creation of public trusts that would address harms—such as privacy losses, economic exploitation, and structural inequalities—caused by the collection of personal and locational data. The trusts, to be managed by the state, would recognize the collective interest in how personal data is used.

“We ran a study and suddenly people were paying attention to the issue, including people who have the ability to figure out how to regulate it.”

Jamie Luguri, ’19
 

These works reflect quintessential UChicago values: pushing against boundaries, questioning accepted thinking, and stretching across disciplines. Many also incorporate labor-intensive research.

When Fahey, for instance, interrogated how our governments legally structure their varied and sprawling data transfers, she found that those exchanges were not regulated by statutes or administrative regulations but instead by an unusual kind of contract—what she termed “intergovernmental agreements.” Those agreements specify what information each government will surrender to the other, the terms of use, and any restrictions on further transfer. They are also often the only legal documents with the capacity to protect the privacy interests of the data’s subjects.

To better understand the legal frameworks those documents created, Fahey devoted months to collecting intergovernmental data agreements—digging through the minutes of city council meetings, scouring the depths of agency websites, and filing Freedom of Information Act requests.

“Intergovernmental agreements are often informally negotiated between the federal government and the states,” said Fahey, an expert in federalism. “They are not a robust form of public lawmaking that can transparently and democratically account for the profound interests at stake when sensitive data changes governmental hands.”

Bridget Fahey sits outside the Law School speaking with a person out of focus
Professor Bridget Fahey

In addition to publishing their own work, Law School scholars have played an important role in larger conversations about technology, privacy, and the data economy. Strahilevitz, for instance, chaired the Subcommittee on Privacy and Data Protection for the George J. Stigler Center’s Committee on Digital Platforms at Chicago Booth in 2019, contributing to both a conference and the resulting white paper, whose coauthors included Luguri and Filippo Lancieri, JSD ’21. Ben-Shahar, the Leo and Eileen Herzel Professor of Law and the Kearney Director of the Coase-Sandor Institute for Law and Economics, spearheaded several symposia in recent years, among them “Legal Challenges of the Data Economy” in Paris in 2019, “Big Data and the Law” in Paris in 2017, and “Contracting over Privacy,” co-organized with Strahilevitz, at the Law School in 2015. The data economy, Ben-Shahar told the Paris audience in 2019, had introduced “new possibilities that are sometimes hard to resist”—and those possibilities demanded study and discussion.

“Dialogues like this,” he told the assembled experts, “are intended to help us figure out whether we want to forge ahead or maybe pause and hold on to some of the more traditional ways in which we regulated our society.”

For many Law School scholars, these issues are of paramount concern because they involve foundational values: equality, power, and how we function as a nation. As a result, many of the questions are ones without easy answers, Ben-Shahar said. And despite recent growth, he added, we are only at the beginning—and our response may well require shifts in thinking that can be informed by legal scholarship.

“The use of Big Data and artificial intelligence is only going to grow, and rapidly,” he said. “Many of the regulatory solutions may be outdated by the time they are enacted. This is why part of my research focuses on how the law itself, and law enforcement agencies in particular, could also deploy the tools of data science to improve their performance.”

Imagining the Future

Ben-Shahar is the first to say that much of his recent data economy work has a “science fiction” feel to it.

He can envision a world in which personalized speed limits are delivered directly to each driver and consumer protections target those who need them most, all thanks to predictive algorithms fueled by large databases of personal information. The privacy trade-offs, he argues in Personalized Law: Different Rules for Different People, are worthwhile if they advance equality and save lives.

“Think how this is used by private markets to save lives,” he said. “Auto insurers, for example, offer drivers the option to install tracking devices that measure how people drive and charge them according to their safety score. Studies show that this technology reduces fatal accidents by over 30 percent—12,000 lives can be saved every year by nothing more than a data program. Why should the state of California bar such excellent innovation?” (California, he noted, prohibits the use of these data-collection devices in auto insurance.)
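The arithmetic behind that figure is straightforward. Assuming a baseline of roughly 40,000 US traffic deaths per year, the approximate annual toll in recent years, a 30 percent reduction works out to the number he cites:

0.30 \times 40{,}000 \approx 12{,}000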

Ben-Shahar can also envision a world in which we primarily regard the collection of data not as a threat to individual privacy, but as a pollutant similar to the toxic emissions of a car or factory—one that harms the ecosystem more than any single person whose data is shared.

“A central problem in the digital economy has been largely ignored: how the information given by people affects others, and how it undermines and degrades public goods and interests,” he wrote in his 2019 paper “Data Pollution” (Journal of Legal Analysis), citing prominent examples such as massive leaks of consumers’ personal financial data and consulting firm Cambridge Analytica’s use of Facebook data to target and potentially sway voters during the 2016 election.

But this collective interest in data markets is tricky—and it will likely get trickier as technology continues to evolve. First, the interest isn’t all negative. There are potential benefits mixed in with the challenges: the ability to track the spread of disease during a global pandemic, solve crimes, or—perhaps someday—custom-tailor the law. Not everyone will agree on how to balance those interests.

Second, threats to democracy, equality, and society feel distant and nebulous. Threats to our personal privacy, on the other hand, feel . . . personal.

“That’s what we focus on and feel afraid of—the idea that information about our personal habits, attributes, histories, preferences, and tastes is being taken from us,” Ben-Shahar said. “It’s creepy.”

But there is evidence that despite our handwringing—Facebook knows which products we like! Our Fitbits know when our heart rates rise! Our iPhones know where we’ve been!—we don’t actually care as much as we say we do. Most of us, after all, routinely and knowingly trade our individual privacy for conveniences, entertainment, and other relatively small perks. Researchers call this discrepancy between stated intention and actual behavior the “privacy paradox.” Among the papers that hinted at it was one published in 2016 by Strahilevitz and his former student Matthew Kugler, ’15. They found that even when consumers rated a practice as highly intrusive, they believed themselves to have authorized the intrusion—even if the privacy policy they signed was vague. Moreover, according to that paper, “Is Privacy Policy Language Irrelevant to Consumers?” (Journal of Legal Studies), few consumers were willing to pay for an alternative service that would allow them to avoid the intrusion.

“There’s this prevailing sense in society that there is a problem with data privacy, and yet people behave as if there isn’t,” Ben-Shahar said. “So maybe the problem is not about the private sphere. Maybe it’s not that our own lives will be destroyed or diminished or devalued [when we allow access to our personal data]. Maybe it’s a problem with the entire environment, a problem that’s about the public sphere.”

In fact, he and others have argued, the societal costs of personal data collection often exceed the sum of the individual costs.

Omri Ben-Shahar sits next to a printed disclosure, pages taped end to end, stretching from the ceiling on the third floor to a table on the second floor.
Professor Omri Ben-Shahar has been writing about data and privacy for years. In his 2014 book, More Than You Wanted to Know: The Failure of Mandated Disclosure, he argued that disclosures, including the consent forms people sign before giving up their personal information, are useless because nobody reads them. One reason, as shown here with a printed version of a disclosure: they are far too long. He argues that regulations should focus on societal impacts rather than individual privacy.

Ben-Shahar points to the Cambridge Analytica scandal as illustration. An attempt to sway an election by targeting voters through data-driven advertising creates collective effects; in 2016, he said, those effects were felt by the “entire electoral and political environment.” Many of the individual voters, however, regarded themselves as personally unharmed.

“Our social ecology gets polluted by how our data are being used,” Ben-Shahar said. “[Many of us] feel this general discomfort—we don’t feel injured, and we can’t exactly put our finger on what the worst thing is, but there seems to be a collection of bad things that are brewing.”

The discomfort, he hypothesized, isn’t about our personal space; it’s the nagging sense that our societal structure is at risk.

The ecosystem-level nature of the harm is further evident, he said, in the failure of private law to meaningfully regulate the data economy. Tort law has fallen short because, often, individual harms are neither immediate nor visible, the external harms are too widespread to control through typical remedies, and costs are difficult to measure. Contract law has fallen short because agreements between individuals do not capture collective costs—and because people do not always make rational choices, particularly when faced with manipulative choice architecture. Mandated privacy disclosures rarely work because people do not read them.

“We’ve been asking the wrong questions, and we’ve been using the wrong legal tools,” Ben-Shahar said.

“We have to rethink the harms the data economy creates and the way they have to be regulated,” he has written. “Social intervention should focus on the external harms to society at large from collection and misuse of personal data, rather than restrict its focus to privacy and data security. Perhaps it is time for an ‘environmental law for data protection.’”

He advocates thinking about data sharing in terms of “emissions”—externalities that can be regulated much as we regulate environmental pollution. That could include strict limits on certain types of data activity, Pigouvian data taxes, and a compensation structure to address “data spills.”
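For readers unfamiliar with the term, a Pigouvian tax has a standard textbook form; the notation below is the generic economics version, not Ben-Shahar’s own formalization. The per-unit tax is set equal to the marginal external harm at the efficient level of activity:

t^{*} = \mathrm{MEC}(q^{*}), \qquad \mathrm{MPC}(q^{*}) + t^{*} = \mathrm{MSC}(q^{*}) = \mathrm{MPC}(q^{*}) + \mathrm{MEC}(q^{*})

Here q* is the efficient quantity of the data activity, MPC is the collector’s marginal private cost, MEC is the marginal external (“pollution”) cost borne by society, and MSC is the marginal social cost. A data firm facing such a tax internalizes the ecosystem harm of each additional record it collects, which is precisely the mechanism emissions taxes rely on.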

His model’s focus on societal impacts also allows for the consideration of “data greens”—the potentially beneficial side effects, such as data-driven personalization.

“The use of Big Data and artificial intelligence is only going to grow, and rapidly.”

Omri Ben-Shahar
 

But personalized law, for all its potential, is the part society may not be ready to ponder—yet. In addition to requiring technology that isn’t fully available and a consensus around the structure, content, and goals of the predictive algorithms, personalized law would require a certain amount of comfort with privacy trade-offs. The most effective, accurate, and fair tailoring would likely require a lot of algorithmic input; otherwise, each individual piece of information would carry too much weight, allowing for inaccuracies and manipulation.

“Personalized law—that’s asking a lot of my audience,” he said with a chuckle. “It’s a brave new world and there would be a lot of problems to consider.”

His intention, however, isn’t to provoke but to prepare: ideas that feel futuristic now might not always feel that way. His goal is to create a longer runway by introducing the ideas now, before we need to make decisions.

After all, the news has been filled in recent years with revelations that remind us—sometimes quite jarringly—how powerful data markets have become and how deeply we need to examine the many implications.

Understanding a New Source of Government Power

In 2019, a story appeared on the front page of The Washington Post: agents from the Federal Bureau of Investigation were using facial recognition technology to scan the civil driver’s license photos of hundreds of millions of Americans without their knowledge or consent. Neither Congress nor state legislatures had authorized the data-sharing effort, which granted the federal government access to state DMV records.

A furor erupted, with Democratic and Republican lawmakers, civil rights advocates, and others expressing shock at the size and secrecy of the database, the use of civil data for criminal investigations, and the experimental nature of the FBI’s facial recognition technology.

Fahey, who joined the Law School faculty in 2020, recognized the story as part of a larger picture that was unfolding—one in which data are quietly traded, concentrated, and amplified across levels of government with potentially enormous implications.

“If the public understood the infrastructure of [intergovernmental] data sharing more crisply, it would find much more to be concerned about.”

Bridget Fahey
 

Individual data, after all, become far more powerful in aggregate, and often move in ways people might not expect. For instance, data collected by a city social service entity may make its way into a federal immigration database. Data collected by state election officials may be shared with dozens of sister states. The federal government, states, and cities even create joint “data pools”—databases to which they all contribute and which they all jointly manage.

Although the risks of data aggregation have been scrutinized in the private sector, where corporations routinely sell the data they collect about their customers without those customers’ consent, the sale and trade of data among governmental entities had largely escaped notice until Fahey’s article.

From a constitutional federalism perspective, this cross-governmental data exchange is conceptually challenging, she said, because the basic goal of federalism is to divide power among many different governments. But where data power is concerned, Fahey worries that federalism is facilitating the concentration of governmental power, not its division.

“Federalism multiplies the number of governmental entities that can access and collect individual data. Intergovernmental data markets, in turn, allow them to share and pool their respective data stores into ever larger compilations,” Fahey explained.

“Instead of Chicago police officers having access just to data collected by the Chicago Police Department, for example, they have access through an intergovernmental data pool to data collected by police officers in jurisdictions throughout the country,” she said. “Data exchange multiplies governmental power.”

The data often move easily and quietly—until there’s a dispute between governments, as happened when the federal government tried to force “sanctuary cities” like Chicago to share data with Immigration and Customs Enforcement, or until the public discovers an exchange that strikes people as particularly invasive.

Fahey’s work suggests that the rare examples that gain public notoriety are only the tip of the iceberg. “If the public understood the infrastructure of this data sharing more crisply, it would find much more to be concerned about,” she said.

Her work, she hopes, will offer a framework for understanding the issue as it continues to evolve. Some of the most gratifying feedback has come from researchers who had also struggled to track down data being shared among governments; for them, her efforts to gather examples of intergovernmental data agreements have proven both illuminating and validating.

Her future projects in this area involve deeper investigations of both crime and immigration data sharing, as well as an exploration of how the technology behind governmental data sharing works, a project that could include collaboration with experts in computer science.

This sort of interdisciplinary exploration is part of what makes the Law School’s contributions so rich. Strahilevitz, too, has collaborated recently with several computer science professors at Chicago as well as an economist researching consumers’ behavioral responses to data breaches.

And, of course, he has collaborated with social psychologists. Dark patterns, after all, find their power partly in technology—and partly in the recesses of the human mind.

The Intersection of Law, Experimental Psychology, and Computer Science

Strahilevitz still remembers when Luguri shared with him the results of their first dark patterns experiment.

It was a moment many months in the making, and one that began almost as an afterthought. At the time, Strahilevitz had other privacy projects underway. He and Luguri were working together on a paper, “Consumertarian Default Rules” (Law and Contemporary Problems, 2019), which included an original study of consumer privacy expectations. In addition, Strahilevitz was chairing the Subcommittee on Privacy and Data Protection for the Stigler Center’s Committee on Digital Platforms, and he and others were interested in including something about dark patterns. The question was, where and how would they gather new information?

Dark patterns were of growing interest to academics, but the body of research was still young. Most of the existing literature was written by computer scientists who had developed algorithmic tools for detecting them.

“From a psychology perspective, there hadn’t been much work yet on the effectiveness of dark patterns, and there wasn’t a lot of legal literature that explored what existing laws could do and what additional laws might be needed,” Strahilevitz said. “People inside these companies [that create dark patterns] knew a lot about them, but of course they weren’t telling the world what they knew.”

As Strahilevitz brainstormed with subcommittee colleagues, it occurred to him: the privacy preferences questionnaire he and Luguri were developing for “Consumertarian Default Rules” provided a perfect cover story for testing the effectiveness of dark patterns. They could build their experiment into that survey.

Strahilevitz was no stranger to either social psychology or student collaboration. Several years earlier, he had invited Matthew Kugler, ’15, then a JD student with a PhD in social psychology from Princeton, to team up. (Among their joint works was the 2016 paper about vague privacy policy language, the one that revealed consumers’ tendency to believe they were agreeing to intrusion.) When Strahilevitz met Luguri—who during her time at the Law School helped develop the Law School’s Psychology and Law Studies Lab—he found another terrific collaborator.

She was curious and brilliant, and her psychology training was, as Strahilevitz put it, “world class.”

Omri Ben-Shahar and Lior Strahilevitz pose together in a Law School classroom
Professors Ben-Shahar and Strahilevitz before their 2015 conference, “Contracting over Privacy.”

Drawing on the expertise she had developed as a doctoral student at Yale, she and Strahilevitz designed an experiment that would test people’s actual behavior when unknowingly faced with dark pattern marketing. It worked like this: At the end of the privacy preferences survey, the participants were told that their responses identified them as people who really cared about privacy. That’s when the survey became—or seemed to become—a sales pitch for an identity-theft protection service. Offers were delivered in one of three ways: through aggressive dark pattern marketing, through mild dark pattern marketing, and, for the control group, through a neutral offer that lacked manipulative tactics.

Luguri conducted the empirical analysis and then shared the results with Strahilevitz: aggressive dark patterns worked almost four times as well as the neutral offers, and the mild dark patterns worked more than twice as well but without significantly alienating consumers the way the aggressive tactics did.
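The headline comparison is easy to reconstruct. The short Python snippet below uses invented cell counts chosen only to mirror the reported ratios; the paper’s actual sample sizes and acceptance rates differ.

# Hypothetical counts: 500 participants per condition, with acceptance
# tallies picked to echo the published ratios (roughly 4x and 2x control).
conditions = {
    "control (neutral offer)":  {"shown": 500, "accepted": 55},
    "mild dark patterns":       {"shown": 500, "accepted": 128},
    "aggressive dark patterns": {"shown": 500, "accepted": 210},
}

base = conditions["control (neutral offer)"]
base_rate = base["accepted"] / base["shown"]

for name, c in conditions.items():
    rate = c["accepted"] / c["shown"]
    print(f"{name}: {rate:.1%} acceptance, {rate / base_rate:.1f}x the control rate")

Even in raw rates like these, the pattern that struck the authors is visible: the aggressive condition converts nearly four times as well as the control, and the mild condition more than twice as well.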

“You spend four or five months planning an experiment and when you launch it, you never know what you’re going to get. When the results of this first experiment came through, we just had this moment of, ‘Wow, this is going to make a big impact once we tell everyone,’” Strahilevitz said. “It was a moment of joy as a scholar and horror as a citizen—consumers have been taken to the cleaners for so long, and we’d discovered something previously known only within the firms that created these dark patterns.”

Interested in supplementing these findings with additional insight, Strahilevitz and Luguri conducted a second experiment before completing their paper. That one yielded data about which dark patterns worked best—and it replicated a surprising finding from the first experiment: when dark patterns manipulate a consumer into making a purchase, the cost of the service doesn’t matter. Even a substantial increase in price—from $8.99 to $38.99 per month, for instance—fails to dampen the effects of the dark pattern. (This surprised Strahilevitz, who has written a number of law and economics papers, more than it surprised Luguri, with her psychology training.) The experiments also produced other important information, including that less-educated consumers were the most vulnerable to dark pattern tactics.

The findings quickly drew attention not only from the media and other academics but from lawmakers and regulators working to address the use of dark patterns. That spring, Strahilevitz discussed the work with the Federal Trade Commission, which had been working to ramp up enforcement. The following fall, the FTC released a policy statement warning of legal action against companies with sign-up processes that failed to provide clear information, obtain consumers’ informed consent, or make cancellation easy.

Strahilevitz also spoke with California privacy regulators working to develop the state’s first regulations on dark patterns, a legal development that is expected to have national implications. In addition to informal conversations, in March 2022, he testified about the dangers of dark patterns before the board of the California Privacy Protection Agency as they prepared for the January 2023 implementation of the California Privacy Rights Act (CPRA). The CPRA, a ballot initiative that was approved by California voters in 2020, treats consent secured via dark patterns as legally ineffective. The accompanying regulations, due out this year, are expected to curb web designs that might interfere with a user’s ability to make clear decisions about their data.

“The stuff that’s happening in California is really important, and not just because California would be the fifth biggest economy in the world if it were its own country, but because a lot of technology companies are making the decision to give consumers in all 50 states the same kind of Internet that California’s going to get,” Strahilevitz said. “Most companies don’t want to have multiple versions of their platform or their website or their app. So all eyes should be on California right now.”

Dark patterns have been a hot topic around the world in recent months: in July the European Parliament approved the Digital Services Act, which would prohibit dark patterns. Following ratification by the Council of the European Union, the new provisions will go into effect in 2024. In addition to his work in California, Strahilevitz has been called upon to share his research numerous times. In May 2022, he served on a panel discussing dark patterns before the United Kingdom Competition and Markets Authority, and next year he will present his research before the UK’s Financial Conduct Authority. In addition, earlier this year, Strahilevitz and a reading group of UChicago law, public policy, and graduate computer science students provided comments to the European Data Protection Board about proposed dark pattern regulations. He also helped draft portions of the bipartisan American Data Privacy and Protection Act (ADPPA), which the House Energy and Commerce Committee approved in July by a margin of 53–2.

The success of the dark patterns work, Strahilevitz said, underscores the powerful impact of a Law School culture that prizes interdisciplinary inquiry, innovation, and an openness to student collaboration. Strahilevitz continues to talk with Luguri, who now works as a litigation associate in the Los Angeles office of Munger, Tolles & Olson, and with Kugler, who is now a tenured professor at Northwestern Pritzker School of Law. Strahilevitz also continues to engage current students interested in work related to law and technology. This academic year, he will teach an Advanced Topics in Privacy and Data Security seminar with Aloni Cohen, a UChicago assistant professor of computer science and data science, that brings law students together with students working toward PhDs in computer science.

The opportunities to collaborate and discover have been invigorating, but perhaps most gratifying has been the opportunity to help address a pressing issue.

“It’s been really neat to have done a piece of academic research that’s not only well regarded by professors but also speaks to the world that we live in,” Strahilevitz said. “It’s helping policymakers and regulators try to build a better Internet.”
