Aziz Huq Writes on Appeal Mechanisms for Machine-Driven Decisions

When a machine decision does you wrong, here’s what we should do

States and firms with which we routinely interact are turning to machine-run, data-driven prediction tools to rank us, and then to assign or deny us goods. For example, in 2020, the 175,000 students taking the International Baccalaureate (IB) exams for college learned that their final tests had been called off due to the COVID-19 pandemic. Instead, the IB announced in July 2020 that it would estimate grades using coursework, ‘significant data analysis from previous exam sessions, individual school data and subject data’. When these synthetic grades were published, outrage erupted. Thousands signed a petition complaining that scores were lower than expected. Students and parents had no means of appealing the predictive elements of their grades, even though this data-driven prediction was uniquely controversial. In the United Kingdom, a similar switch to data-driven grading for A-level exams, which are used for university entrance, prompted cries of racial bias and threats of lawsuits.

As the IB and A-level controversies illustrate, a shift from human to machine decision-making can be fraught. It raises the troubling prospect of a future that turns on a mechanical process from which one’s voice is excluded. Worries about racial and gender disparities persisting even in sophisticated machine-learning tools compound these concerns: what if a machine isn’t merely indifferent but actively hostile because of class, complexion or gender identity?

Or what if an algorithmic tool is deployed as a malign instrument of state power? In 2011, the US state of Michigan entered into a multimillion-dollar contract to replace its computer system for handling unemployment claims. Under the new ‘MiDAS’ system, implemented in October 2013, the number of claims tagged as fraudulent suddenly spiralled. Because Michigan law imposes large financial penalties for unemployment fraud, the state agency’s revenues exploded from $3 million to $69 million. A subsequent investigation found that MiDAS was flagging fraud with a predictive algorithm: of the 40,195 claims it tagged as fraudulent between 2013 and 2015 (when MiDAS was decommissioned), roughly 85 per cent were false positives.

Read more at Psyche
