Evidently, we are all empiricists now. Except for me. But even I have a cool randomized field experiment in progress with David Abrams, so I'll become an empiricist in no time, at least by some people's definition. Phase one: Collect data. Phase two: ???? Phase three: Profit.
Anyway, the Brian Leiter thread on empiricists, the general frustration at identifying the right criteria for classifying empiricists, and the subsequent comments ("My earlier post cataloguing School X's eight empirical legal scholars neglected to mention my dear friend and colleague, the multi-talented empiricist Slobotnik. Signed, mortified School X booster.") provide an opportunity to ask what sorts of empiricists should be hired in the legal academy. I recognize that the answer some people will provide is "none." I'm not addressing that crowd, though I am raising some issues that might be helpful to people who are skeptical about empiricist hiring in general on law faculties.
Here, then, are a few thoughts about how to hire entry-level quantitative empiricists with PhDs in disciplines like Political Science or Economics, as well as a coda about what many empiricists should be doing as the "field" matures. Hiring qualitative empiricists or experimentalists is a different ball of wax entirely, so I'm not really writing about those sorts of hiring decisions. My views are informed by having been a member of a law school's faculty appointments committee for most of the last decade (with trips to seven of the last ten AALS hiring conferences, for the quantitatively minded). They do not reflect the views of my institution. And my views don't match up perfectly with the way I have voted internally. I'll omit obvious advice like (a) hire smart people, and (b) fill curricular needs:
1. Ignore the findings. The legal academy probably focuses too much attention on the results of the empirical research project, particularly when hiring entry-level scholars. This is an empirically testable claim, but my impression is that entry-level scholars with highly significant results do better on the market than candidates with marginally significant or null results. If this effect exists, it is largely pernicious. It rewards blind luck, it promotes the testing of questions that the empiricist already has strong intuitions about, it encourages entry-level scholars to write tons of papers (with less care) or run countless regressions until they find an interesting result, and it reinforces existing publication biases, which tend to publicize significant results and bury null results. Subject to the caveats below, we should not expect someone who achieved a highly significant result in paper A to be particularly likely to achieve a highly significant result in paper B . . . unless the scholar in question falsified data in paper A and wants to press her luck. But when you're doing entry-level hiring, you really ought to care about papers B, C, and D. Which is why you should (almost) ignore paper A's findings.
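For the quantitatively minded, the "run countless regressions" worry is just the multiple-comparisons problem, and a back-of-the-envelope calculation shows how cheaply a "finding" can be manufactured. This is my own illustration, not anything from the post: assume each regression on a truly null question has the conventional 5% chance of crossing p < .05 by luck alone, and assume the tests are independent.

```python
# Chance that at least one of k independent tests on a true null
# effect comes back "significant" at the alpha = .05 threshold.
# (Illustrative only: real regressions on the same dataset are
# correlated, so this is a rough upper-bound-style sketch.)
def prob_false_positive(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20, 50):
    print(f"{k:2d} regressions -> {prob_false_positive(k):.0%} chance of a 'finding'")
# 1 regression leaves a 5% chance; 20 raise it to about 64%,
# and 50 to about 92% -- with no true effect anywhere.
```

Which is exactly why a flashy result in paper A, standing alone, tells a hiring committee very little about papers B, C, and D.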
Read more at PrawfsBlawg