Infographics of Publications in the Quarterly Journal of Economics
Machine Learning as a Tool for Hypothesis Generation
Quarterly Journal of Economics, 2024
Ludwig, Jens; Mullainathan, Sendhil
While hypothesis testing is a highly formalized activity, hypothesis generation remains largely informal. We propose a systematic procedure to generate novel hypotheses about human behavior, which uses the capacity of machine learning algorithms to notice patterns people might not. We illustrate the procedure with a concrete application: judge decisions about whom to jail. We begin with a striking fact: the defendant's face alone matters greatly for the judge's jailing decision. In fact, an algorithm given only the pixels in the defendant's mug shot accounts for up to half of the predictable variation. We develop a procedure that allows human subjects to interact with this black-box algorithm to produce hypotheses about what in the face influences judge decisions. The procedure generates hypotheses that are both interpretable and novel: they are not explained by demographics (e.g., race) or existing psychology research, nor are they already known (even if tacitly) to people or experts. Though these results are specific, our procedure is general. It provides a way to produce novel, interpretable hypotheses from any high-dimensional data set (e.g., cell phones, satellites, online behavior, news headlines, corporate filings, and high-frequency time series). A central tenet of our article is that hypothesis generation is a valuable activity, and we hope this encourages future work in this largely "prescientific" stage of science.
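The core of the procedure is letting people probe an opaque predictor to see which facial attributes move its output. A minimal sketch of that idea, in pure Python: the predictor is a stand-in function (the real one is a deep network trained on mug-shot pixels), and the feature names and weights below are illustrative assumptions, not the paper's findings.

```python
# Hedged sketch (not the authors' code): generate candidate hypotheses by
# probing a black-box predictor of P(jail) and ranking hypothetical facial
# features by how strongly small perturbations move the prediction.

def black_box_predict(features):
    """Stand-in for an opaque image-based model of P(jail).

    The weights are invented for illustration; in the actual procedure
    they are hidden inside a network trained on mug-shot pixels.
    """
    w = {"well_groomed": -0.30, "heavy_faced": 0.20, "smiling": -0.10}
    score = 0.5 + sum(w.get(k, 0.0) * v for k, v in features.items())
    return min(max(score, 0.0), 1.0)

def rank_hypotheses(features, eps=0.05):
    """Perturb each feature and rank by absolute effect on the prediction."""
    base = black_box_predict(features)
    effects = {}
    for k in features:
        bumped = dict(features)
        bumped[k] += eps
        effects[k] = (black_box_predict(bumped) - base) / eps
    return sorted(effects.items(), key=lambda kv: -abs(kv[1]))

defendant = {"well_groomed": 0.4, "heavy_faced": 0.6, "smiling": 0.2}
for name, slope in rank_hypotheses(defendant):
    print(f"{name}: {slope:+.2f}")
```

The ranked, named effects are what human subjects would then inspect and articulate as interpretable hypotheses; the paper's actual procedure elicits these descriptions from people interacting with morphed images rather than from named features.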
AI-tocracy
Quarterly Journal of Economics, 2023
Beraja, Martin; Kao, Andrew; Yang, David Y.; Yuchtman, Noam
Recent scholarship has suggested that artificial intelligence (AI) technology and autocratic regimes may be mutually reinforcing. We test for a mutually reinforcing relationship in the context of facial-recognition AI in China. To do so, we gather comprehensive data on AI firms and government procurement contracts, as well as on social unrest across China since the early 2010s. We first show that autocrats benefit from AI: local unrest leads to greater government procurement of facial-recognition AI as a new technology of political control, and increased AI procurement indeed suppresses subsequent unrest. We then show that AI innovation benefits from autocrats' suppression of unrest: the contracted AI firms innovate more for both government and commercial markets and are more likely to export their products; noncontracted AI firms do not experience detectable negative spillovers. Taken together, these results suggest the possibility of sustained AI innovation under the Chinese regime: AI innovation entrenches the regime, and the regime's investment in AI for political control stimulates further frontier innovation.
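The "mutually reinforcing" claim rests on two regressions run in opposite directions: procurement on lagged unrest, and unrest on lagged procurement. A minimal stdlib sketch of that structure on a toy panel (all numbers fabricated for illustration; the paper's specification includes controls and fixed effects this sketch omits):

```python
# Hedged sketch (not the authors' specification): illustrate the paper's
# two-directional test with bivariate OLS on a fabricated prefecture panel.

def ols_slope(x, y):
    """Bivariate OLS slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Toy panel: lagged unrest episodes and subsequent AI procurement contracts.
unrest_lag  = [0, 1, 2, 3, 4, 5]
procurement = [1, 2, 2, 4, 5, 6]   # rises with past unrest

# Lagged procurement and subsequent unrest.
procure_lag = [1, 2, 2, 4, 5, 6]
unrest_next = [5, 4, 4, 2, 2, 1]   # falls with past procurement

print(ols_slope(unrest_lag, procurement) > 0)   # unrest -> more procurement
print(ols_slope(procure_lag, unrest_next) < 0)  # procurement -> less unrest
```

The two opposite-signed slopes are the shape of the feedback loop the abstract describes; establishing causality in the paper requires the richer identification strategy summarized above.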
Diagnosing Physician Error: A Machine Learning Approach to Low-Value Health Care
Quarterly Journal of Economics, 2022
Mullainathan, Sendhil; Obermeyer, Ziad
We use machine learning as a tool to study decision making, focusing specifically on how physicians diagnose heart attack. An algorithmic model of a patient's probability of heart attack allows us to identify cases where physicians' testing decisions deviate from predicted risk. We then use actual health outcomes to evaluate whether those deviations represent mistakes or physicians' superior knowledge. This approach reveals two inefficiencies. Physicians overtest: predictably low-risk patients are tested, but do not benefit. At the same time, physicians undertest: predictably high-risk patients are left untested, and then go on to suffer adverse health events including death. A natural experiment using shift-to-shift testing variation confirms these findings. Simultaneous over- and undertesting cannot easily be explained by incentives alone, and instead points to systematic errors in judgment. We provide suggestive evidence on the psychology underlying these errors. First, physicians use too simple a model of risk. Second, they overweight factors that are salient or representative of heart attack, such as chest pain. We argue health care models must incorporate physician error, and illustrate how policies focused solely on incentive problems can produce large inefficiencies.
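The audit logic is simple once a risk model exists: compare each patient's predicted risk to the physician's testing decision and flag the two kinds of mismatch. A minimal sketch, where the thresholds and patient records are illustrative assumptions rather than calibrated values from the paper:

```python
# Hedged sketch (not the authors' pipeline): flag predictably low-risk
# patients who were tested (overtesting) and predictably high-risk patients
# who were not (undertesting), given model-predicted heart-attack risk.

LOW_RISK, HIGH_RISK = 0.02, 0.30  # assumed cost-benefit cutoffs

patients = [
    {"id": 1, "predicted_risk": 0.01, "tested": True},   # candidate overtest
    {"id": 2, "predicted_risk": 0.45, "tested": False},  # candidate undertest
    {"id": 3, "predicted_risk": 0.10, "tested": True},   # consistent
    {"id": 4, "predicted_risk": 0.60, "tested": True},   # consistent
]

def audit(patients):
    """Return (overtested_ids, undertested_ids) relative to the cutoffs."""
    over = [p["id"] for p in patients
            if p["tested"] and p["predicted_risk"] < LOW_RISK]
    under = [p["id"] for p in patients
             if not p["tested"] and p["predicted_risk"] > HIGH_RISK]
    return over, under

over, under = audit(patients)
print("overtested:", over)    # e.g., [1]
print("undertested:", under)  # e.g., [2]
```

In the paper, these flags are only candidates for error; the crucial second step is checking realized health outcomes to rule out physicians' superior private information.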
Human Decisions and Machine Predictions
Quarterly Journal of Economics, 2018
Kleinberg, Jon; Lakkaraju, Himabindu; Leskovec, Jure; Ludwig, Jens; Mullainathan, Sendhil
Can machine learning improve human decision making? Bail decisions provide a good test case. Millions of times each year, judges make jail-or-release decisions that hinge on a prediction of what a defendant would do if released. The concreteness of the prediction task combined with the volume of data available makes this a promising machine-learning application. Yet comparing the algorithm to judges proves complicated. First, the available data are generated by prior judge decisions. We only observe crime outcomes for released defendants, not for those the judges detained. This makes it hard to evaluate counterfactual decision rules based on algorithmic predictions. Second, judges may have a broader set of preferences than the variable the algorithm predicts; for instance, judges may care specifically about violent crimes or about racial inequities. We deal with these problems using different econometric strategies, such as quasi-random assignment of cases to judges. Even accounting for these concerns, our results suggest potentially large welfare gains: one policy simulation shows crime reductions up to 24.7% with no change in jailing rates, or jailing rate reductions up to 41.9% with no increase in crime rates. Moreover, all categories of crime, including violent crimes, show reductions; these gains can be achieved while simultaneously reducing racial disparities. These results suggest that while machine learning can be valuable, realizing this value requires integrating these tools into an economic framework: being clear about the link between predictions and decisions; specifying the scope of payoff functions; and constructing unbiased decision counterfactuals.
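The "same jailing rate, less crime" simulation has a simple skeleton: re-rank defendants by predicted risk, jail the same number the judges did, and count crime among those released. A toy sketch with fabricated defendants (the real exercise must also solve the selective-labels problem the abstract describes, since crime outcomes for detained defendants are unobserved):

```python
# Hedged sketch (not the paper's estimator): compare an algorithmic release
# rule to the judges' rule at an equal jailing rate, on fabricated data.

defendants = [  # (predicted_risk, would_commit_crime_if_released)
    (0.9, 1), (0.7, 1), (0.5, 0), (0.3, 0), (0.2, 1), (0.1, 0),
]
judge_jailed = 2  # suppose judges jailed 2 of 6, not necessarily the riskiest

def crime_under_rule(defendants, n_jail):
    """Jail the n_jail highest-risk defendants; count crime among released."""
    ranked = sorted(defendants, key=lambda d: -d[0])
    released = ranked[n_jail:]
    return sum(crime for _, crime in released)

# Crime under the algorithmic rule at the judges' jailing rate:
print(crime_under_rule(defendants, judge_jailed))
```

Because the counterfactual column is unobservable for detained defendants in real data, the paper substitutes econometric strategies such as quasi-random judge assignment for the fabricated outcomes used here.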