Race in the Machine: Racial Disparities in Health and Medical AI

What does racial justice—and racial injustice—look like with respect to artificial intelligence in medicine (“medical AI”)? This Article suggests that racial injustice might look like a country in which law and ethics have decided that it is unnecessary to inform people of color that their health is being managed by a technology that likely encodes the centuries of inequitable medical care that people of color have received. Racial justice might look like an informed consent process that is reformed in light of this reality. This Article makes this argument in four Parts. Part I canvasses the deep and wide literature documenting that people of color suffer higher rates of illness than their white counterparts while also suffering poorer health outcomes than their white counterparts when treated for these illnesses. Part II then provides an introduction to AI and explains the uses that scholars and developers predict medical AI technologies will have in healthcare, focusing specifically on the management of pregnancy. Part III subsequently serves as a primer on algorithmic bias—that is, systematic errors in the operation of an algorithm that result in a group being unfairly advantaged or disadvantaged. This Part argues that we should expect algorithmic bias that results in people of color receiving inferior pregnancy-related healthcare, and healthcare generally, because medical AI technologies will be developed, trained, and deployed in a country with striking and unforgivable racial disparities in health.

Part IV forms the heart of the Article, making the claim that obstetricians, and healthcare providers generally, should disclose during the informed consent process their reliance on, or consultation with, medical AI technologies that likely encode inequities. To be precise, providers should have to tell their patients that an algorithm has informed the recommendation that the provider is making; moreover, providers should inform their patients how racial disparities in health may have impacted the algorithm’s accuracy. The Part supports this argument by recounting the antiracist, anti-white supremacist—indeed radical—origins of informed consent in the Nuremberg Trials’ rebuke of Nazi “medicine.” This Part argues that the introduction into the clinical encounter of medical AI—and the likelihood that these technologies will perpetuate racially inequitable healthcare while masking the same—is an invitation to reform the informed consent process to make it more consistent with the commitments that spurred its origination. This Part proposes that, given the antiracist roots of the doctrine of informed consent, it would be deeply ironic to allow the informed consent process to permit a patient—and, particularly, a patient of color—to remain ignorant of the fact that their medical care is being managed by a device or system that likely encodes racism. This Part argues that informing patients about the likelihood of race-based algorithmic bias—and the reasons that we might expect race-based algorithmic bias—may, in fact, be a prerequisite for actually transforming the inequitable social conditions that produce racial disparities in health and healthcare.

Introduction

As artificial intelligence (“AI”) technologies proliferate across sundry sectors of society—from mortgage lending and marketing to policing and public health—it has become apparent to many observers that these technologies will need to be regulated to ensure both that their social benefits outweigh their social costs and that these costs and benefits are distributed fairly across society. In October 2022, the Biden Administration announced its awareness of the dangers that “technology, data, and automated systems” pose to individual rights.1 Through its Office of Science and Technology Policy, the Administration declared the need for a coordinated approach to address the problems that AI technologies have generated—problems that include “[a]lgorithms used in hiring and credit decisions [that] have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination,” “[u]nchecked social media data collection [that] has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity,” and, most germane to the concerns of this Article, “systems [that are] supposed to help with patient care [but that] have proven unsafe, ineffective, or biased.”2

As an initial measure in the effort to eliminate—or, at least, contain—the harms that automation poses, the Administration offers a Blueprint for an AI Bill of Rights, which consists of “five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.”3 Crucially, the Blueprint identifies “notice and explanation” as a central element in a program that protects the rights of individuals in an increasingly automated society.4 That is, the Biden Administration proposes that in order to ensure that AI does not threaten “civil rights or democratic values,” individuals should be informed when “an automated system is being used,” and they should “understand how and why it contributes to outcomes that impact” them.5 To apply this principle to the context to which this Article is most attuned: if a hospital system or healthcare provider relies upon an AI technology when making decisions about a patient’s care, then the patient whose health is being managed by the technology ought to know about the technology’s usage.

Although the Biden Administration appears committed to the idea that an individual’s rights are violated when they are unaware that an AI technology has had some impact on the healthcare that they have received, many actors on the ground, including physicians and other healthcare providers, do not share this commitment. As one news report explains:

[T]ens of thousands of patients hospitalized at one of Minnesota’s largest health systems have had their discharge planning decisions informed with help from an artificial intelligence model. But few if any of those patients [have] any idea about the AI involved in their care. That’s because frontline clinicians . . . generally don’t mention the AI whirring behind the scenes in their conversations with patients.6

This health system is hardly unique in its practice of keeping this information from patients. “The decision not to mention these systems to patients is the product of an emerging consensus among doctors, hospital executives, developers, and system architects who see little value . . . in raising the subject.”7 Moreover, while these actors see few advantages associated with informing patients that AI has informed a healthcare decision or recommendation, they see many disadvantages, with the disclosure operating as a “distraction” and “undermin[ing] trust.”8

We exist in a historical moment in which the norms around notice and consent in the context of AI in healthcare have not yet emerged—with some powerful actors in the federal government proposing that patients are harmed when they are not notified that AI has impacted their healthcare, and other influential actors on the ground proposing that patients are harmed when they are notified that AI has impacted their healthcare.9 As we think about the shape that these norms ought to take, this Article implores us to keep in mind the fact of racial inequality and the likelihood that AI will have emerged from, and thereby reflect, that racial inequality. Indeed, this Article’s central claim is that the well-documented racial disparities in health that have existed in the United States since the dawn of the nation demand that providers inform all patients—but especially patients of color—that they have relied on or consulted with an AI technology when providing healthcare to them.

Although much has been written about AI in healthcare,10 or medical AI, very little has been written about the effects that medical AI can and should have on the informed consent process.11 Moreover, no article to date has interrogated what the reality of racial disparities in health should mean with respect to obtaining a patient’s informed consent to a medical intervention (or nonintervention) that an AI system has recommended. This Article offers itself as the beginning of that conversation. It makes the case that we ought to reform the informed consent process to ensure that patients of color are aware that their health is being managed by a technology that likely encodes the centuries of inequitable medical care that people of color have received in this country and around the world.

The Article proceeds in four Parts. Part I canvasses the deep and wide literature documenting that people of color suffer higher rates of illness than their white counterparts while also suffering poorer health outcomes than their white counterparts when treated for these illnesses. These racial disparities in health are also present in the context of pregnancy, a fact that is illustrated most spectacularly by the often-quoted statistic describing black women’s three- to four-fold increased risk of dying from a pregnancy-related cause as compared to white women.12 Part II then provides an introduction to AI and explains the uses that scholars and developers predict medical AI technologies will have in healthcare generally and in the management of pregnancy specifically. Part III subsequently serves as a primer on algorithmic bias—that is, systematic errors in the operation of an algorithm that result in a group being unfairly advantaged or disadvantaged. This Part explains the many causes of algorithmic bias and gives examples of algorithmic bias in medicine and healthcare. It argues that we should expect algorithmic bias from medical AI that results in people of color receiving inferior healthcare, because medical AI technologies will be developed, trained, and deployed in a country with striking and unforgivable racial disparities in health.

Part IV forms the heart of the Article. It begins by asking a question: Will patients of color even want medical AI? There is reason to suspect that significant numbers of them do not. Media attention to the skepticism with which many black people initially viewed COVID-19 vaccines has made the public newly aware of the higher levels of mistrust that black people, as a racial group, have toward healthcare institutions and their agents. That is, the banality of racial injustice has made black people more suspicious of medical technologies. This fact suggests that ethics—and justice—require providers to inform their patients of the use of a medical technology that likely embeds racial injustice within it.

The Part continues by making the claim that healthcare providers should disclose during the informed consent process their reliance on medical AI. To be precise, providers should have to tell their patients that an algorithm has affected the providers’ decision-making around the patients’ healthcare; moreover, providers should inform their patients how racial disparities in health may have impacted the algorithm’s predictive accuracy. This Part argues that requiring these disclosures as part of the informed consent process revives the antiracist, anti-white supremacist origins of the informed consent process. Indeed, the practice of informed consent originated in the Nuremberg Trials’ rebuke of Nazi medicine. These defiant, revolutionary origins have been expunged from the perfunctory form that the informed consent process has taken at present. Resuscitating the rebelliousness that is latent within informed consent will not only help to protect patient autonomy in the context of medical AI but may also be the condition of possibility for transforming the social conditions that produce racial disparities in health and healthcare. That is, the instant proposal seeks to call upon the rebellious roots of the doctrine of informed consent and use them as a technique of political mobilization. A short conclusion follows.

Two notes before beginning: First, although this Article focuses on medical AI in pregnancy and prenatal care, its argument is applicable to informed consent in all contexts—from anesthesiology to x-rays—in which a provider might utilize a medical AI device. Concentrating on pregnancy and prenatal care allows the Article to offer concrete examples of the phenomena under discussion and, in so doing, make crystal clear the exceedingly high stakes of our societal and legal decisions in this area.

Second, the moment that a provider consults a medical AI device when delivering healthcare to a patient of color certainly is not the first occasion in that patient’s life in which racial disenfranchisement may come to impact the healthcare that they receive. That is, we can locate racial bias and exclusion at myriad sites within healthcare, medicine, and the construction of medical knowledge well before a clinical encounter in which medical AI is used. For example: people of color are underrepresented within clinical trials that test the safety and efficacy of drugs—a fact that might impact our ability to know whether a drug actually is safe and effective for people of color.13 For example: the National Institutes of Health (“NIH”) and the National Science Foundation (“NSF”) fund medical research conducted by investigators of color at lower rates than that conducted by white investigators14—a fact that might contribute to the underfunding of medical conditions that disproportionately impact people of color. For example: most medical schools still approach race as a genetic fact instead of a social construction, with the result being that most physicians in the United States have not been disabused of the notion that people of color—black people, specifically—possess genes and genetic variations that make them get sicker and die earlier than their white counterparts.15 For example: pulse oximeters, which use infrared light to measure an individual’s blood oxygen saturation levels, are so common as to be called ubiquitous, even though it is well-known that the devices do not work as well on more pigmented skin.16 For example: most clinical studies that are used to establish evidence-based practices are conducted in well-resourced facilities, making their generalizability to more contingently equipped and more unreliably funded facilities uncertain.17 For example: many research studies do not report their findings by race, thereby impeding our ability to know whether the studies’ results are equally true for all racial groups.18 And so on. If providers ought to notify their patients (especially their patients of color) that the provider has relied upon medical AI when caring for the patient, then it is likely true that providers similarly ought to notify their patients about racial inequity in other contexts as well. That is, there is a compelling argument that when a provider prescribes a medication to a patient, they might need to notify the patient that precious few people who were not white cisgender men participated in the clinical trial of the medication.19 There is a compelling argument that when a provider tells a black patient that the results of her pulmonary function test were “normal,” they might also need to inform that patient that if she were white, her results would be considered “abnormal,” as the idea that the races are biologically distinct has long informed notions of whether a set of lungs is healthy or not.20 There is a compelling argument that when a provider affixes a pulse oximeter to the finger of a patient of color, they might also need to inform that patient that the oximeter’s readings may be inaccurate—and the care that she receives based on those readings may be inferior21—given the widely known and undisputed fact that such devices do not work as well on darker skin. There is a compelling argument that when a physician tells a pregnant patient laboring in a safety net hospital that the evidence-based practice for patients presenting in the way that she presents is an artificial rupture of membranes (“AROM”) to facilitate the progression of the labor, they might also need to inform the patient that the studies that established AROM as an evidence-based practice were conducted in well-funded research hospitals that were affiliated with universities.22 There is a compelling argument that when a physician tells a forty-year-old black patient that he does not need to be screened for colorectal cancer until age forty-five, they might also need to inform the patient that the studies that established forty-five as the age when such screenings should commence did not report their findings by race.23 And so on.

It does not defeat this Article’s claim to observe that racial bias and exclusion are pervasive throughout medicine and healthcare and that providers in many contexts outside of the use of medical AI ought to notify patients how this bias and exclusion may affect the healthcare that they are receiving. Indeed, it is seductive to claim in those other contexts that it is better to fix the inequities in the healthcare than to tell patients of color about them—a fact that is also true in the context of medical AI. However, fixing the inequities in healthcare in those other contexts and telling patients about them are not mutually exclusive—a fact that is also true in the context of medical AI. And as Part IV argues, telling patients about the inequities in those other contexts might be the condition of possibility of fixing the inequities—a fact that is also true in the context of medical AI.

Essentially, this Article’s claim may be applied in a range of circumstances. In this way, this Article’s investigation into how algorithmic bias in medical AI should affect the informed consent process is simply a case study of a broader phenomenon. This Article’s insights vis-à-vis medical AI are generalizable to all medical interventions and noninterventions.

  1.  See Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, The White House Off. of Sci. & Tech. Pol’y, https://www.white‌house.gov/ostp/ai-bill-of-rights/ [https://perma.cc/E5GS-6ZP3] (last visited Jan. 5, 2024). Some states and cities have also initiated efforts to regulate AI. See, e.g., Laura Schneider, Debo Adegbile, Ariella Feingold & Makenzie Way, NYC Soon to Enforce AI Bias Law, Other Jurisdictions Likely to Follow, WilmerHale (Apr. 10, 2023), https://www.wilmerhale.com/insights/client-alerts/‌20230410-nyc-soon-to-enforce-ai-bias-law-other-jurisdictions-likely-to-follow [https://perm‌a.cc/K47J-XZUQ] (“New York City’s Department of Consumer and Worker Protection (DCWP) is expected to begin enforcing the City’s novel artificial intelligence (AI) bias audit law on July 5, 2023. This law prohibits the use of automated decision tools in employment decisions within New York City unless certain bias audit, notice, and reporting requirements are met.”); Jonathan Kestenbaum, NYC’s New AI Bias Law Broadly Impacts Hiring and Requires Audits, Bloomberg Law (July 5, 2023, 5:00 AM), https://news.bloomberglaw.com/‌us-law-week/nycs-new-ai-bias-law-broadly-impacts-hiring-and-requires-audits [https://perm‌a.cc/L94C-X3BN] (observing that the “New Jersey Assembly is considering a limit on use of AI tools in hiring unless employers can prove they conducted a bias audit,” that “Maryland and Illinois have proposed laws that prohibit use of facial recognition and video analysis tools in job interviews without consent of the candidates,” and that “the California Fair Employment and Housing Council is mulling new mandates that would outlaw use of AI tools and tests that could screen applicants based on race, gender, ethnicity, and other protected characteristics”); Attorney General Bonta Launches Inquiry into Racial and Ethnic Bias in Healthcare Algorithms, State of Cal. Dep’t of Just. Off. of the Att’y Gen. (Aug. 31, 2022), https://oag.‌ca.gov/‌news/press-releases/attorney-general-bonta-launches-inquiry-racial-and-ethnic-bias-healthca‌re [https://perma.cc/ERC4-GVJJ] (“California Attorney General Rob Bonta today sent letters to hospital CEOs across the state requesting information about how healthcare facilities and other providers are identifying and addressing racial and ethnic disparities in commercial decision-making tools. The request for information is the first step in a DOJ inquiry into whether commercial healthcare algorithms—types of software used by healthcare providers to make decisions that affect access to healthcare for California patients—have discriminatory impacts based on race and ethnicity.”).
  2.  See Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, supra note 1.
  3.  Id.
  4.  Id.
  5.  Id.
  6.  Rebecca Robbins & Erin Brodwin, An Invisible Hand: Patients Aren’t Being Told About the AI Systems Advising Their Care, STAT (July 15, 2020), https://www.statnews.com/‌2020/07/15/artificial-intelligence-patient-consent-hospitals/ [https://perma.cc/R3F5-NNX4].
  7.  Id.
  8.  Id.
  9.  See also Attorney General Bonta Launches Inquiry into Racial and Ethnic Bias in Healthcare Algorithms, supra note 1 (understanding as problematic the fact that some AI tools used in healthcare “are not fully transparent to healthcare consumers”); cf. Schneider et al., supra note 1 (noting that New York City’s law regulating AI in employment requires an employer to provide “applicants and employees who reside in New York City notice of its use of AI in hiring and/or promotion decisions, either via website, job posting, mail or e-mail”).

    Interestingly, some investigations have shown that some patients do not want to know when physicians and hospital administrators rely on medical AI when managing their healthcare. See Robbins & Brodwin, supra note 6 (reporting that some patients who were interviewed stated that “they wouldn’t expect or even want their doctor to mention” the use of medical AI and stating that these patients “likened it to not wanting to be privy to numbers around their prognosis, such as how much time they might expect to have left, or how many patients with their disease are still alive after five years”). However, other studies have shown that patients do desire this information. See Anjali Jain et al., Awareness of Racial and Ethnic Bias and Potential Solutions to Address Bias with Use of Health Care Algorithms, JAMA Health F., June 2, 2023, at 10, https://jamanetwork.com/journals/jama-health-forum/fullarticle/2805595 [https://perma.cc/9FMK-E4VV] (discussing a “recent, nationally representative survey” that showed that “patients . . . wanted to know when [AI] was involved in their care”).

  10.  Indeed, volumes have been written about algorithmic bias, what AI technologies mean with respect to data privacy, and how we ought to regulate AI inside the medical context. See generally The Oxford Handbook of Digital Ethics (Carissa Véliz ed., 2021).
  11.  See I. Glenn Cohen, Informed Consent and Medical Artificial Intelligence: What to Tell the Patient?, 108 Geo. L.J. 1425, 1428 (2020) (noting that his Article, which was published just three years ago, was “the first to examine in-depth how medical AI / [machine learning] intersects with our concept of informed consent”).
  12.  Elizabeth A. Howell, Reducing Disparities in Severe Maternal Morbidity and Mortality, 61 Clinical Obstetrics & Gynecology 387, 387 (2018).
  13.  See The Nat’l Acads. of Scis., Eng’g & Med., Improving Representation in Clinical Trials and Research: Building Research Equity for Women and Underrepresented Groups 24 (Kirsten Bibbins-Domingo & Alex Helman eds., 2022), https://nap.nationalacademies.org/‌catalog/26479/improving-representation-in-clinical-trials-and-research-building-research‌-equity [https://perma.cc/FE2H-9YC5] (explaining that “research has demonstrated that many groups underrepresented and excluded in clinical research can have distinct disease presentations or health circumstances that affect how they will respond to an investigational drug or therapy” and that “[s]uch differences contribute to variable therapeutic responses and necessitate targeted efficacy and safety evaluation”). An FDA report of clinical trials that took place between 2015 and 2019 revealed that while non-Hispanic white people constituted only 61% of the general population in the United States, they were 78% of trial participants. See id. at 35; see also id. at 44–45 (“Even recently completed trials have failed to include enrollment consistent with the distribution of disease across the population—a Phase 2 trial of crenezumab in Alzheimer’s disease with 360 participants across 83 sites in 6 countries reported 97.5 percent of participants being white, and only 2.8 percent of all participants being Hispanic.”).

    Notably, clinical trials only rarely include pregnant and lactating people. See id. at 40. This means that when most medications are introduced into the market, their safety and efficacy vis-à-vis pregnant and lactating people are unknown—although it is quite common for people to take medications while pregnant or lactating. See id. (“During pregnancy and lactation, greater than 90 percent of these individuals take at least one medication, either to treat pregnancy-related complications or to treat ongoing medical issues.”).

  14.  See Christine Yifeng Chen et al., Meta-Research: Systemic Racial Disparities in Funding Rates at the National Science Foundation, eLife, Nov. 29, 2022, at 2, https://doi.org/10.7554/‌eLife.83071 [https://perma.cc/NFS8-T3LB] (showing that the National Science Foundation funded proposals by white principal investigators at +8.5% relative to the average funding rate while funding proposals by Asian, black, and Native Hawaiian/Pacific Islander principal investigators at −21.2%, −8.1%, and −11.3% relative to the average funding rate, respectively); Donna K. Ginther et al., Race, Ethnicity, and NIH Research Awards, 333 Science 1015, 1016 (2011), https://doi.org/10.1126/science.1196783 [https://perma.cc/NQA9-LYMG] (showing that the National Institutes of Health funded proposals by black principal investigators at close to half the rate of white principal investigators).
  15.  See Christina Amutah et al., Misrepresenting Race—The Role of Medical Schools in Propagating Physician Bias, 384 New Eng. J. Med. 872, 873–74 (2021). Funding for research into the imagined genetic causes of racial disparities in health outcomes vastly outstrips funding for research into social determinants of health or the physiological effects of stress and racism on people of color. Shawn Kneipp et al., Trends in Health Disparities, Health Inequity, and Social Determinants of Health Research, 67 Nursing Rsch. 231, 231 (2018). See also René Bowser, Racial Profiling in Health Care: An Institutional Analysis of Medical Treatment Disparities, 7 Mich. J. Race & L. 79, 114 (2001) (arguing that “physicians who focus on racism as opposed to cultural peculiarities or the genetic basis of disease are likely to be considered both as not ‘real scientists’ and as dangerous” and stating that producing research that explains racial disparities in health outcomes in terms of culture and genes, as opposed to structural racism and inherited disadvantage, “enhances the researcher’s status”). This funding disparity undoubtedly contributes to the perpetuation of the myth of biological race.
  16.  See Haley Bridger, Skin Tone and Pulse Oximetry: Racial Disparities in Care Tied to Differences in Pulse Oximeter Performance, Harv. Med. Sch. (July 14, 2022), https://hms.‌harvard.edu/news/skin-tone-pulse-oximetry [https://perma.cc/HZW8-YMAS].
  17.  See The Nat’l Acads. of Scis., Eng’g & Med., supra note 13, at 25 (observing that “[c]linical research is often performed in well-resourced tertiary care sites in large urban centers, and may have limited applicability to community sites, less well-resourced safety net settings, and rural settings”).
  18.  See id. at 31 (stating that the “[l]ack of representative studies on screening for cancer or cardiometabolic disease may lead to recommendations that fail to consider earlier ages or lower biomarker thresholds to start screening that might be warranted in some populations” and observing that “due to [a] lack of studies that report findings by race,” the guidelines for some screenings are universal, although there is some evidence that they should vary by race and age).
  19.  See Barbara A. Noah, Racial Disparities in the Delivery of Health Care, 35 San Diego L. Rev. 135, 152 (1998) (noting that “[b]efore the National Institutes of Health (NIH) issued a directive in 1990, investigators almost uniformly tested new chemical entities only on white male subjects”).
  20.  See Lundy Braun, Breathing Race into the Machine: The Surprising Career of the Spirometer from Plantation to Genetics, at xv (2014).
  21.  See Bridger, supra note 16 (describing a study that showed that pulse oximeters reported blood oxygen saturation levels for patients of color that were higher than what they actually were, leading these patients’ providers to give them supplemental oxygen at lower rates).
  22.  See, e.g., Alan F. Guttmacher & R. Gordon Douglas, Induction of Labor by Artificial Rupture of the Membranes, 21 Am. J. Obstetrics & Gynecology 485, 485 (1931) (establishing artificial rupture of the membranes as an evidence-based practice in obstetrics after studying the safety and efficacy of the procedure among patients cared for at a clinic affiliated with Johns Hopkins University).
  23.  See Screening for Colorectal Cancer: US Preventive Services Task Force Recommendation Statement, 325 JAMA 1965, 1970 (2021), https://jamanetwork.com/jour‌nals/jama/fullarticle/2779985 [https://perma.cc/TV68-6W75].

Dynamic Tort Law: Review of Kenneth S. Abraham & G. Edward White, Tort Law and the Construction of Change: Studies in the Inevitability of History

Rarely does a book—let alone one on torts—come along with true staying power. Tort Law and the Construction of Change is such a book. It stopped me in my tracks when I first read it, and it has been a book to which I have returned again and again while teaching torts and probing new research projects. With Tort Law and the Construction of Change, Professors Kenneth Abraham and G. Edward White, who have inspired generations of torts students and scholars,1 have truly energized and inspired this nearly twenty-year veteran in the field.

Abraham and White explore the past, present, and future of tort law through a historical, theoretical, and pragmatic lens, seeking to excavate and explicate how doctrines evolve. Their central thesis is that “[c]ontinuity arises in part out of linking current decisions, even if they are innovative and constitute an expansion of liability, to the principles expressed or implied in prior precedents,”2 and that “external pressure for change in established common law doctrines is almost always filtered through received doctrinal frameworks.”3 I pay tribute to their book in this Essay, with equal parts praise (Part I), quibbling (Part II), and prodding for roads not taken (Part III).4

  1.  As UVA Law Dean Risa Goluboff remarked at the UVA Law book panel Festschrift for Professors Abraham and White:

    [They] have been anchors of this faculty for a long time, maybe longer than you realize. They have been on this faculty for a combined total of nearly 90 years, both of them spending most of their professional lives here . . . . Over the past 10 years or so, they have both taught torts to generations of UVA Law students among other things.

    Transcript of UVA Law Book Panel at 2 (Sept. 22, 2022) (on file with the Virginia Law Review) [hereinafter Transcript].

  2.  Kenneth S. Abraham & G. Edward White, Tort Law and the Construction of Change: Studies in the Inevitability of History 206 (2022).
  3.  Id. at 213.
  4. Here, I build upon remarks I made at the UVA Law book panel. See Transcript, supra note 1, at 13 (“I have three points I want to make. The first is going to be some praise. There’s a lot that’s praiseworthy in the book. The second is going to be a quibble, and the third is going to be a thought about the future.”).

Harmonizing Federal Immunities

When a federal employee is charged with a state crime based on conduct that was within their official responsibilities, the United States Constitution protects them from prosecution through Supremacy Clause immunity. This immunity was developed by the Supreme Court in a small set of cases from around the turn of the twentieth century, but the Court has not mentioned it since. As lower courts have construed it, the immunity generally provides a highly protective standard. This Note questions that standard by attempting to re-align Supremacy Clause immunity with another federal immunity that also derives from the Supremacy Clause: federal tax immunity. Until the mid-twentieth century, federal tax immunity cases protected the federal government from almost any state-tax-related burdens, even indirect ones. But in 1937, the Supreme Court abruptly changed course and overruled a century of its own precedent. As a result, federal tax immunity today has only a shadow of its previous force. In relating these two immunities to each other, this Note aims to shine light on Supremacy Clause immunity as a doctrine based on an outdated conception of the role of federal courts in our federalist system. It ties the Court’s shift in federal tax immunity to a broader philosophical transformation that also appeared in other doctrines, like those governing the application of the Tenth Amendment and preemption. And it shows that Supremacy Clause immunity as it currently stands is the sour note in an otherwise consistent harmony of federalist relationships.

Introduction

In two disconnected and hypothetical1 locations, two government officers in the performance of their duties run afoul of a state criminal law. One is an FBI sniper who takes an arguably unjustified shot at a fleeing man and kills an innocent bystander. The other is a state police officer who, facing the same situation, makes the same tragic error. Both officers are charged with a crime: involuntary manslaughter. Assuming all relevant facts are parallel between the two scenarios, does the law dictate that the state police officer should stand trial while the federal officer is held to be immune from prosecution? More generally, given the structure of our federalist system and the text, purpose, and history of the United States Constitution, how often should it be the case that a federal officer is immune from state criminal prosecution despite the fact that a state officer would be held culpable for doing the very same thing?

Courts tell us that this question is answered by the Constitution’s Supremacy Clause.2 But the Supreme Court has not been generous with its guidance. The concept of federal officer immunity from state criminal prosecution was first explored in In re Neagle,3 but although that case is memorable for its remarkably dramatic set of facts,4 it is well over a century old and offers little in the way of specifics. After an initially rapid development, Supremacy Clause immunity has remained entirely untouched by the Supreme Court since 1920, and it has arisen in lower federal courts only sporadically during that intervening century. Though no clear legal standard has emerged, the doctrine has generally been construed to offer sweeping immunity to federal employees who commit state crimes, as long as their actions bore some relationship to their federal duties.5

Despite its infrequent appearance in federal courts, Supremacy Clause immunity may have unexpected contemporary significance. Scholars have pointed out that the historical periods when it is most likely to arise are times when there are strong political tensions between state and federal governments.6 With federal and state governments at odds in areas as disparate as electoral policy,7 public health,8 immigration,9 and law enforcement,10 now is such a time. It is thus unsurprising that a federal circuit court was recently presented with a Supremacy Clause immunity claim in a case that evokes the broader public debate about immunity from suit for law enforcement officers.11 And any abstract conjecture about the doctrine’s relevance is cemented by ongoing conversations about Georgia’s potential prosecution of former President Trump for attempting to illegally influence vote counts in the aftermath of the 2020 election, and the possibility that he will invoke Supremacy Clause immunity.12 That prosecution, were it to occur, would also provide the most likely avenue for Supremacy Clause immunity to finally reappear in the Supreme Court.

This Note approaches Supremacy Clause immunity from a novel perspective. Others have compared it to qualified immunity and preemption,13 but no one has attempted to untangle the relationship between Supremacy Clause immunity and federal tax immunity, a doctrine based on the same clause of the Constitution and serving the same purpose: protecting the functioning of the federal government from state obstruction. Since the seminal case McCulloch v. Maryland,14 the Court has spoken relatively frequently about federal tax immunity,15 and the doctrine it has expounded provides helpful illumination for contemporary attempts to understand the scope of Supremacy Clause immunity. The comparison yields a surprising conclusion: viewed in light of federal tax immunity, the approach that lower courts have been taking to Supremacy Clause immunity appears decidedly anachronistic. In fact, Supremacy Clause immunity as it currently exists is entirely inconsistent with the understanding of the Supremacy Clause that underlies every related constitutional doctrine. Neagle arose at a time when the Court’s perception of its own power to override state laws was at its zenith.16 But in the last century, that has changed. As a result, the Court’s analysis of federal tax immunity has shifted dramatically, as has the doctrine of preemption.

These concurrent shifts demonstrate the Supreme Court’s adoption of a theory of government called “process federalism,”17 which was proposed by Professor Herbert Wechsler in a highly influential mid-century Article.18 Wechsler’s analysis focused on the judiciary’s role in protecting states from the federal government, for example by invalidating federal actions as infringing on the powers of the states.19 He argued that the judiciary’s role in this area was limited.20 In his view, if the matter were left to Congress, states’ interests would naturally be accommodated based on their role in Congress’s structure and composition.21 Other scholars later related Wechsler’s theory to doctrines that pointed in the other direction, and concluded that courts should also decline to invalidate state action as obstructing the federal government without explicit congressional direction.22 Otherwise the judiciary is inclined to be overprotective of the federal government and deaf to states’ concerns.

Jurisprudential shifts in both federal tax immunity and preemption reveal the Supreme Court’s wholesale embrace of this state-protective spin on process federalism. In each of these areas the Court previously nullified state action on a constitutional basis whenever it perceived a conflict between federal and state interests. But now it invalidates state law only if it perceives congressional intent to do so.23 Supremacy Clause immunity has escaped this treatment, and as it currently stands, it remains irreconcilable with the theoretical underpinnings of other Supremacy Clause-derived doctrines. In cases where federal officers claim Supremacy Clause immunity, federal judges still routinely refuse to enforce state criminal law based only on their own perceptions of conflict between federal and state interests, and without any reference to congressional intent. The legal standard these cases apply is no longer consistent with the Supreme Court’s understanding of the Supremacy Clause generally, even if it is reasonably derived from the scarce text of the Court’s century-old Supremacy Clause immunity cases.

This Note proceeds in four Parts to propose a new approach to evaluating claims of Supremacy Clause immunity. Part I charts the origin of Supremacy Clause immunity in a string of turn-of-the-century Supreme Court cases and its subsequent development in circuit courts. Part II rejects an approach to Supremacy Clause immunity that has grown in influence in more recent cases and which has engendered some scholarly support: defining Supremacy Clause immunity through analogy to qualified immunity. Part III argues that a more appropriate comparison can be made to a closely analogous doctrine, federal tax immunity, and it describes the development of that doctrine and establishes its relationship to process federalism. Finally, Part IV applies the analysis to Supremacy Clause immunity and explores some of its implications.

  1. Only partially hypothetical; one is in Idaho. See Idaho v. Horiuchi, 253 F.3d 359, 363–64 (9th Cir. 2001).
  2. U.S. Const. art. VI, cl. 2 (“This Constitution, and the Laws of the United States . . . shall be the supreme Law of the Land . . . any Thing in the Constitution or Laws of any State to the Contrary notwithstanding.”).
  3. 135 U.S. 1, 62 (1890).
  4. See id. at 45 (“As [the former Chief Justice] was about leaving the room, . . . he succeeded in drawing a bowie-knife, when his arms were seized by a deputy marshal and others present to prevent him from using it, and they were able to wrench it from him only after a severe struggle.”).
  5. The standard that has developed in lower courts is discussed in Subsection I.B, infra.
  6. See Seth P. Waxman & Trevor W. Morrison, What Kind of Immunity? Federal Officers, State Criminal Law, and the Supremacy Clause, 112 Yale L.J. 2195, 2232 (2003) (stating that Supremacy Clause immunity tends to arise “around historical moments of significant friction between the federal government and the States”).
  7. Nick Corasaniti & Reid J. Epstein, A Voting Rights Push, as States Make Voting Harder, N.Y. Times (Jan. 11, 2022), https://www.nytimes.com/2022/01/11/us/politics/biden-voting-rights-state-laws.html [https://perma.cc/39MC-2PR7] (describing that eighteen states are passing laws containing “a host of new voting restrictions” while Democrats in Congress try to pass a bill prohibiting state laws with those very types of restrictions).
  8.  See Nancy J. Knauer, The COVID-19 Pandemic and Federalism: Who Decides?, 23 N.Y.U. J. Legis. & Pub. Pol’y 1, 8 (2020) (arguing that the current federal-state collaborative approach to pandemic response “left the federal government ill-prepared to respond to the COVID-19 pandemic because of conflicting priorities”); James G. Hodge, Jr., Federal vs. State Powers in Rush to Reopen Amid Coronavirus Pandemic, Just Sec. (Apr. 27, 2020), https://www.justsecurity.org/69880/federal-vs-state-powers-in-rush-to-reopen-amid-corona‌virus-pandemic/ [https://perma.cc/62LX-4B2G] (“[T]he novel coronavirus is exposing a deep rift in American federalism as federal and state governments vie for primacy in remedying the nation’s ills.”).
  9.  See Arizona v. United States, 567 U.S. 387, 416 (2012) (holding, in a suit filed by the United States seeking an injunction against the enforcement of Arizona law, that the law providing for state enforcement of federal immigration policy was preempted).
  10.  Compare H.R. 1280, 117th Cong. § 102 (2021) (limiting defense of qualified immunity in suits against law enforcement officers), with Iowa Code § 670.4A (2023) (reinforcing defense of qualified immunity as a matter of Iowa state law).
  11.  See Virginia v. Amaya, No. 1:21-cr-91, 2021 WL 4942808 (E.D. Va. Oct. 22, 2021), appeal dismissed, 2022 WL 1259877 (4th Cir. Apr. 25, 2022). The Fourth Circuit dismissed the case after a newly elected attorney general ceased pursuing the appeal. Tom Jackman, Va. Attorney General Miyares Ends Prosecution of U.S. Park Police Officers in Ghaisar Case, Wash. Post (Apr. 22, 2022, 7:51 PM), https://www.washingtonpost.com/dc-md-va/2022/04/‌22/ghaisar-case-dismissed/ [https://perma.cc/89CT-6YD2].
  12. See Norman Eisen et al., Fulton County, Georgia’s Trump Investigation: An Analysis of the Reported Facts and Applicable Law 216–52 (2022).
  13. Waxman & Morrison, supra note 6, at 2241.
  14. 17 U.S. (4 Wheat.) 316, 395 (1819).
  15. See, e.g., Graves v. New York ex rel. O’Keefe, 306 U.S. 466, 477 (1939) (stating that federal immunity from state taxation extends to corporations owned and controlled by the government).
  16. See Stephen A. Gardbaum, The Nature of Preemption, 79 Cornell L. Rev. 767, 801 (1994) (characterizing the turn of the century as a “double shift in the direction of enhanced federal power” based on the Court’s overturning state laws as either preempted or unconstitutional under the Dormant Commerce Clause).
  17.  See William Marshall, American Political Culture and the Failures of Process Federalism, 22 Harv. J.L. & Pub. Pol’y 139, 147–48 (1998); Ernest A. Young, Two Cheers for Process Federalism, 46 Vill. L. Rev. 1349, 1350 (2001).
  18. Herbert Wechsler, The Political Safeguards of Federalism: The Role of the States in the Composition and Selection of the National Government, 54 Colum. L. Rev. 543, 546 (1954).
  19. Id. at 558–60.
  20. Id. at 560.
  21. Id. at 547.
  22.  Laurence H. Tribe, Intergovernmental Immunities in Litigation, Taxation, and Regulation: Separation of Powers Issues in Controversies About Federalism, 89 Harv. L. Rev. 682, 695, 712–13 (1976).
  23. See discussion infra Section III.B.