Streamlining Removal, Straining Rights: AI-Driven Immigration Enforcement and Constitutional Limits
While President Donald Trump’s campaign promise to deport millions of undocumented immigrants may have bolstered his popularity among conservative voters, legal scholars have identified two crucial barriers such a plan would face: logistical and constitutional. [1] Jean Lantz Reisz, Co-Director of the USC Immigration Clinic at the Gould School of Law, highlights the logistical barriers to large-scale deportation, noting that “the sheer number of longtime noncitizen residents and the backlog of pending immigration cases, make it difficult to remove millions of people.” [2] Roberto Suro, Director of the Sol Price Center for Social Innovation, emphasizes that such an undertaking is not practically feasible given the federal government’s insufficient personnel to deport millions of individuals. [3] He explains that door-to-door deportations are so labor-intensive that they become inefficient and unsustainable. [4] Reisz expands on this concern, emphasizing that a mass deportation program would require an enormous expansion of federal resources, including many more immigration officers, expanded detention capacity, and billions of dollars in funding to locate, detain, and remove millions of people. [5] However, Aaron Reichlin-Melnick, policy director at the American Immigration Council, told the BBC that such a “massive infusion of resources…likely doesn’t exist.” [6] Scholars therefore remain dubious that such a plan would be logistically feasible. [7]
Given the operational limitations to large-scale deportations, attention has shifted towards how Artificial Intelligence (AI) and Automated Decision-Making (ADM) systems could aid the federal government in overcoming these limitations by streamlining immigration enforcement workflows. [8] While these systems promise efficiency, scholars warn that such gains may come at the expense of constitutional protections related to due process, accountability, and transparency. [9]
As the Supreme Court recognized in Yamataya v. Fisher, non-citizens physically present in the United States are entitled to due process in deportation proceedings. [10] This protection, rooted in the Fifth Amendment’s Due Process Clause, requires that individuals receive notice and a hearing at which the government bears the burden of proving deportability by clear and convincing evidence. [11] These principles establish the constitutional baseline for assessing the use of AI in immigration enforcement.
An ADM system is a technological tool that assists in gathering, filtering, or processing data that is then delivered to a human decision-maker to inform or influence their judgment. [12] Built on predictive analytics, these systems use statistical models, such as logistic or linear regression, to detect patterns within large data sets. [13] When powered by machine learning, ADM systems train on historical data labeled by humans to refine predictive accuracy. [14] AI often underlies ADM systems, providing predictive modeling and language analysis that can convert case data into risk scores or classifications used to guide enforcement decisions. [15] Though the Department of Homeland Security (DHS) has employed AI-based technologies for over a decade, the Mijente report notes a recent and substantial expansion of these tools into sub-agencies like U.S. Citizenship and Immigration Services (USCIS) and Immigration and Customs Enforcement (ICE). [16]
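To make this pattern concrete, the following minimal sketch illustrates the pipeline the literature describes: a logistic-regression model is trained on human-labeled historical records and produces a risk score for a new case, which is then passed to a human decision-maker. The feature names and data below are invented for illustration and do not reflect any agency’s actual system.

```python
# Minimal sketch of the ADM pattern described above. A logistic-regression
# model trained on human-labeled historical records emits a risk score that
# is handed to a human decision-maker. All features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [years_in_us, missed_checkins, has_counsel]
X_train = np.array([
    [10, 0, 1],
    [2, 3, 0],
    [7, 1, 1],
    [1, 2, 0],
    [15, 0, 1],
    [3, 4, 0],
])
# Human-applied labels from past outcomes (1 = absconded, 0 = complied).
y_train = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# A new case is converted into the same feature vector and scored.
new_case = np.array([[5, 1, 0]])
risk_score = model.predict_proba(new_case)[0, 1]

# The score informs, but does not itself issue, the enforcement decision.
print(f"Predicted flight risk: {risk_score:.2f}")
```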
Within USCIS, several AI-based tools exemplify this escalation. The Asylum Text Analytics (ATA) system accelerates immigration processes by identifying patterns in asylum applications to detect fraud. [17] The Fraud Detection and National Security Data System (FDNS-DS Next Gen) flags high-risk applications for further review, while RelativityOne, an eDiscovery platform, employs AI to enhance the efficiency of document review. [18] George Yijun Tian, a law professor at the University of Technology Sydney, notes that although these systems do not directly result in deportation orders, they are instrumental in flagging cases for investigation and shaping which cases receive heightened scrutiny. [19] By automating routine tasks, these tools allow USCIS officers to manage heavy caseloads and refocus their judgment on complex cases that may result in deportation. As a result, these systems reduce the personnel burden typically required in USCIS adjudications, streamlining the immigration process.
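The precise methods behind tools like ATA are not public; the sketch below assumes one common fraud signal, near-duplicate narratives across applications, and uses TF-IDF cosine similarity as a stand-in for whatever proprietary analysis the real system performs. The texts and threshold are invented, and flagged pairs go to human review, consistent with the tools’ advisory role.

```python
# Hedged sketch of text-pattern fraud detection: flag pairs of asylum
# narratives that are suspiciously similar to one another. TF-IDF cosine
# similarity is an assumed technique; texts and threshold are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

applications = [
    "I fled my country after threats were made against my family.",
    "I fled my country after threats were made against my family.",  # near-duplicate
    "My business was extorted and I reported the gang to the police.",
]

tfidf = TfidfVectorizer().fit_transform(applications)
sim = cosine_similarity(tfidf)

# Flag distinct pairs above an (invented) similarity threshold for human review.
for i in range(len(applications)):
    for j in range(i + 1, len(applications)):
        if sim[i, j] > 0.9:
            print(f"Applications {i} and {j} flagged for review (sim={sim[i, j]:.2f})")
```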
Similarly, ICE’s Enforcement and Removal Operations applies a machine-learning “Hurricane Score.” The model ingests case-management details and participant actions “based on absconding patterns” from previous cases to determine the likelihood that a non-citizen under ICE’s management but not in detention will abscond. [20] While DHS emphasizes that this score merely informs human decisions, its ability to triage thousands of cases at once can accelerate enforcement by prioritizing high-risk non-citizens for faster case reviews and expedited escalation when noncompliance occurs. [21] The agency also relies on the Risk Classification Assessment (RCA), the nation’s largest automated risk-assessment tool, developed as part of a broader effort to expand the detention apparatus. [22] Aggregating over 100 factors across four modules (special vulnerabilities, mandatory detention, public safety, and flight), the RCA combines database records and interviews to assign risk categories. It then issues a custody recommendation, detain or release, based on the assessed flight and public-safety risk. [23] This standardization enables the agency to handle significantly more cases with fewer personnel, shifting tasks that once required labor-intensive manual review into rapid, machine-guided screening.
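DHS has not published the RCA’s factors, weights, or thresholds, but its reported structure, several modules aggregated into a detain-or-release recommendation, can be sketched in drastically simplified form. Every factor, weight, and cutoff below is hypothetical.

```python
# Illustrative sketch of a modular, rule-based custody recommendation in the
# general shape reported for the RCA. The real tool weighs over 100 factors
# whose logic is not public; everything here is invented for illustration.
from dataclasses import dataclass

@dataclass
class CaseFacts:
    mandatory_detention: bool  # statutory detention category
    prior_removal_order: bool  # hypothetical flight-risk factor
    violent_conviction: bool   # hypothetical public-safety factor
    is_caregiver: bool         # hypothetical special-vulnerability factor

def recommend_custody(facts: CaseFacts) -> str:
    # The mandatory-detention module short-circuits all other modules.
    if facts.mandatory_detention:
        return "detain"
    flight_risk = 2 if facts.prior_removal_order else 0
    safety_risk = 3 if facts.violent_conviction else 0
    # The special-vulnerabilities module lowers the combined score.
    score = flight_risk + safety_risk - (1 if facts.is_caregiver else 0)
    return "detain" if score >= 3 else "release"

# An officer sees only the recommendation, not the factor-level reasoning.
print(recommend_custody(CaseFacts(False, True, False, True)))  # -> release
```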
By expanding the capacity and speed of immigration enforcement, AI-based tools such as the Hurricane Score and the RCA may effectively overcome the logistical barriers to large-scale deportations identified by scholars. However, critics warn that such automation intensifies Fifth Amendment procedural due-process risks, specifically inadequate notice of the government’s grounds, constrained opportunities to be heard, and erosion of neutral, reasoned adjudication. [24]
ADM scholars argue that AI-driven enforcement tools circumvent due-process protections by restricting transparency and reviewability. [25] Lou Blouin describes how proprietary algorithms operate as a “black box”: inputs and outputs are visible, but the rationale connecting them is not. [26] While code developed by a private company is protected by trade-secret privilege, cities have similarly concealed even in-house algorithms, shielding the logic behind ADMs from the individuals they affect. [27] As a result, individuals cannot access or understand how a government agency programmed an ADM to make a decision, and without that rationale it becomes increasingly difficult to challenge the data that determines their fate. Estafania McCarroll, a legal scholar at Georgetown University Law Center, writes that “individuals affected by ADM systems cannot question a decision made by the system or hold the system accountable.” [28] Because courts remain reluctant to reveal trade secrets, defendants are left at a disadvantage relative to prosecutors. [29] Without clear notice of the reasons for adverse actions or access to the underlying evidence, individuals cannot meaningfully rebut determinations at a hearing, undermining the core procedural guarantees of adequate notice and a meaningful opportunity to be heard. These transparency and reviewability concerns are exemplified by ICE’s RCA system.
Though the RCA was initially designed simply to evaluate whether detainees should be released or held, the Trump administration altered the RCA’s algorithm to automatically recommend detention in all cases. [30] McCarroll argues that the removal of discretionary release essentially transformed what was once merely an administrative tool into a digital mandate for detention. [31] At the same time, the RCA’s initial custody determination is pivotal for those not subject to mandatory detention, as many detainees lack counsel, do not understand their rights, or fail to request immigration-judge review. [32] Critically, neither detainees nor their attorneys, and often not even the Immigration Judge, receive the RCA’s rationale or the inputs that produced the risk label. [33] They therefore cannot know and rebut the actual grounds for detention; it is this secrecy that defeats adequate notice and a meaningful opportunity to be heard. By automating risk scoring, the RCA replaces individualized discretionary custody assessments with algorithm-driven classifications, transforming case management into a mass-processing system. [34] As a result, the RCA appears to strain Fifth Amendment guarantees such as adequate notice of the government’s grounds and a meaningful opportunity to be heard.
McCarroll further notes that even when algorithms are revealed, their outcomes are often impossible to interpret. [35] She writes that ADM systems “produce outcomes based on Big Data, thousands of variables, and under countless combinations of different conditions that the human brain cannot comprehend.” [36] Transparency is thus obscured not only by the inability to access algorithmic code, but also by the inherent incomprehensibility of algorithmic reasoning itself, frustrating the procedural due-process requirement that government decisions be reasoned, reviewable, and grounded in the factual record.
Beyond opacity and incomprehensibility, AI-driven enforcement raises a second due-process concern: neutrality. ADM systems rely on historical policing and immigration data that is deeply intertwined with racial bias. [37] Because their datasets reflect decades of discriminatory enforcement patterns, their integration into automated systems amplifies existing inequities. [38] Though algorithmic decision-making is often acclaimed as objective, the Mijente report asserts that “an AI tool cannot be neutral because humans insert their own bias when they code the algorithm or because the data itself contains human bias.” [39] As a result, ADM systems reproduce the same racialized assumptions embedded in the data they process, implicating the requirement of impartial adjudication and neutral review.
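The amplification mechanism can be demonstrated with a toy experiment: train a classifier on synthetic labels that over-flag one group regardless of its actual behavior, and the model assigns that group a higher risk score even when behavior is held constant. The data, groups, and rates below are entirely synthetic.

```python
# Toy demonstration of bias amplification: biased historical labels teach a
# model to score one group as riskier at identical behavior. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)     # proxy attribute (0 or 1)
behavior = rng.integers(0, 2, n)  # true behavior, identical across groups

# Skewed past enforcement: group 1 is also flagged at random half the time.
labels = ((behavior == 1) | ((group == 1) & (rng.random(n) < 0.5))).astype(int)

model = LogisticRegression().fit(np.column_stack([group, behavior]), labels)

# Same behavior, different group: the model assigns different risk scores.
print(model.predict_proba([[0, 0]])[0, 1])  # group 0, compliant behavior
print(model.predict_proba([[1, 0]])[0, 1])  # group 1, compliant behavior
```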
Furthermore, immigration officers frequently defer to algorithmic outputs, diminishing the role of human judgment. [40] McCarroll asserts that “the illusion that systems are ‘objective and fair’ poses the risk that officers and judges would rely heavily on them with enormous consequences for immigrants’ lives.” [41] As a result, scholars fear that independent, impartial judgment is displaced, undermining the constitutional requirement that decisions be made by a neutral decision-maker and subject to meaningful review. [42]
Ultimately, the federal government’s adoption of AI technologies and Automated Decision-Making systems reveals both an administrative innovation and a constitutional challenge. While tools like the ATA and the RCA have enabled immigration agencies to surmount the logistical constraints that once made large-scale deportation efforts impracticable, critics warn that they introduce new forms of opacity and reinforce bias in ways that constrain the principles of due process. As courts confront the legality of automated enforcement, they will have to consider whether Fifth Amendment guarantees of due process can be preserved within an algorithmic framework, or whether algorithmic efficiency inevitably comes at an unconstitutional price.
Sources
Hetrick, Christian. 2024. “Could Trump Actually Enforce ‘Mass Deportations’ of Migrants?” USC Price. October 17, 2024. https://priceschool.usc.edu/news/trump-mass-deportation-immigrants-deport-migrants-border-wall/.
Ibid.
Ibid.
Ibid.
Ibid.
Debusmann, Bernd, Jr., and Mike Wendling. 2024. “How Would Trump’s Promise of Mass Deportations of Migrants Work?” BBC News, November 18, 2024. https://www.bbc.com/news/articles/ce9z0lm48ngo.
Meissner, Doris, Deborah W. Meyers, Demetrios G. Papademetriou, and Michael Fix. 2006. Immigration and America’s Future: A New Chapter. Report of the Independent Task Force on Immigration and America’s Future, Spencer Abraham and Lee H. Hamilton, co-chairs. Washington, DC: Migration Policy Institute, September 2006.
Tian, George Yijun, Tim McFarland, and Sanzhuan Guo. 2025. “Automated Decision Making and Deportation: Legal Concerns and Regulation.” Griffith Law Review, March, 1–28. doi:10.1080/10383441.2025.2477946.
McCarroll, Estafania. n.d. “Weapons of Mass Deportation: Big Data and Automated Decision-Making Systems in Immigration Law.” Georgetown Immigration Law Journal 34: 705.
Yamataya v. Fisher, 189 U.S. 86 (1903).
West Education & Legal Publishing. 2024. Due Process in Immigration Proceedings (E-1 through E-XIV), February 1, 2024. https://cdn.ca9.uscourts.gov/datastore/uploads/immigration/immig_west/E.pdf.
Tian, George Yijun. “Automated Decision Making and Deportation.”
McCarroll, Estafania. “Weapons of Mass Deportation.”
Tian, George Yijun. “Automated Decision Making and Deportation.”
Ibid.
Mao, Julie, et al. 2024. “Automating Deportation: The Case for Abolishing ICE’s Automated Deportation Machine.” Mijente Report, June 2024. https://mijente.net/wp-content/uploads/2024/06/Automating-Deportation.pdf.
Tian, George Yijun. “Automated Decision Making and Deportation.”
Ibid.
Ibid.
“United States Immigration and Customs Enforcement – AI Use Cases.” 2025. U.S. Department of Homeland Security. July 2, 2025. https://www.dhs.gov/ai/use-case-inventory/ice.
Ibid.
Koulish, Robert, and Kate Evans. 2021. “Punishing with Impunity: The Legacy of Risk Classification Assessment in Immigration Detention.” Georgetown Immigration Law Journal 36 (1): 1–71.
Ibid.
Ibid.
Ibid.
Blouin, Lou. 2023. “AI’s Mysterious ‘Black Box’ Problem, Explained.” University of Michigan-Dearborn News, March 6, 2023. https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained.
Wexler, Rebecca. 2018. “Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System.” Stanford Law Review 70: 1343, 1397.
McCarroll, Estafania. “Weapons of Mass Deportation.”
Ibid.
Ibid.
Ibid.
Ibid.
Ibid.
Ibid.
Ibid.
Ibid.
Mao, Julie, et al. “Automating Deportation.”
Ibid.
Ibid.
McCarroll, Estafania. “Weapons of Mass Deportation.”
Ibid.
Ibid.