Algorithms Don’t Grant Asylum
Article by: Marcus Castillo / Graphic by: Kailyn Mai
In recent years, immigration and artificial intelligence have become two of the most contentious and widely discussed issues in the United States, but the intersection between them receives far less attention. Immigration authorities have increasingly explored automated decision-making systems to assist in processing asylum applications. These systems are often powered by artificial intelligence and promise efficiency and consistency in a procedure that overwhelms applicant and adjudicator alike. Yet while these technologies may well expedite administrative tasks, the stakes of an asylum determination are fundamentally human. This legal environment demands something no algorithm can provide: individualized, nuanced evaluation.
Under U.S. law, asylum is granted to individuals who demonstrate a “well-founded fear of persecution” based on race, religion, nationality, membership in a particular social group, or political opinion (Immigration and Nationality Act (INA) § 208). The statute requires adjudicators to carefully weigh personal testimony and corroborating evidence and to assess credibility. Courts have consistently affirmed that these assessments are inherently fact-specific and must account for each applicant's unique circumstances.
At the international level, the 1951 Refugee Convention and its 1967 Protocol, whose standards U.S. law incorporates through the INA, require that states provide a fair and individualized procedure for evaluating such claims. These legal instruments exist to protect against arbitrary or blanket denials of asylum and to ensure that decisions rest on substance and evidence rather than mechanical calculation.
Furthermore, algorithms are poorly suited to this kind of legal work. These systems are typically trained on historical data, learning patterns from previous decisions in order to predict outcomes. On paper, that approach might standardize certain procedural steps; in asylum cases, however, reliance on such data raises a multitude of concerns:
The first of these concerns is the dehumanization of applicants. Asylum determinations involve assessing personal experiences and accounts of trauma, persecution, and fear. Artificial intelligence cannot experience empathy or grasp the context of these intangible emotions, which is precisely why human judgment is central to interpreting credibility and evaluating humanitarian factors.
The second concern is the reinforcement of bias. AI decision-making systems are trained on data that reflects past decisions. If prior determinations embed bias, whether conscious or systemic, the algorithm will reinforce and perpetuate it (a simplified sketch following this list illustrates the effect). This runs contrary to the emphasis on individualization and justice in the INA and the Refugee Convention discussed above.
The final concern is due process. The Fifth Amendment to the U.S. Constitution guarantees procedural due process to all individuals in removal proceedings, and courts have repeatedly held that asylum applicants are entitled to a meaningful opportunity to present their evidence and have it considered. Automated systems, however, may lack transparency and accountability, making it difficult, if not impossible, for applicants to challenge an error or understand the basis of a denial.
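To make the bias concern concrete, here is a deliberately minimal sketch. It uses entirely synthetic data and an off-the-shelf classifier (scikit-learn's LogisticRegression, chosen purely for illustration; it bears no resemblance to any system actually used by immigration authorities). The sketch fabricates "historical" decisions in which one group was denied more often despite identical claims, then shows that a model trained on those decisions reproduces the disparity:

```python
# Toy illustration only: fabricated "historical" asylum decisions in which
# group B applicants were denied more often for claims of equal strength.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features: group membership (0 = A, 1 = B) and a crude
# synthetic "claim strength" score.
group = rng.integers(0, 2, size=n)
claim_strength = rng.normal(0.0, 1.0, size=n)

# Biased historical outcomes: claim strength helps, but belonging to
# group B systematically lowers the chance of a grant.
logit = 1.5 * claim_strength - 1.2 * group
granted = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Train a standard classifier on the biased record.
X = np.column_stack([group, claim_strength])
model = LogisticRegression().fit(X, granted)

# Two new applicants with the *same* claim strength, differing only by group.
test = np.array([[0, 0.5], [1, 0.5]])
probs = model.predict_proba(test)[:, 1]
print(f"P(grant | group A) = {probs[0]:.2f}")
print(f"P(grant | group B) = {probs[1]:.2f}")
# The gap between these probabilities is the historical bias, now learned
# and perpetuated by the model.
```

Running the sketch, the model assigns materially different grant probabilities to two applicants whose claims are identical except for group membership: the historical bias has become the prediction.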
The principles underlying this argument have been affirmed in U.S. federal adjudication. In Matter of Mogharrabi, 19 I&N Dec. 439 (BIA 1987), the Board of Immigration Appeals highlighted the necessity of individualized credibility determinations. Similarly, in Negusie v. Holder, 555 U.S. 511 (2009), the Supreme Court emphasized that statutory safeguards cannot be circumvented by mechanical processes.
Although no court has yet ruled directly on AI in asylum adjudications, this issue will only grow more pressing as artificial intelligence evolves and frustration with the immigration system intensifies. Efficiency and innovation matter, but when it comes to asylum, the law is clear: each case demands individualized, careful consideration. Algorithms may assist with administrative tasks, but they must never fully replace the human lens. In our country's continued pursuit of a fair and just immigration system, we must remind ourselves, however enticing or revolutionary technological achievements may be, of the importance and indispensability of human judgment.