ATA Letter on Wisconsin SB 357 – Take Action Now

07/22/2025 3:18 PM | Meghan McCallum (Administrator)

Professional human interpreters belong in Wisconsin courts, not artificial intelligence. Read ATA’s letter to lawmakers and take action.

In response to Wisconsin Senate Bill 357, the American Translators Association (ATA) and other organizations have written to lawmakers warning that substituting artificial intelligence (AI) for qualified professional human interpreters in Wisconsin’s courts poses a serious threat to the fair and efficient administration of justice, which is fundamental to our judicial system. Because of the inherent deficiencies of AI-powered translation and interpreting platforms, the bill would lead to a host of unintended negative consequences for Wisconsin’s courts. To learn more about the risks of replacing expert interpreters with AI, read ATA’s article Think AI Should Replace Interpreters? Think Again.

Email and call Wisconsin legislators now.


Read ATA’s letter: 

https://www.atanet.org/advocacy-outreach/ata-letter-on-wisconsin-sb-357-take-action-now/


RE: Opposition to SB 357

Dear Lawmaker:

On behalf of the undersigned organizations, we respectfully urge the withdrawal of SB 357, which proposes the use of artificial intelligence (AI) tools in place of qualified professional human interpreters in Wisconsin’s courts. If enacted, this bill could result in a host of unintended negative consequences for Wisconsin’s courts.

SB 357 poses a serious threat to the fair and efficient administration of justice, which is fundamental to our judicial system, due to the inherent deficiencies of AI-powered translation and interpreting platforms. While we recognize the potential of technology to assist human interpreters in specific contexts, we are extremely concerned that a broad interpretation and application of SB 357 would undermine the quality, accuracy, and accountability that court interpreting services demand. In addition, the bill jeopardizes the rights of individuals with limited English proficiency (LEP) and compromises Wisconsin’s ability to uphold justice.

Firstly, generative AI models, including large language models (LLMs), do not think, “speak,” use human language, or understand cultural nuances and differences. They transcribe and generate text according to statistical patterns, producing the algorithm’s best estimate from information drawn in large part from untrusted and unvalidated online sources. As has been widely documented, such models frequently generate false statements, known as “hallucinations.” Numerous attorneys around the country have been sanctioned in recent years for submitting AI-written briefs containing defective citations, invented precedents, and other misstatements, and those cases involved only English. In interpreting, inaccurate or misleading output violates defendants’ rights, distorts evidence, and endangers the integrity of judicial proceedings. Furthermore, many AI tools are programmed to produce output that aligns with users’ prompts. The result is AI-generated responses that compromise informed decision-making, spread misinformation, misinform court users about their rights, or inadvertently advise parties to violate court orders or break the law. A comprehensive review by the World Health Organization (WHO) determined that a leading AI interpreting tool was not fit even for informational public-facing meetings where the organization’s image or reputation is at stake, much less for important matters of justice.

Secondly, AI tools may be capable of producing plausible translations under limited and controlled circumstances in a handful of languages with large training datasets, such as English. The same cannot be said for languages for which there is relatively little reliable bilingual data online. For the purposes of language access in U.S. courts, these are often called “languages of lesser diffusion” (LLDs), even though they may still represent millions of speakers worldwide and tens of thousands of speakers in Wisconsin, including Hmong, Burmese, Karen, and others. AI performance in these languages is deficient, significantly increasing the risk of errors. The use of AI in matters involving LLDs disproportionately affects everyone who relies on interpreting services for equal access to justice, including the courts themselves.

Finally, judicial interpreters are highly trained professionals who adhere to codes of ethics, take an oath, place their name and credentials on the record, and are accountable for their work. The story of José María Rodríguez Uriarte, a father mistakenly blamed for the accidental death of his son in Dane County because of improper interpreting, is just one example of the consequences of not using qualified judicial interpreters. The National Center for State Courts (NCSC) is unequivocal in its guidance on using AI to replace human interpreters: “AI should not be used to replace human interpreters for real-time spoken interpretation in court proceedings due to the high risks associated with context, nuance, and potential errors. Human oversight remains critical.” Replacing professional judicial interpreters with AI will not solve the most pressing challenge to meaningful language access in Wisconsin’s courts, namely that the allocated budget is insufficient to cover current needs, which makes qualified interpreters difficult to find for certain languages. AI solutions, in addition to their many flaws, consist of software and hardware that are extremely expensive to acquire, operate, maintain, and update. Machines also cannot be held accountable when they inevitably fail to perform. AI is frequently wrong without ever notifying the user that it lacks needed information, an error pattern that would lead to mistrials and overturned cases. Who would bear responsibility for such errors?

Wisconsin’s commitment to equal access to justice requires a robust standard for language services. SB 357, as currently drafted, undermines the prospect of fair court proceedings, may increase costs to the courts, and places the state’s interests at risk. We urge you to oppose SB 357. Established standards, including ISO 18841:2018 and ASTM F2089-24, along with ethical guidance from the SAFE-AI task force, provide useful guardrails for any future legislation. We are able to offer our assistance in drafting any proposed bills that address the use of AI tools in court with a view to mitigating risks to the interests of justice for all.

Thank you for your consideration. We stand ready to assist in ensuring that all those who use Wisconsin’s courts receive meaningful, accurate, and professional language access.