AI is causing all kinds of problems in the legal sector
The American Bar Association believes the use of artificial intelligence in the legal sector is eroding the key procedures, documentary records and evidence that the court system relies on to establish ground-level truth.
In a report released this month, the ABA, which sets ethical standards for the legal profession and represents roughly 400,000 attorneys in the United States, details how AI has permeated the legal system. The report says lawyers increasingly use it to save time, conduct research, and summarize and write key court filings, while judges use it for many of the same functions.
But as artificial intelligence – particularly generative AI tools – has been integrated throughout the legal system, it’s raising major questions for a profession that depends on accuracy and truthful representation in court.
“Faced with deepfakes offered as evidence in court or claims that legitimate evidence is a deepfake, judges are grappling with questions surrounding the authenticity, validity, and reliability of AI-generated evidence,” the ABA stated.
One of the most pressing challenges facing the court system is figuring out how to handle the emergence of lifelike deepfake media. Fake imagery, audio and video can convincingly imitate the kinds of evidence courts have relied on for decades to determine what actually happened in a case.
With voice cloning and deepfake tools, bad actors can also create convincing media depicting judges, lawyers, witnesses or others involved in court cases in a false light, saying or doing things they never did. The ABA report cites reporting over the past year from agencies like the FBI, the Cybersecurity and Infrastructure Security Agency and organizations like the World Economic Forum warning that deepfakes pose a significant, long-term national security threat.
“The ease with which content can now be created and shared, as well as the use of algorithms that are optimized for engagement, means misinformation can spread widely and quickly,” the ABA report stated.
The findings are part of a broader report that outlines both the risks and benefits of incorporating AI technologies into the legal profession. And it comes as courts across the world have reported problems with the technology, including AI-generated legal briefs riddled with hallucinated case law and other errors, as well as questions around the ethics of presenting deepfaked testimony from dead victims in criminal proceedings.
But the ABA report also includes numerous positive sentiments about the technology from members and lawyers, citing members who have “consistently emphasized AI’s role in automating core legal functions” such as drafting documents, conducting legal research and reviewing high volumes of materials, documentation or evidence.
“Many highlighted generative AI—large language models in particular—as a game-changer for accelerating routine tasks like contract analysis and litigation preparation, as well as helping firms produce first drafts, summarize large datasets, and customize communications at scale,” the report stated.
The increasing use of AI in the legal profession comes as some members of the community have reported higher workloads that have led to increased stress, burnout and attrition. A report last week from the Association of Corporate Counsel called work-related stress and long hours “a pervasive crisis” for in-house legal professionals, with legal leadership and those operating in high-demand sectors facing the highest burdens.
Officials at the highest levels of the judiciary have sounded the alarm that the integrity of the courts is under constant threat. In his year-end report last year, Supreme Court Chief Justice John Roberts warned that bad actors, including foreign governments, are seeking to undermine trust in the legal system in the digital space, including through the kind of hacking and bot-driven disinformation campaigns that experts say have been significantly augmented by the scale, speed and automation of large language models.
The goal for many of these parties is to “compromise the public’s confidence in our processes and outcomes” or otherwise damage the public’s perception of the judiciary. Roberts mused that the judicial branch “is particularly ill-suited” to fight these campaigns because judges mostly speak through their legal opinions and don’t generally call press conferences or issue rebuttals the way other public officials do.
Meanwhile, an AI task force at the ABA composed of “tech-savvy judges” is working to develop public guidance on how the profession should use generative AI and how to address “the intractable problem of deepfakes as evidence in court.” The body is also examining how AI affects questions of legal risk and liability.