Ethics and Accountability in Legal AI: Lessons from the Mata v. Avianca Case


Background

The Mata v. Avianca case, arising from a legal dispute in New York, not only sheds light on the complexities of AI implementation in the legal profession but also underscores the critical importance of ethics and accountability in AI-driven decision-making.[1] The case revolves around allegations of misconduct by the plaintiff's attorneys, who used an AI program for legal research. The AI tool, in an alarming turn of events, generated fictitious cases and fabricated legal citations, including nonexistent case law.[1] This error led to severe repercussions, with a New York federal judge sanctioning the attorneys for their negligent conduct.[2]

An attorney should understand the risks and benefits of the technology used in connection with providing legal services. How these obligations apply depends on a host of factors, including the client, the matter, the practice area, the firm size, and the tools themselves, which range from free and readily available to custom-built, proprietary formats.[3]

At the heart of this case lies the ethical responsibility of legal professionals when employing AI technologies in their practice. While AI tools hold immense potential to streamline legal research, enhance productivity, and improve decision-making, they also introduce ethical considerations that cannot be overlooked. The misuse or misinterpretation of AI-generated content can have far-reaching consequences, undermining the integrity of the legal system and jeopardizing the rights of litigants.


Disclosure and Accountability

An attorney should consider disclosing to their client that they intend to use generative AI in the representation, including how the technology will be used and the benefits and risks of such use.[4]

In Pennsylvania, a federal court issued a standing order requiring each counsel (or a party representing himself or herself) to disclose whether generative artificial intelligence ("AI") was used in the preparation of any complaint, answer, motion, brief, or other paper filed with the court, including in correspondence with the court.[5] The court directed that counsel must, in a clear and plain factual statement, disclose that generative AI has been used in any way in the preparation of the filing or correspondence, and certify that each and every citation to the law or the record in the filing has been verified as authentic and accurate. In Ohio, a federal court prohibited attorneys from using artificial intelligence in the preparation of any filing to be submitted to the court.[6]

The Mata case also highlights the issue of accountability in AI utilization. When confronted with the discovery of the fabricated content, the attorneys responded with delay and evasion, exacerbating the gravity of their misconduct. The judge's decision to sanction the attorneys underscores the principle that legal professionals must be held accountable for their actions, particularly where artificial intelligence is involved.[1] Just as a lawyer must make reasonable efforts to ensure that a law firm has policies to reasonably assure that the conduct of a nonlawyer assistant is compatible with the lawyer's own professional obligations, a lawyer must do the same for generative AI. Lawyers who rely on generative AI for research, drafting, communication, and client intake risk many of the same perils as those who have relied on inexperienced or overconfident nonlawyer assistants.[7]

Confidentiality and Privacy of Clients

Lawyers may use generative artificial intelligence ("AI") in the practice of law, but they must protect the confidentiality of client information, provide accurate and competent services, avoid improper billing practices, and comply with applicable restrictions on lawyer advertising.[8] Lawyers must ensure that the confidentiality of client information is protected when using generative AI by researching the program's policies on data retention, data sharing, and self-learning. Lawyers remain responsible for their work product and professional judgment and must develop policies and practices to verify that their use of generative AI is consistent with their ethical obligations.[7]

Accuracy and Reliability

One of the primary ethical obligations of legal practitioners is to ensure the accuracy and reliability of the information they present to the court. In the Mata v. Avianca case, the attorneys failed in their duty to verify the authenticity of the AI-generated content before submitting it to the court.[1] This lapse in judgment not only compromised the integrity of the legal proceedings but also eroded trust in the legal profession as a whole. In Hawaii, the federal district court ordered that if any counsel or pro se party submits to the court any filing or submission generated by an unverified source, that attorney or pro se party must concurrently submit a declaration captioned "Reliance on Unverified Source" that: (1) advises the court that counsel or the pro se party has relied on one or more unverified sources; and (2) verifies that counsel or the pro se party has confirmed that any such material is not fictitious.[9] It is important that attorneys verify the information in documents to be submitted to court to ensure it is accurate and reliable.

Conclusion

The Mata v. Avianca case underscores the need for robust ethical guidelines and standards governing the use of AI in the legal profession. Legal organizations and regulatory bodies must develop comprehensive frameworks that outline best practices for AI utilization, including guidelines for verifying AI-generated content, conducting due diligence, and upholding professional standards of conduct.

The case thus serves as a sobering reminder of the ethical challenges inherent in the adoption of AI technologies in the legal profession. While AI holds tremendous potential to enhance legal practice, its deployment must be accompanied by a steadfast commitment to ethics, integrity, and accountability. By upholding these principles, legal professionals can harness the benefits of AI while safeguarding the integrity of the legal system and ensuring justice for all parties involved.


References

  1. ^ a b c d Mata v. Avianca, Inc., F. Supp. 3d, No. 22-cv-1461 (PKC), 2023 WL 4114965, at *2 (S.D.N.Y. June 22, 2023).
  2. ^ Mata v. Avianca, Inc., F. Supp. 3d, No. 22-cv-1461 (PKC), 2023 WL 4114965, at *2 (S.D.N.Y. June 22, 2023).
  3. ^ Practice Guidance for the Use of Generative Artificial Intelligence in the Practice of Law, 2023 WL 11054756, at *1 (Nov. 16, 2023).
  4. ^ Practice Guidance for the Use of Generative Artificial Intelligence in the Practice of Law, 2023 WL 11054756, at *2 (Nov. 16, 2023).
  5. ^ Standing Order Re Artificial Intelligence, U.S. District Court for the Eastern District of Pennsylvania, https://www.paed.uscourts.gov/sites/paed/files/documents/procedures/Standing%20Order%20Re%20Artificial%20Intelligence%206.6.pdf (last visited Apr. 23, 2024).
  6. ^ United States District Court for the Northern District of Ohio, Boyko, Use of Generative AI, Court Order No. 4703 (2024).
  7. ^ a b Florida State Bar Association Committee on Professional Ethics, FL Eth. Op. 2024.
  8. ^ American Bar Association, Center for Professional Responsibility, Model Rules of Professional Conduct (2013), http://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct.html.
  9. ^ General Order 23-1, In re: Use of Unverified Sources, HI R USDCT Order 23-1 (U.S. District Court for the District of Hawaii, effective Nov. 14, 2023).