A time-saving tool can feel like a north star, especially in the fast-paced legal environment. But even as artificial intelligence tools become more common in the practice of law, attorneys must continue to prioritize due diligence rather than lean too heavily on them. Sources need to be cited and verified correctly; otherwise, lawyers risk building an argument on a false foundation. Negligent AI use in legal proceedings exposes law firms to risks they cannot afford to ignore. To safeguard their firms, leadership and attorneys must guide employees on AI use before damage is done to themselves, their firm and the legal profession's reputation.
The Cost of Not Understanding AI Tools
Many large language models (LLMs), such as OpenAI's ChatGPT and Google's Gemini, are still maturing. One known deficit of many LLMs is a tendency to fabricate plausible-sounding but false information, often called hallucination. Attorneys who use these tools for research risk their professional reputations, face legal challenges and expose clients to jeopardy if they fail to understand a tool's limitations and properly vet AI-generated information.
Take, for example, attorney Thomas Neusom, who filed pleadings containing inaccurate and fabricated citations after failing to verify the AI-generated research behind them. When the Grievance Committee asked Neusom about his work, he admitted that he had not properly validated the citations and failed to acknowledge the fabricated information they contained. His improper AI use earned him a one-year suspension from the Florida Bar.
What Law Firms Can Do
Law firms must address the growing use of AI in the legal industry and give employees thorough, actionable guidance on whether and when to use AI tools and how to avoid misuse. To mitigate future mistakes, law firms should consider the following practices:
- Find the Right Program: Not all LLMs are equal. When dealing with sensitive client information, it is crucial to understand a program's security features. Before bringing a new tool into the firm, leadership should thoroughly research its security functions and the potential implications for the firm and its clients.
- Develop Thorough Guidelines: Establish clear guidelines on where and how employees may use AI tools. Be specific about which tasks can and cannot be completed with AI, how prompts should be written and detailed, and how law firm staff should verify information.
- Implement Safeguards: Ensure the first step after using AI is manual verification. Attorneys should view AI as a supplementary tool; they are still the experts, and they should not substitute automated output for their own legal judgment. A critical eye should review and verify every output before a legal process moves further (a minimal sketch of one such safeguard follows this list).
- Secure Insurance Coverage: A robust professional liability insurance policy can help protect firms from the financial fallout of claims arising from improper AI use. An insurer specializing in the legal industry can advise law firm leadership on the coverage needed to protect against AI-related liabilities.
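As a concrete illustration of the verification safeguard above, the short Python sketch below pulls citation-like strings out of an AI-generated draft and turns them into a manual checklist. It is a minimal sketch under stated assumptions, not a verification tool: the regular expression is a rough approximation of one common U.S. case-citation format, the draft_brief.txt file name is hypothetical, and every item it surfaces (and anything it misses) still has to be checked by an attorney against a primary source.

```python
# A rough, illustrative safeguard: extract citation-like strings from an
# AI-generated draft so each one can be verified by hand against a primary
# source. This does NOT validate anything itself; it only builds a checklist.
import re
import sys

# Approximate pattern for one common U.S. case-citation shape:
# "Party v. Party, 123 F.3d 456 (9th Cir. 1999)". Real citations vary
# widely, so treat anything this misses as a reason to read the full draft.
CITATION_RE = re.compile(
    r"[A-Z][\w.'-]*(?:\s+[\w.'-]+)*\s+v\.\s+[A-Z][\w.'-]*(?:\s+[\w.'-]+)*"
    r",\s+\d+\s+[A-Za-z0-9.]+\s+\d+\s+\([^)]*\d{4}\)"
)

def build_checklist(draft_text: str) -> list[str]:
    """Return unique citation-like strings from the draft, in order found."""
    seen: dict[str, None] = {}
    for match in CITATION_RE.finditer(draft_text):
        seen.setdefault(match.group(0))
    return list(seen)

if __name__ == "__main__":
    # "draft_brief.txt" is a hypothetical file name used for illustration.
    path = sys.argv[1] if len(sys.argv) > 1 else "draft_brief.txt"
    with open(path, encoding="utf-8") as f:
        citations = build_checklist(f.read())
    print(f"{len(citations)} citation(s) to verify manually:")
    for i, cite in enumerate(citations, 1):
        print(f"  [{i}] {cite}")
```

Run against a draft, it prints a numbered list the reviewing attorney can tick off against Westlaw, Lexis or court records before anything is filed; the attorney, not the script, remains the verifier.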
As AI continues to find its footing, a proactive approach to governing AI use within the practice of law can help ensure clients and attorneys do not suffer the consequences of AI misuse or error. To learn more about First Indemnity's professional liability insurance offerings, visit https://firstindemnity.net/insurance-products/professional-liability/.