February 10, 2025 - A tech startup wants to sell and promote a new AI-powered product to investors. The company's internal AI team tries to explain the technology to its in-house lawyer, but it's very complicated with all its "neural networks" and "machine learning algorithms." The lawyer is somewhat tech savvy, but still struggles to get a good sense of the AI.
One thing is clear, though. The lawyer's company is in a competitive environment, and the C-Suite is feeling AI FOMO (Fear Of Missing Out). It wants to market the product as the "World's Best AI" to attract investment dollars and to stay ahead of market competitors who are making their own lofty AI claims. Although initially hesitant about the marketing message, the lawyer ultimately defers to the company's tech experts because the lawyer doesn't have the necessary technical background to verify their claims.
As time goes on, the lawyer realizes that the company's product may not be as great as the company claimed. More troubling, the lawyer learns that the U.S. Securities and Exchange Commission ("SEC") has launched an investigation into deceptive trade practices relating to the company's AI product.
This scenario isn't new. There has been a surge in companies making exaggerated (or arguably exaggerated) claims about their AI-driven products. They may be stretching the truth about the capabilities of their technology to attract investors' attention.
This practice is called "AI washing," and it has been receiving increased regulatory scrutiny in the U.S.
In March 2024, the SEC penalized two investment firms around $400,000 for making misleading statements about their use of AI. The SEC has continued to warn companies that they must be truthful to investors when marketing products related to AI. The crackdown on AI washing likely will continue despite the deregulation push of the Trump administration.
For lawyers, AI washing raises not only legal compliance issues, but also potential ethical challenges under the rules of professional conduct. Here is what lawyers need to know to keep their companies in compliance while protecting their reputation and law licenses.
AI washing is making waves
"AI washing" is the new buzzword to describe false or misleading AI hype by companies seeking to attract investors by overstating the capabilities, innovativeness, or intelligence of an AI product or service. The SEC has warned companies about AI washing in no uncertain terms: "Don't do it," said former Chair Gary Gensler at a conference in December 2023, as quoted in The Wall Street Journal. The SEC likened AI washing to "greenwashing," which involves making false or misleading claims to investors about whether a company's products or practices are aligned with environmental principles.
At its core, AI washing is an unfair and deceptive trade practice. This presents obvious issues for all kinds of lawyers, but more nuanced ones for inside counsel, especially those who serve in a legal compliance role.
As one of a company's primary risk assessors, in-house counsel must monitor their company's compliance with SEC regulations. Many of these regulations deal with material disclosures. Because the inner workings of AI are relatively unknown and its capabilities are ever-changing, companies must be extra careful about the language they use to market it.
AI washing implicates in-house counsel's ethical duties
When it comes to AI tech, in-house lawyers need to stay in the loop for a more basic reason: to comply with their ethical obligations.
AI washing implicates a lawyer's ethical duties, and one of the key rules is Rule 1.13 of the Model Rules of Professional Conduct, which addresses the issues that arise when a lawyer's client is an organization.
Under Model Rule 1.13(b), attorneys have an ethical obligation to report information to higher authority and even to the organization's board when warranted by the circumstances. Before referring the matter up the ladder, it may be appropriate in some circumstances for the lawyer to ask the individual with whom the lawyer is dealing to reconsider the individual's proposed action or plan. A good example is when the individual has an innocent misunderstanding of the law and subsequently accepts the lawyer's advice.
What triggers the duty to counsel on a change of course or to report to higher authority under Model Rule 1.13? When the attorney for the organization "knows that an officer, employee or other person associated with the organization is engaged in action, intends to act or refuses to act in a matter related to the representation that is a violation of a legal obligation to the organization, or a violation of law that reasonably might be imputed to the organization, and that is likely to result in substantial injury to the organization …."
In such situations, "[u]nless the lawyer reasonably believes that it is not necessary in the best interest of the organization to do so, the lawyer shall refer the matter to higher authority in the organization, including, if warranted by the circumstances, to the highest authority that can act on behalf of the organization as determined by applicable law." This could include the board or outside directors.
What does all this mean as a practical matter? If an in-house lawyer becomes aware that substantial injury to the company is likely to result from employee action that either violates the employee's legal obligations to the company, or that would be a legal violation that reasonably might be imputed to the company, then the lawyer must act to protect the company.
Unless the lawyer reasonably believes that it is not necessary in the best interest of the organization, the lawyer must "report up" the ladder to the company's "higher authority" and, when warranted by the circumstances, to the "highest authority." Comment 5 to Model Rule 1.13 explains that a company's "highest authority" to whom a matter may be referred "ordinarily will be the board of directors or similar governing body."
Moreover, depending on the jurisdiction, Rule 1.13 also permits, but does not require, in-house lawyers to "report out" client confidential information outside the company under certain circumstances if the company did not act on the lawyer's warnings. The reporting out rules vary among jurisdictions, so in-house lawyers must be aware of which ethics rules apply to them, which is not always clear. The scenario could implicate ethics choice-of-law questions under Rule 8.5, which can be complicated.
It is, of course, highly prudent for any in-house lawyer to consult with independent ethics counsel before reporting out. Lawyers should rarely if ever act as their own ethics counsel, especially when client confidential information is at stake.
In addition to their ethical duties under Rule 1.13, in-house counsel also must comply with corresponding "up-the-ladder" reporting rules adopted by the SEC pursuant to Section 307 of the Sarbanes-Oxley Act. Once again, it is prudent for in-house attorneys to seek independent legal advice to understand their rights and duties, especially if the SEC rules are more permissive than a particular state's version of Rule 1.6 on the duty of confidentiality, which raises a preemption question.
Beyond the requirements of Rule 1.13 and the corresponding SEC rules, AI washing also implicates a lawyer's duty of competence under ABA Model Rule 1.1, which includes the duty of technological competence. Thus, lawyers must have a general understanding of the AI technology that their companies may be hyping to investors.
This does not mean that lawyers need to become computer coders, but they do need to be generally educated on how AI technology works (and doesn't work).
Final thoughts — old issue meets new tech
At the end of the day, AI washing is not a new type of fraud. It simply uses different buzzwords. The same legal and ethical rules apply. It remains the responsibility of lawyers to guard against the risk of AI washing and to understand their ethical duties when doing so. Lawyers who get too caught up in the AI hype may find themselves ethically entangled.