AI’s legal challenges: Not so artificial, not enough intelligence

An op-ed by Sun Sentinel's Dan Sweeney with expert opinion by Tripp Scott's Paul Lopez

A year ago this month, the artificial intelligence (AI) app ChatGPT launched, becoming history’s fastest-growing consumer software application and quickly spawning competing products. Generative AI’s rapid adoption has been fueled by tremendous hype and excitement: searches combining “artificial intelligence” with words like “revolutionary,” “transforming” and, of course, “change” together produce billions of results.

So who’da thunk that AI would also generate a heap of legal and regulatory challenges, especially for businesses? At the root of these problems is that, at this stage, artificial intelligence is often not as “artificial” as it seems, and it may not yet possess enough “intelligence” to steer clear of legal trouble.

First of all, AI-generated “synthetic media” is not completely “artificial.” AI “scrapes” from cyberspace the real creative output (and intellectual property) of real people, which is used to “train” the software to replicate patterns or recombine it into new works or products. This has triggered protests and, yes, lawsuits by real artists, authors, owners of photographic images, software coders and others who allege their work has been misappropriated or misused without proper licenses, permission or attribution.

In response, AI developers argue that the use of individual works to train apps falls under “fair use” and that, in any event, the sheer mass of material involved, drawn from billions of sources, is too great for any owner to show that output is similar to or derives specifically from their work. Still, business owners need to understand that the risk of IP infringement certainly exists when employees use AI products. Statutory penalties for willful infringement can run as high as $150,000 per work infringed.

Then there are the many areas where AI is not yet “intelligent” enough to avoid legal pitfalls. Because AI cannot always separate fact from fiction in source information, and can engage in flights of fancy called “hallucinations,” it can be just plain wrong, creating more exposure for users. AI apps have flubbed basic science facts, fabricated financial results and invented false allegations, including a claim that a radio show host had committed fraud and embezzlement, which resulted in a defamation suit. Even we lawyers are not immune: In New York, two lawyers submitted a legal brief, drafted with AI, that cited fake, AI-generated cases.

AI tools may not be smart enough to avoid invading individuals’ privacy or using their information, including biometric data such as facial recognition, without authorization, especially in contexts such as health care or financial services. Worse, such invasions of privacy can be combined with content that is all too “artificial”: one recent case grabbed headlines when AI-faked nude images of New Jersey high school girls were circulated by “sexting.”

Finally, AI may lack the nuance to avoid bias when assisting with, or even making, decisions in areas like employment, lending and insurance. This concern is so great that the White House emphasized the potential for discrimination in its recent executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Other federal agencies are taking note as well: the Consumer Financial Protection Bureau, Department of Justice, Equal Employment Opportunity Commission and Federal Trade Commission have issued a joint statement on the subject.

Though Florida is somewhat behind the curve on AI legislation and regulation, numerous other states have enacted or proposed measures addressing all of these areas, as well as requiring that consumers be notified when AI uses their information. The Sunshine State will undoubtedly follow suit shortly.

So what’s a business to do? One noted law firm has raised the alarm to the point of producing a 15-page list of complex questions for companies employing AI, a “boil the ocean” exercise impractical for most companies. But a first step is simple: consider an outright ban on non-approved business use of any AI platform by employees in any context. Then ask AI platform providers whether they will vouch for appropriate licensing and other IP protections, include privacy safeguards and offer indemnification against potential liability.

The bottom line: Until AI tools get more original and smarter, businesses should generally apply the concept of caveat utilitor — user beware, and prepare.
