Communications may be defamatory even if readers understand that there is a substantial risk of error

Various commentators have suggested that the output of AI programs cannot be defamatory because reasonable readers would not consider the statements to be “100% reliable” or “gospel truth” or the like. Others have taken the more modest view that reasonable readers would at least recognize that there is a significant risk of error (especially given the AI programs’ disclaimers noting such risk). And our own Orin Kerr has suggested that “no one who tries ChatGPT can think that its output is factually correct,” so I imagine he would rate the perceived risk of error as very high.

But, as I have already noted, defamation law routinely imposes liability for the transmission of claims even when there is a clear indication that the claim may be false.

For example, “when a person repeats a defamatory allegation, even though identifying the source or indicating that it is merely a rumor, it constitutes a republication and has the same effect as the original publication of the defamation.” When speakers identify something as a rumor, they are implicitly saying “this could be false”—but that doesn’t let them off the hook.

Indeed, under the Restatement (Second) of Torts, “a repeater of libel or slander is liable even if he expressly states that he does not believe the statement he repeats to be true.” It follows even more clearly that a mere disclaimer that the statement may be incorrect cannot prevent liability.

Likewise, say that you are presenting both an accusation and the response to the accusation. By doing so, you make clear that the charge may be inaccurate.

However, this does not prevent you from being liable for repeating the charge. (Defamation law has developed some narrow privileges that free people to repeat certain kinds of potentially erroneous content without risk of liability, especially in contexts where such repetition is seen as particularly necessary. But those privileges are necessary precisely because, absent a privilege, presenting both the accusation and the response would still be actionable.)

And this is especially so because of what OpenAI itself states in its GPT-4 technical report:

This tendency [to, among other things, produce untruthful content] can be particularly harmful as models become more and more convincing and credible, leading users to over-rely on them. Counterintuitively, hallucinations can become more dangerous as models become more truthful, as users build trust in the model when it provides truthful information in areas they are somewhat familiar with.

Couple that with OpenAI’s promotion of GPT-4’s success in reliably performing on a variety of benchmarks—bar exams, the SAT, etc.—and it seems likely that reasonable readers will perceive GPT-4 (and especially future, even more advanced versions) as generally quite reliable. They wouldn’t consider it perfectly reliable, but then again rumors aren’t perfectly reliable either, yet people sometimes act on them, and repeating rumors can indeed lead to defamation liability. Reasonable readers can certainly find GPT-4 more reliable than a Ouija board, a monkey on a typewriter, a fortune teller, or various other analogies I’ve heard suggested (more on those here). And someone can be a reasonable reader even if they don’t have much understanding of how these AIs work, or even if they don’t have much experience testing AIs to see how often they make mistakes.

So yes, when an AI program generates and transmits statements claiming that someone has been found guilty of tax fraud, accused of harassment, and so on—and includes completely fabricated quotes, albeit purportedly from real and prominent media outlets—there is a significant legal basis for treating those statements as defamatory, and the AI company as potentially responsible for that defamation.
