From the Lawfare blog (link to my updated paper):
If somebody lies about you, you can usually sue them for defamation. But what if that somebody is ChatGPT? Already in Australia, the mayor of a town outside Melbourne has threatened to sue OpenAI because ChatGPT falsely named him a guilty party in a bribery scandal. Could that happen in America? Does our libel law allow it? What does it even mean for a large language model to act with “malice”? Does the First Amendment put any limits on the ability to hold these models, and the companies that make them, liable for the false statements they make? And what’s the best way to deal with this problem: private lawsuits or government regulation?
On this episode of Arbiters of Truth, our series on the information ecosystem, Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, discussed these questions with First Amendment expert Eugene Volokh, Professor of Law at UCLA and the author of a draft paper entitled “Large Libel Models.”