Difficulty Of Detecting AI Content Poses Legal Challenges
Law360, April 5, 2023
In their article “Difficulty Of Detecting AI Content Poses Legal Challenges,” published on Law360.com, Managing Principal Lisa Pinheiro, Principal Jimmy Royer, and academic affiliate Christopher Bail (Duke University) explore the challenges of detecting and attributing authorship of content generated by artificial intelligence (AI), such as large language models and chatbots like OpenAI’s ChatGPT. The authors raise the question of how to define the legal responsibilities of developers, users, and consumers of AI-generated content in cases involving potential misuse, such as copyright, trademark, privacy, false advertising, and defamation suits.
The authors discuss some of the approaches that have been proposed to combat the potential misuse of AI-generated content, including digital “watermarking” and self-detection. They then address the types of information, including subtle characteristics of AI-generated output, that may prove useful in determining whether a given text was created by AI.
Ultimately, according to the article, detecting AI-generated content will require both analysis by digital forensics experts and the use of machine learning tools to identify suspicious patterns of behavior in large data sets. The authors conclude by noting, “It will require combining technical knowledge on how such tools work with behavioral knowledge on how humans work, as well as a sophisticated understanding of the co-evolution of technology and the humans who use it.”