Dec 5, 2024, 7:38 AM

Misinformation expert admits to using ChatGPT for affidavit filled with falsehoods

Highlights
  • Jeff Hancock, a misinformation expert, confessed to using ChatGPT for citations in a legal affidavit.
  • The affidavit supports a Minnesota law on deepfake technology and election influence that is currently being challenged in court.
  • Hancock's use of AI raises concerns about legal document reliability and the implications of AI-generated errors.
Story

In Minnesota, misinformation expert Jeff Hancock, founder of the Stanford Social Media Lab, admitted to using OpenAI's ChatGPT while preparing an affidavit related to the state's law on deepfake technology and election influence. Hancock used ChatGPT to help organize citations and acknowledged that the AI introduced inaccuracies known as "hallucinations." He maintained that these errors did not undermine the core arguments of his affidavit and said he had reviewed and authored its substantive claims, which aligned with current scholarly research on AI's impact on misinformation.

Hancock's affidavit supports the Minnesota law, which is being challenged in federal court by Christopher Kohls and state Representative Mary Franson. Their attorneys criticized the affidavit as "unreliable" because of the fabricated citations, raising concerns about the integrity of the entire legal filing. The dispute has prompted a broader discussion about the reliability of AI in legal contexts and the risks posed by its propensity to generate misleading information.

The incident is part of a wider debate about the challenges AI poses across sectors, particularly in legal matters, where tools such as ChatGPT are prone to errors that could significantly affect proceedings. Hancock clarified that he never intended to mislead the court or anyone involved in the case, emphasizing that while he used the AI for assistance, he did not rely on its output for the affidavit's primary content.

The rapid advancement of AI tools such as ChatGPT and its successor GPT-4 has drawn the attention of industry leaders and legal experts alike; figures including Elon Musk and Sam Altman have urged caution about the risks these technologies pose in sensitive settings such as legal disputes. The episode involving Hancock and the challenge to Minnesota's deepfake law underscores the importance of thorough vetting and reliance on verified sources when introducing AI assistance into critical professional work.

