ChatGPT and large language models produce bullshit, not hallucinations.


False statements produced by large language models like ChatGPT should be characterized as "bullshit" rather than as hallucinations or lies. This paper by Michael Townsend Hicks, James Humphries, and Joe Slater, published by Springer, contends that AI systems are fundamentally indifferent to the truth of their outputs, aiming instead to produce convincing human-like text. The authors therefore propose that recognizing AI outputs as bullshit, rather than hallucinations, is key to understanding and communicating accurately about these technologies.

Key points:

  • Large language models like ChatGPT are designed to produce convincing text, not convey truthful information

  • ChatGPT's false statements should be called "bullshit" rather than hallucinations or lies

  • The authors distinguish "soft bullshit" (indifference to the truth of what is said) from "hard bullshit" (bullshit produced with an intent to mislead the audience about the speaker's aims)

  • The article builds on Harry Frankfurt's philosophical analysis of the concept of bullshit