
perception). The very same process occurs when its outputs happen to be true.

So much for “hallucinations”. What about Edwards’ preferred term, “confabulation”? Edwards (2023) says:

In human psychology, a “confabulation” occurs when someone’s memory has a gap and the brain convincingly fills in the rest without intending to deceive others. ChatGPT does not work like the human brain, but the term “confabulation” arguably serves as a better metaphor because there’s a creative gap-filling principle at work […].

As Edwards notes, this is imperfect. Once again, the use of a human psychological term risks anthropomorphising the LLMs.

This term also suggests that something exceptional is occurring when the LLM makes a false utterance, i.e., that on these occasions (and only these occasions) it “fills in” a gap in memory with something false. This too is misleading. Even when ChatGPT does give us correct answers, its process is one of predicting the next token. In our view, the term falsely indicates that ChatGPT is, in general, attempting to convey accurate information in its utterances. But there are strong reasons to think that it does not have beliefs that it is intending to share in general; see, for example, Levenstein and Herrmann (forthcoming). Where it does track truth, it does so indirectly and incidentally.

This is why we favour characterising ChatGPT as a bullshit machine. This terminology avoids the implication that perceiving or remembering is going on in the workings of the LLM. We can also describe it as bullshitting whenever it produces outputs. As with the human bullshitter, some of its outputs will likely be true, while others will not. And as with the human bullshitter, we should be wary of relying upon any of these outputs.

Conclusion

Investors, policymakers, and members of the general public make decisions on how to treat these machines and how to react to them based not on a deep technical understanding of how they work, but on the often metaphorical way in which their abilities and function are communicated. Calling their mistakes ‘hallucinations’ isn’t harmless: it lends itself to the confusion that the machines are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived. This, as we’ve argued, is the wrong metaphor. The machines are not trying to communicate something they believe or perceive. Their inaccuracy is not due to misperception or hallucination. As we have pointed out, they are not trying to convey information at all. They are bullshitting.

Calling chatbot inaccuracies ‘hallucinations’ feeds into overblown hype about their abilities among technology cheerleaders, and could lead to unnecessary consternation among the general public. It also suggests solutions to the inaccuracy problems which might not work, and could lead to misguided efforts at AI alignment amongst specialists. It can also lead to the wrong attitude towards the machine when it gets things right: the inaccuracies show that it is bullshitting, even when it’s right. Calling these inaccuracies ‘bullshit’ rather than ‘hallucinations’ isn’t just more accurate (as we’ve argued); it’s good science and technology communication in an area that sorely needs it.

Acknowledgements

Thanks to Neil McDonnell, Bryan Pickel, Fenner Tanswell, and the University of Glasgow’s Large Language Model reading group for helpful discussion and comments.

Open Access

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References

Alkaissi, H., & McFarlane, S. I. (2023, February 19). Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2), e35179. https://doi.org/10.7759/cureus.35179.

Bacin, S. (2021). My duties and the morality of others: Lying, truth and the good example in Fichte’s normative perfectionism. In S. Bacin, & O. Ware (Eds.), Fichte’s system of ethics: A critical guide. Cambridge University Press.

Cassam, Q. (2019). Vices of the mind. Oxford University Press.

Cohen, G. A. (2002). Deeper into bullshit. In S. Buss, & L. Overton (Eds.), The contours of agency: Essays on themes from Harry Frankfurt. MIT Press.

Davis, E., & Aaronson, S. (2023). Testing GPT-4 with Wolfram alpha and code interpreter plub-ins on math and science problems. Arxiv Preprint: arXiv, 2308, 05713v2.

Dennett, D. C. (1983). Intentional systems in cognitive ethology: The Panglossian paradigm defended. Behavioral and Brain Sciences, 6, 343–390.

Dennett, D. C. (1987). The intentional stance. MIT Press.

Whitcomb, D. (2023). Bullshit questions. Analysis, 83(2), 299–304.

Easwaran, K. (2023). Bullshit activities. Analytic Philosophy, 00, 1–23. https://doi.org/10.1111/phib.12328.