that accepts the proliferation of bullshit as innocuous, an indispensable human treasure is squandered” (2002, p. 343). By treating ChatGPT and similar LLMs as being in any way concerned with truth, or by speaking metaphorically as if they make mistakes or suffer “hallucinations” in pursuit of true claims, we risk exactly this acceptance of bullshit and this squandering of meaning. So, irrespective of whether ChatGPT is a hard or a soft bullshitter, it does produce bullshit, and it does matter.
ChatGPT is bullshit
With this distinction in hand, we’re now in a position to consider a worry of the following sort: Is ChatGPT hard bullshitting, soft bullshitting, or neither? We will argue, first, that ChatGPT, and other LLMs, are clearly soft bullshitting. However, the question of whether these chatbots are hard bullshitting is a trickier one, and depends on a number of complex questions concerning whether ChatGPT can be ascribed intentions. We canvass a few ways in which ChatGPT can be understood to have the requisite intentions in Sect. 3.2.
ChatGPT is a soft bullshitter
We are not confident that chatbots can be correctly described as having any intentions at all, and we’ll go into this in more depth in the next section (Sect. 3.2). But we are quite certain that ChatGPT does not intend to convey truths, and so is a soft bullshitter. We can produce an easy argument by cases for this. Either ChatGPT has intentions or it doesn’t. If ChatGPT has no intentions at all, it trivially doesn’t intend to convey truths; it is therefore indifferent to the truth value of its utterances, and so is a soft bullshitter.
What if ChatGPT does have intentions? In Sect. 1, we argued that ChatGPT is not designed to produce true utterances; rather, it is designed to produce text which is indistinguishable from the text produced by humans. It is aimed at being convincing rather than accurate. The basic architecture of these models reveals this: they are designed to come up with a likely continuation of a string of text. It’s reasonable to assume that one way of being a likely continuation of a text is by being true; if humans are roughly more accurate than chance, true sentences will be more likely than false ones. This might make the chatbot more accurate than chance, but it does not give the chatbot any intention to convey truths. This is similar to standard cases of human bullshitters, who don’t care whether their utterances are true; good bullshit often contains some degree of truth, and that’s part of what makes it convincing. A bullshitter can be more accurate than chance while still being indifferent to the truth of their utterances. We conclude that, even if the chatbot can be described as having intentions, it is indifferent to whether its utterances are true. It does not and cannot care about the truth of its output.
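To make the architectural point concrete, the following is a minimal illustrative sketch of next-token prediction. It is not taken from ChatGPT itself (whose model is proprietary and far larger); it assumes the Hugging Face transformers library and uses the small open GPT-2 model as a stand-in. Given a prompt, the model assigns a probability to every candidate next token and ranks continuations by likelihood given its training text; nothing in the computation represents whether a continuation is true.

```python
# Minimal sketch of next-token prediction (illustrative only; GPT-2 as a
# stand-in for ChatGPT's much larger, proprietary model).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The first person to walk on the Moon was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary item, per position

# Probabilities for the next token only: the model ranks candidate continuations
# by how likely they are given its training data, not by whether they are true.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>15}  p={p.item():.3f}")
```

Nothing in this computation consults facts about the world; the ranking reflects only statistical regularities in the training text, which is why a likely continuation can coincide with the truth without the model aiming at it.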
Presumably ChatGPT can’t care about conveying or hiding the truth, since it can’t care about anything. So, just as a matter of conceptual necessity, it meets one of Frankfurt’s criteria for bullshit. However, this only gets us so far: a rock can’t care about anything either, and it would be patently absurd to suggest that this means rocks are bullshitters[1]. Similarly, books can contain bullshit, but they are not themselves bullshitters. Unlike rocks, or even books, ChatGPT itself produces text, and it appears to perform speech acts independently of its users and designers. And while there is considerable disagreement concerning whether ChatGPT has intentions, it’s widely agreed that the sentences it produces are (typically) meaningful (see e.g. Mandelkern and Linzen 2023).
ChatGPT functions not to convey truth or falsehood but rather to convince the reader of the truthiness of its statements (to use Colbert’s apt coinage), and ChatGPT is designed in such a way as to make attempts at bullshit efficacious (in a way that pens, dictionaries, etc., are not). So, it seems that at minimum ChatGPT is a soft bullshitter: if we take it not to have intentions, there isn’t any attempt to mislead the hearer about its attitude towards truth, but it is nonetheless engaged in the business of outputting utterances that look as if they’re truth-apt. We conclude that ChatGPT is a soft bullshitter.
ChatGPT as hard bullshit
But is ChatGPT a hard bullshitter? A critic might object that it is simply inappropriate to think of programs like ChatGPT as hard bullshitters, because (i) they are not agents, or, relatedly, (ii) they do not and cannot intend anything whatsoever.
We think this is too fast. First, whether or not ChatGPT has agency, its creators and users do. And what they produce with it, we will argue, is bullshit. Second, we will argue that, regardless of whether it has agency, it does have a function; this function gives it characteristic goals, and possibly even intentions, which align with our definition of hard bullshit.
Before moving on, we should say what we mean when we ask whether ChatGPT is an agent. For the purposes of
[1] Of course, rocks also can’t express propositions; but then, part of the worry here is whether ChatGPT actually is expressing propositions, or is simply a means through which agents express propositions. A further worry is that we shouldn’t even see ChatGPT as expressing propositions: perhaps there are no communicative intentions, and so we should see the outputs as meaningless. Even accepting this, we can still meaningfully talk about them as expressing propositions. This proposal, fictionalism about chatbots, has recently been discussed by Mallory (2023).