Edwards, B. (2023). Why ChatGPT and Bing Chat are so good at making things up. Ars Technica. https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/. Accessed 19 April 2024.
Frankfurt, H. (2002). Reply to Cohen. In S. Buss, & L. Overton (Eds.), The contours of agency: Essays on themes from Harry Frankfurt. MIT Press.
Frankfurt, H. (2005). On bullshit. Princeton University Press.
Knight, W. (2023). Some glimpse AGI in ChatGPT. Others call it a mirage. Wired, August 18, 2023. Accessed via https://www.wired.com/story/chatgpt-agi-intelligence/.
Levinstein, B. A., & Herrmann, D. A. (forthcoming). Still no lie detector for language models: Probing empirical and conceptual roadblocks. Philosophical Studies, 1–27.
Levy, N. (2023). Philosophy, bullshit, and peer review. Cambridge University Press.
Lightman, H., et al. (2023). Let’s verify step by step. arXiv preprint arXiv:2305.20050.
Lysandrou (2023). Comparative analysis of drug-GPT and ChatGPT LLMs for healthcare insights: Evaluating accuracy and relevance in patient and HCP contexts. arXiv preprint arXiv:2307.16850v1.
Macpherson, F. (2013). The philosophy and psychology of hallucination: An introduction. In F. Macpherson, & D. Platchias (Eds.), Hallucination. MIT Press.
Mahon, J. E. (2015). The definition of lying and deception. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2016 ed.). https://plato.stanford.edu/archives/win2016/entries/lying-definition/.
Mallory, F. (2023). Fictionalism about chatbots. Ergo, 10(38), 1082–1100.
Mandelkern, M., & Linzen, T. (2023). Do language models’ words refer? arXiv preprint arXiv:2308.05576.
OpenAI (2023). GPT-4 technical report. arXiv preprint arXiv:2303.08774v3.
Proops, I., & Sorensen, R. (2023). Destigmatizing the exegetical attribution of lies: The case of Kant. Pacific Philosophical Quarterly. https://doi.org/10.1111/papq.12442.
Sarkar, A. (2023). ChatGPT 5 is on track to attain artificial general intelligence. The Statesman, April 12, 2023. Accessed via https://www.thestatesman.com/supplements/science_supplements/chatgpt-5-is-on-track-to-attain-artificial-general-intelligence-1503171366.html.
Shah, C., & Bender, E. M. (2022). Situating search. In CHIIR ’22: Proceedings of the 2022 Conference on Human Information Interaction and Retrieval (pp. 221–232). https://doi.org/10.1145/3498366.3505816.
Weise, K., & Metz, C. (2023). When AI chatbots hallucinate. New York Times, May 1, 2023. Accessed via https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html.
Weiser, B. (2023). Here’s what happens when your lawyer uses ChatGPT. New York Times, May 27, 2023. Accessed via https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html.
Zhang (2023). How language model hallucinations can snowball. arXiv preprint arXiv:2305.13534v1.
Zhu, T., et al. (2023). Large language models for information retrieval: A survey. arXiv preprint arXiv:2308.17107v2.