Abstract: The dazzling virtuosity of large language models (LLMs) has stirred the public imagination. Generative pre-trained transformer (GPT) models and similar LLMs have demonstrated an impressive array of capabilities, ranging from generating computer code and images to solving complex mathematical problems. However, even as users marvel at these capabilities, a question that often crops up is whether such models "know" or "understand" what they are saying, or whether – as argued by Bender and Koller (2020) – they are merely parroting text that they encountered on the internet during their extensive training. These questions are not only important for the philosophy of knowledge but are also likely to be crucial in assessing the eventual economic impact of LLMs.
