Head-to-Tail: How Knowledgeable are Large Language Models (LLMs)? A.K.A. Will LLMs Replace Knowledge Graphs?
Since the recent surge of Large Language Models (LLMs), there have been
ongoing discussions about how to reduce hallucinations in LLM
responses, how to increase the factuality of LLMs, and whether Knowledge Graphs
(KGs), which store world knowledge in symbolic form, will be replaced
by LLMs. In this paper, we approach these questions from a new angle:
How knowledgeable are LLMs?
To answer this question, we constructed Head-to-Tail, a benchmark that
consists of 18K question-answer (QA) pairs about head, torso, and tail
facts, categorized by entity popularity. We designed an automated evaluation method and a
set of metrics that closely approximate the knowledge an LLM confidently
internalizes. Through a comprehensive evaluation of 14 publicly available LLMs,
we show that existing LLMs are still far from perfect in their grasp of
factual knowledge, especially facts about torso-to-tail entities.