What if your LLM is… a graph?
A few days ago, Petar Veličković from Google DeepMind gave one of the most interesting and thought-provoking talks I've seen in a while, "Large Language Models as Graph Neural Networks". Once you start seeing an LLM as a graph neural network, many structural oddities suddenly fall into place.
For instance, OpenAI currently recommends putting the instructions at the top of a long prompt. Why is that so? Because of the geometry of attention graphs, LLMs are counter-intuitively biased in favor of the first tokens: they travel continuously through each generation step, are repeated internally many times, and end up "over-squashing" the later ones. Models then rely on internal transforms like softmax to moderate this bias and better weight the distribution, but this is a late patch that cannot fix these attention deficiencies, even less so for long contexts.
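To make the geometry concrete, here is a minimal sketch (a toy, randomly initialized single attention head, not any particular LLM or OpenAI's setup) of why causal attention structurally favors early tokens: token 0 is a possible target for every later query, so its weighted in-degree in the attention graph grows with sequence length, regardless of content.

```python
# Toy causal attention head: early tokens accumulate more incoming attention
# mass simply because more positions are allowed to attend to them.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 16, 32

queries = rng.normal(size=(seq_len, d))
keys = rng.normal(size=(seq_len, d))

scores = queries @ keys.T / np.sqrt(d)
future = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
scores[future] = -np.inf                 # each token only sees itself and the past

# softmax over keys: each row sums to 1
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)

# Column j's sum is the total attention mass flowing into token j across all
# positions: its weighted in-degree in the attention graph.
in_degree = attn.sum(axis=0)
print(np.round(in_degree, 2))
# Token 0 can be attended to by all 16 positions, token 15 only by itself,
# so the early columns dominate even with random weights.
```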
The most interesting aspect of the talk from an applied perspective: graph/geometric representations directly affect accuracy and robustness. As generated sequences grow and carry long chains of complex reasoning steps, you cannot build a solid expert system when the attention graphs have single points of failure. Or at least, not without surfacing this information in the first place and providing more detailed accuracy metrics.
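As a rough illustration of the kind of metric this suggests (the attention matrix and the bottleneck_score helper below are hypothetical, purely for illustration), one could flag a token that carries a disproportionate share of incoming attention, i.e. a candidate single point of failure in the graph.

```python
# Hypothetical diagnostic: find the token whose weighted in-degree dominates
# the attention graph, and report its share of the total attention mass.
import numpy as np

def bottleneck_score(attn: np.ndarray) -> tuple[int, float]:
    """attn: (seq_len, seq_len) row-stochastic attention matrix.
    Returns the most attended token and its share of all incoming mass."""
    in_mass = attn.sum(axis=0)
    top = int(in_mass.argmax())
    return top, float(in_mass[top] / in_mass.sum())

# Toy example: a graph where almost everything routes through token 0.
seq_len = 8
attn = np.full((seq_len, seq_len), 0.01)
attn[:, 0] = 1.0
attn /= attn.sum(axis=-1, keepdims=True)       # re-normalise rows

token, share = bottleneck_score(attn)
print(f"token {token} carries {share:.0%} of incoming attention")
# A high share suggests the computation hinges on one node; if that token is
# noisy or squashed, downstream reasoning steps inherit the error.
```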
I do believe LLM explainability research is largely underexploited right now, despite being, by all accounts, a key component of LLM devops in the big labs. If anything, this is literal "prompt engineering": seeing models as near-physical structures under stress and providing the right feedback loops to make them more reliable.