Looking into the Machine's Mind
An amazing visualization of the paths and trajectories of an LLM's responses... it's math all the way across. From the creator of this:

Using the ChatGPT API, I ran the same completion prompt "Intelligence is " hundreds of times (setting the temperature quite high, at 1.6, for more diverse responses). Given a text, a Large Language Model assigns a probability to the next word (token), and it just repeats this process until a completion is… well, complete.

Each text (a prompt completion or a sub-sequence) has an embedding: a position in a 1536-dimensional space (I call it semantic space, or s²₁₅₃₆). For each response there's a trajectory through s²₁₅₃₆ that corresponds to its sub-sequences of words, for example: "Intelligence is " → "Intelligence is the" → "Intelligence is the ability" → "Intelligence is the ability to" → … → full completion.

Because I cannot visualize a 1536-dimensional space (yet), I use a popular technique called Principal Component Analysis (PCA). It tells me, for the set of points I have, which dimensions are the most important (principal), and lets me rotate the high-dimensional space so that, when I look at it projected into only 3 dimensions, the points are scattered as much as possible. It's the best possible (linear) reduction of dimensions. In fewer words: it compresses a high-dimensional space into a few dimensions while preserving as much information as it can. It's more or less like choosing a perspective when drawing something (you rotate the object) so that it shows the most relevant information. I call this new space s²₃, and it's what I visualize.

What you see in the cube is a tree of trajectories that bifurcate. All start with "Intelligence is " and progress towards longer and less probable sub-sequences of responses. It's a different representation of the same tree being visualized on the right (the two visualizations communicate with each other).
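To make the sampling-and-embedding step concrete, here is a minimal Python sketch, assuming the OpenAI Python client, a legacy-style completion model (gpt-3.5-turbo-instruct is an assumption; the post only says "the ChatGPT API") and text-embedding-ada-002 for the 1536-dimensional embeddings. It samples many completions of "Intelligence is " at temperature 1.6 and embeds every cumulative prefix of each response.

```python
# Sketch: sample completions at high temperature and embed every prefix.
# Model names are assumptions; the post only mentions "the ChatGPT API"
# and a 1536-dimensional embedding space.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
PROMPT = "Intelligence is "

def sample_completions(n=200, temperature=1.6, max_tokens=40):
    """Run the same prompt many times to get diverse responses."""
    texts = []
    for _ in range(n):
        resp = client.completions.create(
            model="gpt-3.5-turbo-instruct",   # assumed completion model
            prompt=PROMPT,
            temperature=temperature,
            max_tokens=max_tokens,
        )
        texts.append(PROMPT + resp.choices[0].text)
    return texts

def prefixes(text):
    """Cumulative sub-sequences: 'Intelligence is', 'Intelligence is the', ..."""
    words = text.split()
    return [" ".join(words[:i]) for i in range(2, len(words) + 1)]

def embed(strings, batch_size=500):
    """1536-dimensional embedding for each input string, requested in batches."""
    vectors = []
    for i in range(0, len(strings), batch_size):
        resp = client.embeddings.create(
            model="text-embedding-ada-002", input=strings[i:i + batch_size]
        )
        vectors.extend(item.embedding for item in resp.data)
    return vectors

completions = sample_completions()
all_prefixes = [p for text in completions for p in prefixes(text)]
vectors = embed(all_prefixes)   # len(vectors[0]) == 1536
```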
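The PCA projection itself can be sketched with scikit-learn (an assumption; any PCA implementation would do): fit a 3-component PCA on the prefix embeddings and keep the projected coordinates for plotting.

```python
# Sketch: project the 1536-dimensional prefix embeddings down to 3 dimensions
# with PCA, i.e. rotate the space so the 3 retained axes capture as much
# variance (scatter) as possible. scikit-learn is an assumption here.
import numpy as np
from sklearn.decomposition import PCA

X = np.asarray(vectors)            # shape: (num_prefixes, 1536)
pca = PCA(n_components=3)
points_3d = pca.fit_transform(X)   # shape: (num_prefixes, 3) — the s²₃ coordinates

# How much of the original variance survives the projection:
print(pca.explained_variance_ratio_, pca.explained_variance_ratio_.sum())
```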
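Finally, a sketch of how the bifurcating tree could be assembled: identical prefixes collapse into a single node, and each prefix is connected to the prefix one word shorter, so every trajectory starts at "Intelligence is" and branches where completions diverge. The data structures here are assumptions; the post doesn't describe the tree-building code.

```python
# Sketch: collapse identical prefixes into shared nodes and link each prefix
# to its parent (the prefix one word shorter), yielding a tree rooted at the
# prompt. The node/edge representation is an assumption.
from collections import defaultdict

node_ids = {}                  # prefix text -> node index
coords = {}                    # node index -> projected 3-d position
children = defaultdict(set)    # parent node index -> child node indices

def node_for(prefix, point=None):
    if prefix not in node_ids:
        node_ids[prefix] = len(node_ids)
    idx = node_ids[prefix]
    if point is not None:
        coords[idx] = point    # identical prefixes share (nearly) identical embeddings
    return idx

for prefix, point in zip(all_prefixes, points_3d):
    idx = node_for(prefix, point)
    words = prefix.split()
    if len(words) > 2:         # everything above the root has a parent
        parent = node_for(" ".join(words[:-1]))
        children[parent].add(idx)

root = node_ids["Intelligence is"]   # all trajectories start here
```

Each completion's trajectory is then the path from the root to its full-text leaf, and the shared prefixes are where the tree bifurcates.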