Source transparency in LLM information retrieval systems
While designing my LLM-powered chatbot, which answers questions with reference to a limited subset of my writing, I have been thinking about source attribution. The intent is to help people better evaluate the veracity, balance, and context of an answer returned by a model. Hallucination and its implications are at the forefront of my mind. I want to do what I can to ensure people can easily fact-check the outputs of an LLM retrieval system.
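To make the idea concrete, here is a minimal sketch of one way an answer could carry its sources through a retrieval pipeline. Everything in it is hypothetical: the `Document` type, the `answer_with_sources` function, and the example URLs are illustrations, not my actual implementation, and the keyword-overlap retrieval is a stand-in for a real embedding or search index.

```python
# A minimal sketch: keep source URLs attached to retrieved passages so
# they can be returned alongside the answer for fact checking.
from dataclasses import dataclass


@dataclass
class Document:
    url: str   # where a reader can verify the passage
    text: str  # the passage shown to the model as context


def answer_with_sources(question: str, corpus: list[Document]) -> dict:
    """Return an answer together with the documents that informed it."""
    # Stand-in retrieval: keep documents that share a word with the
    # question. A real system would use embeddings or a search index.
    words = set(question.lower().split())
    retrieved = [d for d in corpus if words & set(d.text.lower().split())]

    # A model would generate the answer from `retrieved`; the key point
    # is that the source URLs travel with the answer.
    return {
        "answer": "<model output grounded in the passages below>",
        "sources": [d.url for d in retrieved],
    }


corpus = [
    Document("https://example.com/coffee-brewing", "Pour over brewing notes"),
    Document("https://example.com/llms", "Notes on LLM retrieval systems"),
]
print(answer_with_sources("How do LLM retrieval systems work?", corpus))
```

The design choice this illustrates is simply that attribution metadata should never be separated from the text it describes: if the sources ride along with every retrieved passage, the final answer can always point a reader back to something they can check.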