Why Consider Knowledge Graph to Enhance Your RAG?

Angelina Yang
3 min read · May 30, 2024

Retrieval-Augmented Generation (RAG) has become a popular technique for grounding large language models and preventing them from hallucinating incorrect facts. However, basic RAG systems have some key limitations when dealing with complex questions that require reasoning over multiple pieces of information.


The Limitations of Basic RAG

In a basic RAG system, external text data is split into chunks or passages which are embedded into dense vector representations. When a user asks a question, the system retrieves the most relevant vector-embedded passages based on semantic similarity to the question. These retrieved passages are then fed as context to a language model to generate a final answer.
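
To make that flow concrete, here is a minimal sketch of such a pipeline in Python. It assumes the sentence-transformers library for the embeddings; the model name, the naive fixed-size chunking, and the call_llm placeholder are illustrative choices for this sketch, not anything prescribed in this article.

```python
# Minimal sketch of the basic RAG flow described above.
# The model name, chunking scheme, and call_llm stub are assumptions for illustration.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# 1. Split external text into chunks/passages (here: a naive fixed-size split).
def chunk(text: str, size: int = 200) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

documents = ["...your external text data..."]
passages = [p for doc in documents for p in chunk(doc)]

# 2. Embed every passage into a dense vector.
passage_vecs = embedder.encode(passages, normalize_embeddings=True)

# 3. At query time, embed the question and retrieve the most similar passages.
def retrieve(question: str, k: int = 3) -> list[str]:
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = passage_vecs @ q_vec  # cosine similarity, since vectors are normalized
    top_idx = np.argsort(scores)[::-1][:k]
    return [passages[i] for i in top_idx]

# 4. Feed the retrieved passages as context to a language model.
def answer(question: str) -> str:
    context = "\n\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)  # placeholder for whichever LLM client you use
```

Notice that every step treats passages as independent vectors; nothing in this pipeline records how the facts in one passage relate to the facts in another. That independence is exactly where the drawbacks below come from.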

While this allows language models to make use of external knowledge sources, there are several drawbacks:

  1. Vectorizing passages into fixed-length representations loses the explicit connections and relationships between the pieces of information each passage contains.
  2. Key details spread across multiple sentences or passages can get lost when each is embedded independently into a vector.
  3. Each passage is matched to the question independently, making it difficult to connect and reason over information that spans several passages.
