Find the paper here

RAG

The Retrieval-Augmented Generation (RAG) technique offers a promising approach for leveraging large language models (LLMs) to build knowledge bases. Envision creating a chatbot capable of querying a collection of textbooks. A standard pre-trained LLM doesn’t inherently possess this capability. This is where RAG comes into play.

RAG works by splitting your corpus into smaller segments, or documents, each ideally small enough to fit within the context window of the language model in use. These segments are converted into vector embeddings ahead of time. At query time, the user query is embedded as well, and a measure such as cosine similarity matches the most relevant documents to the query. The language model then synthesizes a response from the actual user query and the matched documents (typically the top_k documents).
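
To make this concrete, here is a bare-bones retrieval sketch in Python. It is only a sketch under stated assumptions: embed stands in for whichever embedding model you use, the corpus is assumed to be pre-chunked into strings, and in practice the document embeddings would be precomputed and stored in a vector index rather than recomputed per query.

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Placeholder: call your embedding model/API of choice and return a vector.
        raise NotImplementedError

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity between two embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def retrieve(query_text: str, documents: list[str], top_k: int = 3) -> list[str]:
        # Embed the query text, score every chunk, and keep the top_k matches.
        query_vec = embed(query_text)
        scored = [(cosine_similarity(query_vec, embed(doc)), doc) for doc in documents]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [doc for _, doc in scored[:top_k]]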

Challenges with RAG

While RAG produces impressive results, it does come with its challenges:

  • Choosing the right embedding
  • Efficiently chunking larger documents
  • Determining the optimal top_k
  • Enhancing the retrieval of pertinent documents

HyDE

Recently, I have found myself relying more and more on an approach called HyDE (Hypothetical Document Embeddings). This method targets the last challenge above: improving the retrieval of pertinent documents. The unique proposition of HyDE is that it generates a synthetic response to the user query, and it is this synthetic response, rather than the query itself, that is converted into a vector embedding and used for retrieval. Such an approach proves particularly advantageous in domains where answers can be open-ended, like qualitative research, medical transcripts, and case studies, where it both enhances retrieval accuracy and broadens coverage.

However, during the final LLM phase, only the identified relevant documents and the original user query are forwarded.
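
Put together, a HyDE-style pipeline differs from plain RAG only in what gets embedded for retrieval. The sketch below reuses the hypothetical embed/retrieve helpers from the earlier snippet and assumes a generate function wrapping whatever LLM you use; all of these names are illustrative, not part of any particular library.

    def generate(prompt: str) -> str:
        # Placeholder: call your LLM of choice with the prompt and return its completion.
        raise NotImplementedError

    def hyde_answer(query: str, documents: list[str], top_k: int = 3) -> str:
        # 1. Generate a synthetic (hypothetical) answer to the query.
        hypothetical = generate(f"Write a short passage that answers: {query}")
        # 2. Retrieve using the embedding of the synthetic answer, not the raw query.
        matched = retrieve(hypothetical, documents, top_k=top_k)
        # 3. The final LLM call sees only the original query and the matched documents.
        context = "\n\n".join(matched)
        return generate(
            "Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}"
        )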

Quick Example

Imagine a medical database containing thousands of case studies, research papers, and patient records. A user wants to know about the side effects of a specific medication called “MediXyz.”

User Query: “What are the side effects of MediXyz?”

The system first generates a hypothetical (synthetic) response: “The common side effects of MediXyz include dizziness, nausea, and headaches.” This synthetic response is then converted into a vector embedding. Using this embedding, the system searches the medical database to find the most relevant documents. The top_k most relevant documents are then sent to the LLM, along with the original user query, producing the final answer: “MediXyz has several side effects. The most common ones include dizziness, nausea, and headaches. Some patients also reported fatigue and dry mouth. It’s essential to consult with a healthcare professional regarding any concerns.”
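
With the hypothetical helpers sketched earlier, this flow would look roughly as follows; the corpus snippets here are made up purely for illustration.

    corpus = [
        "Case report: a patient on MediXyz experienced dizziness and nausea ...",
        "Trial notes: headaches and fatigue were recorded in the MediXyz arm ...",
        "Unrelated cardiology guidelines ...",
    ]
    answer = hyde_answer("What are the side effects of MediXyz?", corpus, top_k=2)
    print(answer)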

By first generating a synthetic response and then using its embedding for retrieval, the HyDE approach can potentially enhance the accuracy and relevance of the documents retrieved, leading to a more informative and accurate final response to the user.

Constraints of HyDE

When generating synthetic responses, a few considerations are crucial:

  • Language models trained with RLHF, such as gpt-3.5-turbo or even gpt-4, may not be optimal for synthetic response generation; in some instances they produce misleading outputs. It’s therefore advisable to rely on older or base pre-trained models such as LLaMA, FLAN, or gpt-instruct.
  • Crafting the persona used to generate the synthetic response is essential. To achieve broader and less biased retrieval, consider generating multiple personas (see the sketch after this list).
  • It’s important to note that synthetic responses might increase the risk of hallucination. There’s a possibility that responses might deviate significantly from the original query, leading to matches with irrelevant documents.
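
One way to act on the persona point above is to generate several synthetic responses from different personas and average their embeddings before searching, in the spirit of how the HyDE paper samples multiple hypothetical documents. The sketch below reuses the hypothetical embed, generate, and cosine_similarity helpers from earlier; the persona prompts are made up for illustration.

    import numpy as np

    def multi_persona_retrieve(query: str, documents: list[str], top_k: int = 3) -> list[str]:
        # Generate one synthetic answer per persona, embed each, and average the
        # vectors into a single, less biased retrieval query.
        personas = [
            "You are a clinical researcher.",
            "You are a practicing physician.",
            "You are a patient describing their experience.",
        ]
        vectors = [
            embed(generate(f"{persona}\nWrite a short passage answering: {query}"))
            for persona in personas
        ]
        query_vec = np.mean(vectors, axis=0)
        # Score documents against the averaged vector exactly as in plain retrieval.
        scored = [(cosine_similarity(query_vec, embed(doc)), doc) for doc in documents]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [doc for _, doc in scored[:top_k]]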

In conclusion, while RAG provides a foundation for building knowledge bases using LLMs, the introduction of HyDE offers a potential solution to some of its inherent challenges, especially in the domain of document retrieval. Proper implementation and careful considerations can make this approach a game-changer in the realm of AI-driven knowledge retrieval.