Implementing a Retrieval-Augmented Generation (RAG) System with OpenAI’s API using LangChain

ScaleX Innovation
11 min read · Nov 9, 2023


Abstract

In this comprehensive guide, you’ll learn how to implement a Retrieval-Augmented Generation (RAG) system using OpenAI’s API and LangChain. The tutorial walks you through setting up the necessary environment, installing key Python libraries, and authenticating with the OpenAI API. It then covers data preparation, vectorization for efficient search, and the nuances of crafting an AI conversation flow with LangChain. The resulting RAG system enables your chatbot to deliver precise, context-aware responses drawn from a dedicated knowledge base, such as detailed information about ScaleX Innovation.

Synergizing Knowledge: Powering AI Conversations with RAG Systems

Learning Outcomes

1- Understanding the RAG system setup to enrich conversational AI with specialized knowledge.

2- Preparing and vectorizing data to build a powerful information retrieval system.

3- Crafting a conversational AI that leverages the RAG system for interactive and informed dialogue.

Resources

  • Link to Google Colab [Link]
  • Link to the scalexi.txt text file. [Link]

Introduction

Retrieval-augmented generation (RAG) is a significant development in conversational artificial intelligence (AI) that enhances the interaction between machines and humans. RAG synthesizes the strengths of both retrieval-based and generative models, providing a more adaptive and nuanced system for responding to user queries. This approach allows conversational AI to deliver more precise and contextually relevant responses by drawing upon a vast array of external information sources in addition to the model’s internal knowledge.

In practice, RAG operates by first formulating a user’s query, then scouring large databases to retrieve pertinent information, selecting the most relevant data, and finally generating a response that is informed by this external knowledge. This makes RAG particularly effective for question-answering systems where current and reliable facts are paramount, as it ensures the responses are not just plausible but factually accurate.
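To make that pipeline concrete, here is a minimal conceptual sketch in Python. The retriever and llm objects and their methods are hypothetical stand-ins for intuition only; the rest of this tutorial builds the real pipeline with LangChain components.

# Conceptual sketch of the RAG loop (hypothetical objects, for intuition only)
def rag_answer(query, retriever, llm):
    # 1. Retrieve documents relevant to the query
    docs = retriever.search(query)
    # 2. Select the most relevant passages as context
    context = "\n".join(doc.text for doc in docs[:3])
    # 3. Generate a response grounded in the retrieved context
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return llm.generate(prompt)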

The objective of this tutorial is to demonstrate how to set up a RAG system in a Google Colab environment using OpenAI’s API. By following along, readers will learn to create a conversational AI that can retrieve information from data sources such as text files and use that data to inform the conversations it has with users. This not only elevates the quality of the conversation by making it more informative and relevant but also showcases the potential of RAG to transform the capabilities of conversational AI across various applications.

The Intuition Behind Retrieval-Augmented Generation (RAG)

In the landscape of conversational AI, Large Language Models (LLMs) are akin to encyclopedic repositories of general knowledge. They have an extensive breadth of information but often lack depth in specific, localized contexts, such as the intricacies of a company’s internal database or the specialized findings of a research paper.

Consider a scenario where one needs to extract precise information from a company’s proprietary database — let’s use ScaleX Innovation as an illustrative example. An LLM by itself may not hold the specificities about ScaleX Innovation’s personnel or the granular details embedded in static documents that are frequently updated and not widely disseminated.

Enter Retrieval-Augmented Generation. RAG acts as a bridge, connecting the vast but generalized knowledge of LLMs with the detailed, context-rich data contained within discrete sources like text files or a compendium of ScaleX Innovation’s documents. This enables the creation of a chatbot that can respond to queries with accuracy and specificity.

In this application, we leverage a text repository encompassing all pertinent information about ScaleX Innovation. The RAG system is then tasked with enabling a chatbot to assist users in navigating this repository, providing informed responses to inquiries about the company. Whether a user seeks knowledge about organizational structure, project details, or company policies, the RAG-equipped chatbot can furnish precise information, drawing directly from ScaleX Innovation’s curated knowledge base.

Through this tutorial, we will detail the process of constructing such a chatbot, ensuring it serves as a comprehensive interactive tool for information related to ScaleX Innovation.

Let’s get started

Setting Up Your Environment on Google Colab

Google Colab is a cloud-based platform that offers a free environment to run Python code, which is especially handy for data-heavy projects. It’s a favorite among data scientists for its ease of use and powerful computing resources. Plus, its compatibility with Google Drive allows for effortless file management — key for our RAG system setup.

Here’s a quick guide to get your Google Drive hooked up with Colab:

  1. Open your Colab notebook via this direct link.
  2. Mount your Google Drive by running:
from google.colab import drive

# Mount Google Drive so your files are accessible from the notebook
drive.mount('/content/drive/')

After execution, follow the prompt to authorize access, and you’re all set to access your Drive files right from Colab.
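If you want a quick, optional sanity check, list a few entries from your Drive root (the path below assumes the default MyDrive layout):

import os

# Optional: confirm the mount worked by listing a few top-level entries
print(os.listdir('/content/drive/MyDrive')[:5])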

Installing Dependencies for Your RAG System

Before we can start querying our RAG system, we need to install a few Python libraries that are crucial for its functioning. These libraries will help us with everything from accessing the OpenAI API to handling our data and running our retrieval models.

Here’s a list of the libraries we’ll be installing:

  • langchain: A toolkit for working with language models.
  • openai: The official OpenAI Python client, for interacting with the OpenAI API.
  • tiktoken: A fast Byte Pair Encoding (BPE) tokenizer built for OpenAI’s models, used to encode and decode text for natural language processing tasks.
  • faiss-gpu: A library for efficient similarity searching and clustering of dense vectors, GPU version for speed.
  • langchain_experimental: Experimental features for the langchain library.
  • langchain[docarray]: Installs langchain with additional support for handling complex document structures.

To install these, run the following commands in your Colab notebook:

!pip install langchain
!pip install openai
!pip install tiktoken
!pip install faiss-gpu
!pip install langchain_experimental
!pip install "langchain[docarray]"

After running these commands, your environment will be equipped with all the necessary tools to build a state-of-the-art RAG system.
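As an optional smoke test, you can verify the packages import cleanly before moving on (note that faiss-gpu installs as the faiss module):

# Optional smoke test: these imports should all succeed without errors
import langchain, openai, tiktoken, faiss
print("langchain version:", langchain.__version__)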

API Authentication: Securing Access with Your OpenAI API Key

Before we dive into the coding part, you’ll need to authenticate your access to the OpenAI API. This ensures that your requests to the API are secure and attributed to your account.

Here’s how to authenticate with your OpenAI API key:

  1. Prompt for the API Key: First, we’ll write a snippet that asks for your OpenAI API key when you run it. This is a sensitive piece of information, so never share it or hardcode it into your scripts.
import os
from getpass import getpass

# Prompt for the key without echoing it to the notebook output
api_key = getpass("Please enter your OpenAI API key: ")

2. Set the API Key as an Environment Variable: Next, we’ll set this key as an environment variable within our Colab session. This keeps it private and makes it accessible wherever we need it in our script.

# Set the API key as an environment variable
os.environ["OPENAI_API_KEY"] = api_key
# Optionally, check that the environment variable was set correctly
print("OPENAI_API_KEY has been set!")

With these steps, your session will be authenticated, and you’ll be ready to start building with the OpenAI API.
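For an optional check that your key actually works, you can list the models your account can access. This snippet assumes the pre-1.0 openai Python interface that was current when this guide was written:

import openai

# The library reads OPENAI_API_KEY from the environment; setting it explicitly is safest
openai.api_key = os.environ["OPENAI_API_KEY"]

# List available models as a quick authentication check (openai<1.0 interface)
models = openai.Model.list()
print(f"Authenticated; {len(models['data'])} models available")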

Building the RAG System: From Text to Conversational AI

Data Loading

Once we have our environment set up and authenticated, it’s time to build the core of our RAG system. Our goal is to create a conversational AI that can access and utilize the information from a text file, in this case scalexi.txt, which contains data about ScaleX Innovation.

If you’d like to follow along with the same data we’re using, you can access the scalexi.txt file [here].

Here’s how we’ll set up the RAG system:

  1. Load the Data: We’ll start by specifying the path to our text file and then use the TextLoader to load our data.
from langchain.document_loaders import TextLoader

# Path to the knowledge-base file on Google Drive
txt_file_path = '/content/drive/MyDrive/ScaleX Innovation/Tutorials and Notebooks/scalexi.txt'
loader = TextLoader(file_path=txt_file_path, encoding="utf-8")
data = loader.load()

2. Preprocess the Data: With CharacterTextSplitter, we’ll break our text into manageable chunks. Keeping chunks small makes them easy to embed and retrieve, and the overlap between chunks preserves context across split points so no detail is lost.

from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
data = text_splitter.split_documents(data)
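A quick, optional check confirms the split produced what we expect:

# Inspect the chunking result
print(f"{len(data)} chunks created")
print(data[0].page_content[:200])  # preview the start of the first chunk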

Creating the Vector Store: The Heart of RAG

At the heart of our RAG system is the Vector Store, a critical component that powers the searchability and retrieval capabilities of the chatbot. To understand how this works, let’s break it down into simpler terms.

What is a Vector Store?

A vector store is like a vast library where each book (or piece of information) is not just placed randomly on shelves but is meticulously indexed based on its content. In the digital world, this ‘indexing’ involves converting text into mathematical representations, known as vectors, which can be efficiently searched through.

How do we create it?

  1. Converting Text to Vectors with OpenAIEmbeddings:
  • OpenAIEmbeddings is a tool that takes text data and translates it into vectors. Just like translating English to French, here we translate human-readable text into a language that machines understand better — numbers.
  • These vectors capture the essence of the text, such as meaning and context, in a form that computational models can quickly process.
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
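To build intuition, you can embed a single query and inspect the result; the dimension noted below assumes OpenAI’s default text-embedding-ada-002 embedding model:

# Embed one query and inspect the vector it produces
sample_vector = embeddings.embed_query("What is ScaleX Innovation?")
print(len(sample_vector))  # 1536 dimensions for the default ada-002 model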

2. Storing Vectors with FAISS:

  • FAISS, which stands for Facebook AI Similarity Search, is a library developed by Facebook’s AI Research lab. It’s designed to store vectors so that we can search through them almost instantaneously.
  • Imagine you’re asking for a book in a library. Instead of checking every shelf, the librarian instantly points you to the right spot. That’s what FAISS does with information; it allows the AI to find the most relevant data points based on your query, without a time-consuming search.
from langchain.vectorstores import FAISS

vectorstore = FAISS.from_documents(data, embedding=embeddings)
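Before wiring the store into a conversation chain, you can exercise retrieval directly; the query here is just an example:

# Retrieve the two chunks most similar to a sample query
docs = vectorstore.similarity_search("What services does ScaleX Innovation offer?", k=2)
for doc in docs:
    print(doc.page_content[:150], "\n---")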

By creating a Vector Store with OpenAIEmbeddings and FAISS, we’re essentially giving our chatbot a powerful search engine. This search engine can sift through vast amounts of text to find the most relevant information in response to user queries.

With the Vector Store in place, our RAG system is now equipped with a sophisticated retrieval mechanism, enabling it to provide precise and contextually appropriate responses, much like an expert digging through a well-organized file cabinet to answer your questions.

Setting up the Conversation Chain: Bringing Our RAG Chatbot to Life

The conversation chain is where our RAG system truly comes to life, transforming it from a silent repository of information into a dynamic conversational partner. This is made possible by the LangChain library, a versatile toolkit for building applications that leverage the power of language models.

What is the Conversation Chain?

Think of the conversation chain as the chatbot’s “stream of thought.” It’s the sequence of steps the system takes when you ask it a question, from understanding your query to providing a relevant response. It’s what makes the chatbot seem intelligent and responsive.

How do we build it?

  1. Initializing the Conversational Model with ChatOpenAI:
  • We use ChatOpenAI, specifying gpt-4, OpenAI’s most capable model at the time of writing, known for its advanced understanding and generation of human-like text.
  • The temperature setting controls the creativity of the responses. A lower temperature means the responses will be more predictable and closer to the data it has been trained on, while a higher temperature allows for more creative responses.
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0.7, model_name="gpt-4")

2. Setting Up Memory with ConversationBufferMemory:

  • Just like a human conversation, our chatbot needs to remember what was said previously to maintain context. ConversationBufferMemory serves this purpose, storing the history of the interaction.
  • This memory allows the chatbot to reference earlier parts of the conversation, ensuring that each response is informed by what has already been discussed.
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)

3. Linking it All Together:

  • Finally, we tie the model and the memory together into a ConversationalRetrievalChain. This chain orchestrates the process of generating a response: retrieving relevant information with the help of the Vector Store and formulating a coherent reply.
  • The retriever parameter is where the Vector Store comes into play, pulling in the information needed to answer your questions accurately.
from langchain.chains import ConversationalRetrievalChain

conversation_chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
    memory=memory
)

LangChain’s role in this system is to provide the framework that makes our chatbot context-aware and capable of reasoning. By connecting the language model to sources of context — like the Vector Store — and enabling it to remember and reference past interactions, we create an AI that not only answers questions but understands the flow of conversation.

With these components in place, our chatbot is not just a source of information but a helpful conversationalist, ready to engage with users and provide them with accurate, contextually relevant information.

Our RAG system can now retrieve specific information from scalexi.txt and generate informative responses to user queries. It’s like giving your chatbot a direct line to ScaleX Innovation's knowledge base, enabling it to deliver detailed, accurate answers.

Interacting with the RAG System: Example Queries

Query 1: Understanding ScaleX Innovation

Your question: “What is ScaleX Innovation?”

query = "What is ScaleX Innovation?"
result = conversation_chain({"question": query})
answer = result["answer"]
answer

When you input this question, the system uses the conversation chain to pull relevant information to give you an overview of the company.

Expected response:

"The answer to your question is: ScaleX Innovation is a [brief description of the company]."

Query 2: Getting Contact Details

Your question: “What is the contact information?”

query = "What is the contact information?"
result = conversation_chain({"question": query})
answer = result["answer"]
answer

The system will search through the text file for contact details and present them to you.

Expected response:

"The contact information for ScaleX Innovation is as follows:

Address: Route Mahdia km 0.5, Pavillon d’Or Building, 3000 Sfax, Tunisia.

Email: info@scalexi.ai

Phone Number: +216-55-770-606

They can also be reached on WhatsApp at the same number."

Query 3: Identifying Key Activities

Your question: “What are the main activities of ScaleX Innovation? Write it as three bullet points.”

query = "What are the main activities of ScaleX Innovation. Write is as three bullet points."
result = conversation_chain({"question": query})
answer = result["answer"]
answer

The system will distill the information into a concise list format, focusing on the main activities of the company.

Expected response:

- Specializing in Generative AI and Large Language Models, offering bespoke solutions that drive innovation and automate workflows.
- Providing cross-domain consultation and business automation with a client-centric approach.
- Implementing custom Large Language Models, AI-enabled content and data analysis across multiple industry verticals.

These examples demonstrate how the RAG system serves as an interactive tool, providing tailored responses. By querying the system, users can access a wealth of specific information, formatted and delivered in a conversational manner.
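To streamline further experimentation, you might wrap the chain in a small helper. The follow-up question below is hypothetical, but it illustrates how ConversationBufferMemory lets the chatbot resolve references to earlier turns:

# A small convenience wrapper around the conversation chain
def ask(question: str) -> str:
    result = conversation_chain({"question": question})
    return result["answer"]

print(ask("What are the main activities of ScaleX Innovation?"))
# Thanks to the memory, a follow-up can refer back to the previous answer:
print(ask("Can you summarize that in one sentence?"))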

Conclusion: Empowering Conversations with AI

Throughout this journey, we’ve traversed the path from understanding the basics of Retrieval-Augmented Generation (RAG) to actually constructing a chatbot that can converse intelligently about a specific topic, ScaleX Innovation. You’ve seen how to set up your environment in Google Colab, authenticate with the OpenAI API, load and process data, create a vector store for information retrieval, and finally, bring a conversational AI to life using LangChain.

The power of RAG is now at your fingertips, and the possibilities are as vast as the data you can provide. Whether it’s company data, academic research, or any rich text source, you can tailor a RAG system to make that information conversational and accessible.

Call to Action: Join the AI Conversation

Now it’s your turn to take the reins. Experiment with your own datasets, formulate queries relevant to your interests or business needs, and see how the RAG system responds. The more you play with it, the more you’ll understand its capabilities and limitations.

I encourage you to share your experiences, challenges, and successes. Your insights are invaluable to the community and can help others on their AI journey. If you hit a roadblock or want to dive deeper into the world of conversational AI, don’t hesitate to reach out.

You can connect with me at info@scalexi.ai, where I’m always open to discussing innovative ideas or providing guidance.
