Getting Started

Integrate Pythia with RAG-based Systems

Overview

Retrieval Augmented Generation (RAG) combines the strengths of retrieval and generative approaches in Natural Language Processing (NLP). Together, these techniques improve the accuracy of LLM outputs and produce more coherent, grounded text.

The Wisecube Python SDK lets you integrate Pythia with RAG-based systems for real-time hallucination detection.

Integrating Pythia with RAG-based Systems

1. Get the API key

Submit the API key request form to get your Wisecube API key.

2. Install Wisecube SDK

pip install wisecube

3. Install LangChain libraries

Install language processing libraries from LangChain.

%pip install --upgrade --quiet wisecube langchain langchain-community langchainhub langchain-openai langchain-chroma bs4

4. Authenticate API key

Authenticate your Wisecube API key and OpenAI API key. The os and getpass modules let you enter and store the keys without hard-coding them in your script.

import os
from getpass import getpass

API_KEY = getpass("Wisecube API Key:")
OPENAI_API_KEY = getpass("OpenAI API Key:")
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
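If a key is already exported in your shell, a small guard avoids prompting for it again. The helper below is a hypothetical convenience (not part of the SDK), built only on os.environ and getpass:

```python
import os
from getpass import getpass

def get_key(env_var: str, prompt: str) -> str:
    """Return an API key from the environment, prompting only if it is unset."""
    value = os.environ.get(env_var)
    if not value:
        value = getpass(prompt)
        os.environ[env_var] = value  # cache for the rest of the session
    return value

# Example usage:
# OPENAI_API_KEY = get_key("OPENAI_API_KEY", "OpenAI API Key:")
```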

5. Create an OpenAI instance

Create an OpenAI instance and specify the AI model to interact with the OpenAI API.

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0125")

6. Create a RAG system

Create your RAG system that loads a knowledge base and processes the extracted information for retrieval and generation.

from langchain import hub
from langchain_chroma import Chroma
from langchain_community.document_loaders import WebBaseLoader
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load, chunk, and index the contents of the page.
loader = WebBaseLoader("https://my.clevelandclinic.org/health/diseases/7104-diabetes")
docs = loader.load()

text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
splits = text_splitter.split_documents(docs)
vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())

# Retrieve and generate using the relevant snippets of the blog.
retriever = vectorstore.as_retriever()
prompt = hub.pull("rlm/rag-prompt")

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
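Two pieces of the chain above are easy to sanity-check in isolation. In this sketch, the Document dataclass is a minimal stand-in for LangChain's class, and naive_chunks only illustrates what chunk_size and chunk_overlap mean; it is not the RecursiveCharacterTextSplitter algorithm:

```python
from dataclasses import dataclass

@dataclass
class Document:
    """Minimal stand-in for LangChain's Document (page_content only)."""
    page_content: str

def format_docs(docs):
    # Same helper as in the chain: join retrieved snippets with blank lines.
    return "\n\n".join(doc.page_content for doc in docs)

def naive_chunks(text, chunk_size, chunk_overlap):
    """Fixed-size chunking with overlap, for illustration only."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

docs = [Document("Diabetes is a chronic condition."), Document("It affects blood sugar.")]
print(format_docs(docs))
# Each chunk repeats the last `chunk_overlap` characters of the previous one.
print(naive_chunks("abcdefghij", chunk_size=4, chunk_overlap=2))
```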

7. Generate output from RAG

Define a question, then retrieve the reference documents and generate a response; both are needed for hallucination detection.

question = "What is diabetes?"
reference = retriever.invoke(question)
response = rag_chain.invoke(question)

8. Detect hallucinations with Pythia

Create a Wisecube client instance and call ask_pythia to check the RAG output against the retrieved reference.

from wisecube import WisecubeClient  # import path may vary by SDK version

qa_client = WisecubeClient(API_KEY).client
response_from_sdk = qa_client.ask_pythia(reference[0].page_content, response, question)
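Steps 7 and 8 can be folded into a single helper. The wrapper below is hypothetical (not part of the Wisecube SDK); it reuses only the retriever.invoke, rag_chain.invoke, and ask_pythia calls shown above, so it works with any objects that expose those methods:

```python
def answer_with_pythia(question, retriever, rag_chain, qa_client):
    """Run RAG for a question and check the answer with Pythia.

    Returns (answer, pythia_result). Assumes the interfaces used above:
    retriever.invoke -> list of documents, rag_chain.invoke -> str,
    qa_client.ask_pythia(reference, response, question) -> detection result.
    """
    reference = retriever.invoke(question)  # documents grounding the answer
    response = rag_chain.invoke(question)   # generated answer
    result = qa_client.ask_pythia(reference[0].page_content, response, question)
    return response, result

# Usage (with the objects built in steps 5-8):
# answer, report = answer_with_pythia("What is diabetes?", retriever, rag_chain, qa_client)
```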


© 2024 Wisecube AI