Integrate Pythia with LangChain

Overview

The LangChain framework enables the development of LLM applications that generate contextually relevant responses. Pre-built prompt templates, chaining capabilities, and memory modules let developers build complex pipelines that retain and recall information across the workflow.

The Wisecube Python SDK integrates Pythia into the LangChain ecosystem, enabling real-time hallucination detection on LLM responses.

Integrating Pythia with LangChain

1. Get the API key

Submit the API key request form to get your Wisecube API key.

2. Install Wisecube SDK

pip install wisecube
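
To confirm the installation, you can try the client import used later in this guide:

python -c "from wisecube_sdk.client import WisecubeClient"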

3. Install LangChain libraries

Install the required language processing libraries from LangChain in your terminal.

pip install langchain_core
pip install langchain_openai
pip install langchain_community

4. Install knowledge base libraries

Install the client libraries for your knowledge base. This guide uses Wikidata.

pip install wikibase-rest-api-client
pip install mediawikiapi
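
Before wiring everything together, it helps to see the raw Wikidata output that the wrapper function in step 6 will parse. A minimal sketch (the exact fields returned, such as Description and subclass of, vary by entity):

from langchain_community.tools.wikidata.tool import WikidataAPIWrapper, WikidataQueryRun

# Query Wikidata directly; the result is a plain string with one field per line,
# e.g. "Description: ..." and "subclass of: ...".
wikidata = WikidataQueryRun(api_wrapper=WikidataAPIWrapper())
print(wikidata.run("diabetes"))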

5. Authenticate API key

Authenticate your Wisecube API key and OpenAI API key. The os and getpass modules let you enter the keys at runtime instead of hard-coding them in the script.

import os
from getpass import getpass

API_KEY = getpass("Wisecube API Key:")
OPENAI_API_KEY = getpass("OpenAI API Key:")
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
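
Optionally, a quick sanity check (a minimal sketch) fails fast if either key was left blank:

# Both keys are required by the later steps.
if not API_KEY:
    raise ValueError("Wisecube API key is empty")
if not os.environ.get("OPENAI_API_KEY"):
    raise ValueError("OpenAI API key is empty")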

6. Define a function to connect Pythia with LangChain

Define a wrapper function that queries the knowledge base with the input question and asks Pythia to check the Wikidata response for hallucinations.

import re
from langchain_core.runnables import RunnablePassthrough
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from langchain_community.tools.wikidata.tool import WikidataAPIWrapper, WikidataQueryRun
from wisecube_sdk.client import WisecubeClient

def get_wikidata_and_sdk_response(question):

    # Query Wikidata with the input question.
    wikidata = WikidataQueryRun(api_wrapper=WikidataAPIWrapper())
    wik = wikidata.run(question)
    print("Wikidata Response:", wik)

    # Extract the fields needed by ask_pythia from the response string.
    description_pattern = r"Description: (.+)"
    subclass_of_pattern = r"subclass of: (.+)"
    reference = re.search(description_pattern, wik).group(1)
    response = re.search(subclass_of_pattern, wik).group(1)

    # Ask Pythia whether the extracted response is supported by the reference.
    qa_client = WisecubeClient(API_KEY).client
    response_from_sdk = qa_client.ask_pythia(reference, response, question)
    print("SDK Response:", response_from_sdk)

    return {
        'result': wik,
        'response_from_sdk': response_from_sdk
    }
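
You can call the wrapper on its own to inspect both responses before adding it to a chain, for example:

# Standalone call; prints the Wikidata and Pythia responses along the way.
output = get_wikidata_and_sdk_response("diabetes")
print(output["response_from_sdk"])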

7. Define a LangChain pipeline

To detect hallucinations in the LLM response, create RunnablePassthrough, PromptTemplate, and ChatOpenAI instances and compose them into a LangChain chain.

retriever = RunnablePassthrough()
prompt = PromptTemplate.from_template(
    """Answer the question based on the following responses: {result} and {response_from_sdk}"""
)
llm = ChatOpenAI()

chain = (
    retriever
    | (lambda question: get_wikidata_and_sdk_response(question))
    | prompt
    | llm
)

question = "diabetes"
result = chain.invoke(question)

print("Final Result:", result)

© 2024 Wisecube AI