
Development & AI | Alper Akgun

Learning LangChain

September, 2023

LangChain is a framework for building applications with large language models. It includes integrations with major cloud providers; API wrappers for news, movie, and weather data; and database wrappers. It offers summarization, syntax and semantic checking, shell script execution, web scraping, and few-shot prompt generation, and it integrates with all major LLM providers such as OpenAI, Anthropic, and Hugging Face.

Installation

pip install langchain
pip install 'langchain[llms]'  # modules for supported LLM providers (quotes keep the brackets from being expanded by the shell)
pip install 'langchain[all]'   # all optional modules

export OPENAI_API_KEY="..."  # set an environment variable
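Rather than hard-coding the key, it is common to read it from the environment inside Python. A minimal sketch using only the standard library (the variable name matches the export above):

```python
import os

# LangChain's OpenAI wrapper falls back to OPENAI_API_KEY when no key is
# passed explicitly, so one export covers every call in the process.
api_key = os.environ.get("OPENAI_API_KEY")
print("key configured:", api_key is not None)
```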

Make predictions using OpenAI

from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI

llm = OpenAI(openai_api_key="...")  # or set the OPENAI_API_KEY environment variable
chat_model = ChatOpenAI()

llm.predict("Hello!")         # returns a completion string
chat_model.predict("Hello!")  # returns the chat model's reply as a string
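The two interfaces differ in shape: a plain LLM maps a string to a string, while a chat model is built around role-tagged messages. A conceptual sketch with no LangChain or network calls (the `fake_*` names are illustrative stand-ins, not LangChain APIs):

```python
# A text LLM: str -> str.
def fake_llm(prompt):
    return "echo: " + prompt

# A chat model: list of role-tagged messages -> one reply message.
def fake_chat_model(messages):
    last = messages[-1]["content"]
    return {"role": "assistant", "content": "echo: " + last}

print(fake_llm("Hello!"))  # echo: Hello!
reply = fake_chat_model([{"role": "user", "content": "Hello!"}])
print(reply["content"])    # echo: Hello!
```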

Using a local model to make predictions

!pip install gpt4all langchain
!wget https://huggingface.co/TheBloke/orca_mini_3B-GGML/resolve/main/orca-mini-3b.ggmlv3.q4_0.bin

from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.llms import GPT4All
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])
llm = GPT4All(model="./orca-mini-3b.ggmlv3.q4_0.bin", callbacks=[StreamingStdOutCallbackHandler()], verbose=True)

llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What are neutrinos, and how were they discovered?"
llm_chain.run(question)
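Conceptually, `run` fills the prompt template with the input and calls the model on the rendered string. A minimal sketch of that flow, assuming a stand-in model so it runs without downloading weights (`run_chain` and the lambda are illustrative, not LangChain APIs):

```python
template = "Question: {question}\n\nAnswer: Let's think step by step."

def run_chain(llm, question):
    # Render the template, then pass the finished prompt to the model.
    prompt = template.format(question=question)
    return llm(prompt)

# Stand-in "model" that just reports what it was given:
result = run_chain(lambda p: "(model received %d chars)" % len(p),
                   "What are neutrinos?")
print(result)
```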