
LangChain Handbook: Quick Start Guide

Author: AI makes life better

Quick Start Guide

This tutorial gives you a quick overview of how to build an end-to-end language model application using LangChain.

Installation

First, install LangChain using the following command:

pip install langchain
# or
conda install langchain -c conda-forge
           

Environment settings

Using LangChain often requires integration with one or more model providers, data stores, APIs, and so on.

For this example, we'll be using OpenAI's API, so we'll first need to install their SDK:

pip install openai
           

Then we need to set the OPENAI_API_KEY environment variable in the terminal:

export OPENAI_API_KEY="..."
           

Alternatively, you can do this from a Jupyter notebook (or Python script):

import os
os.environ["OPENAI_API_KEY"] = "..."
           

If you want to set the API key dynamically (for example, to use a different key for each user), you can pass the openai_api_key parameter when instantiating the OpenAI class:

from langchain.llms import OpenAI
llm = OpenAI(openai_api_key="...")  # replace "..." with the user's actual API key
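
As a minimal sketch of reading the key at runtime instead of hardcoding it (this uses the standard-library getpass module; the prompt text is just an illustration):

from getpass import getpass
from langchain.llms import OpenAI

# Read the key from the user without echoing it to the terminal.
api_key = getpass("OpenAI API key: ")
llm = OpenAI(openai_api_key=api_key)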
           

Build a language model application: LLM

Now that we have installed LangChain and set up our environment, we can start building our language model application.

LangChain provides a number of modules that you can use to build language model applications. Modules can be combined to create more complex applications, or used alone for simple applications.

LLM: Get predictions from a language model

The most basic building block of LangChain is calling an LLM on some input. Let's walk through a simple example of how to do this. Suppose we're building a service that generates company names based on what the company makes.

To do this, we first need to import the LLM wrapper:

from langchain.llms import OpenAI
           

Then we can initialize the wrapper with any arguments. In this example, we probably want the output to be more random, so we'll initialize it with a high temperature:

llm = OpenAI(temperature=0.9)
           

We can now call it on some input!

text = "What would be a good company name for a company that makes colorful socks?"
print(llm(text))
           
Feetful of Fun
           

For more details on how to use LLM in LangChain, see the LLM Getting Started Guide.

Prompt Templates: Manage prompts for LLMs

Calling an LLM is a great first step, but it's just the beginning. Typically, when you use an LLM in an application, you don't send user input directly to the LLM. Instead, you take the user input, construct a prompt from it, and send that to the LLM.

For example, in the previous example, the text we passed in was hardcoded to ask for a name for a company that makes colorful socks. In this imaginary service, what we'd want to do is take only the user input describing what the company does, and then format the prompt with that information.

It's easy to do this with LangChain!

First, let's define the prompt template:

from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
           

Now let's see how this works! We can call the .format method to format it.

print(prompt.format(product="colorful socks"))
           
What is a good name for a company that makes colorful socks?
           

For more details, check out the getting started guide for prompts.

Chains: Combine LLMs and prompts in multi-step workflows

So far, we've used the PromptTemplate and LLM primitives on their own. But of course, a real application is not just one primitive, but a combination of them.

A chain in LangChain is made up of links, which can be either primitives such as LLMs or other chains.

The core chain type is LLMChain, which consists of a PromptTemplate and an LLM.

Extending the previous example, we can build an LLMChain that takes user input, formats it with a PromptTemplate, and then passes the formatted prompt to the LLM.

from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
           

We can now create a very simple chain that takes user input, formats the prompt with it, and then sends it to the LLM:

from langchain.chains import LLMChain
chain = LLMChain(llm=llm, prompt=prompt)
           

Now we can run the chain, specifying only the product!

chain.run("colorful socks")
# -> '\n\nSocktastic!'
           

There we go! That's the first chain: an LLMChain. It's one of the simpler types of chains, but understanding how it works will prepare you to work with more complex ones.

For more details, check out the getting started guide for chains.

Agents: Dynamically call chains based on user input

The chains we have seen so far run in a predetermined order.

Agents no longer do this: they use an LLM to determine which actions to take and in what order. An action can be either using a tool and observing its output, or returning to the user.

When used correctly, agents can be extremely powerful. In this tutorial, we'll show you how to easily use agents through the simplest, highest-level API.

In order to load agents, you should understand the following concepts:

  • Tools: Functions that perform specific tasks. These can be things like Google Search, database lookups, a Python REPL, or other chains. The interface for a tool is currently a function that takes a string as input and returns a string as output (see the sketch after this list).
  • LLM: The language model that powers the agent.
  • Agent: The agent to use. This should be a string that references a supported agent class. Because this notebook focuses on the simplest, highest-level API, this only covers using the standard supported agents. If you want to implement a custom agent, see the documentation on custom agents (coming soon).
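
Since the tool interface is just a string-in, string-out function, a minimal custom tool can be sketched as follows (word_length is a hypothetical helper written for this example, not part of LangChain):

from langchain.agents import Tool

# A hypothetical string-in/string-out helper, for illustration only.
def word_length(query: str) -> str:
    return str(len(query.split()))

length_tool = Tool(
    name="WordCounter",
    func=word_length,
    description="Counts the number of words in the input text.",
)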

Agents: A list of supported agents and their specifications can be found here.

Tools: A list of predefined tools and their specifications can be found here.

For this example, you also need to install the SerpAPI Python package.

pip install google-search-results
           

and set the appropriate environment variables.

import os
os.environ["SERPAPI_API_KEY"] = "..."
           

Now we can get started!

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI

# First, let's load the language model we're going to use to control the agent.
llm = OpenAI(temperature=0)

# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
tools = load_tools(["serpapi", "llm-math"], llm=llm)


# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

# Now let's test it out!
agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")
           
> Entering new AgentExecutor chain...
 I need to find the temperature first, then use the calculator to raise it to the .023 power.
Action: Search
Action Input: "High temperature in SF yesterday"
Observation: San Francisco Temperature Yesterday. Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...
Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.
Action: Calculator
Action Input: 57^.023
Observation: Answer: 1.0974509573251117

Thought: I now know the final answer
Final Answer: The high temperature in SF yesterday in Fahrenheit raised to the .023 power is 1.0974509573251117.

> Finished chain.
           

Memory: Add state to chains and agents

So far, all the chains and agents we've gone through have been stateless. But often you may want a chain or agent to have some concept of "memory" so that it can remember information about its previous interactions. The clearest and simplest example of this is designing a chatbot: you want it to remember previous messages so it can use that context to have a better conversation. This would be a kind of "short-term memory". On the more complex side, you could imagine a chain or agent remembering key pieces of information over time; this would be a form of "long-term memory". For more concrete ideas on the latter, see this awesome paper.

LangChain provides several chains created specifically for this purpose. This notebook walks through using one of those chains (the ConversationChain) with two different types of memory.

By default, the ConversationChain has a simple type of memory that remembers all previous inputs/outputs and adds them to the context that is passed along. Let's take a look at using this chain (setting verbose=True so we can see the prompt).

from langchain import OpenAI, ConversationChain

llm = OpenAI(temperature=0)
conversation = ConversationChain(llm=llm, verbose=True)

output = conversation.predict(input="Hi there!")
print(output)
           
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hi there!
AI:

> Finished chain.
' Hello! How are you today?'
           
output = conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
print(output)
           
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hi there!
AI:  Hello! How are you today?
Human: I'm doing well! Just having a conversation with an AI.
AI:

> Finished chain.
" That's great! What would you like to talk about?"
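
The ConversationChain above remembers every previous input/output. As a sketch of the second memory type, ConversationBufferWindowMemory keeps only the last k exchanges in the prompt context (k=2 here is just an illustration):

from langchain import OpenAI, ConversationChain
from langchain.memory import ConversationBufferWindowMemory

llm = OpenAI(temperature=0)
# Only the two most recent exchanges are passed into the prompt.
conversation = ConversationChain(
    llm=llm,
    memory=ConversationBufferWindowMemory(k=2),
    verbose=True,
)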
           

Build a language model application: Chat model

Similarly, you can use chat models instead of LLMs. Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: instead of a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.

The Chat Model API is fairly new, so we're still looking for the right abstraction.

Get message completions from the chat model

You can get chat completions by passing one or more messages to the chat model. The response will be a message. The message types currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage; ChatMessage takes an arbitrary role parameter. Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage.

from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)

chat = ChatOpenAI(temperature=0)
           

You can get a completion by passing in a single message.

chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
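
For completeness, ChatMessage lets you spell out the role explicitly. A minimal sketch (note that a provider will typically only accept the roles it defines, such as "user", "assistant", and "system"):

from langchain.schema import ChatMessage

# Equivalent to the HumanMessage above, with an explicit role string.
chat([ChatMessage(role="user", content="Translate this sentence from English to French. I love programming.")])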
           

You can also pass in multiple messages for OpenAI's gpt-3.5-turbo and gpt-4 models.

messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="I love programming.")
]
chat(messages)
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
           

You can go one step further and use generate to produce completions for multiple sets of messages. This returns an LLMResult with an additional message parameter:

batch_messages = [
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love programming.")
    ],
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love artificial intelligence.")
    ],
]
result = chat.generate(batch_messages)
result
# -> LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}})
           

You can recover things like token usage from this LLMResult:

result.llm_output['token_usage']
# -> {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}
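
You can also pull the individual completions out of the result. A small sketch reusing the result object above:

result.generations[0][0].text
# -> "J'aime programmer."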
           

Chat prompt template

Similar to LLMs, you can make use of templating with a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt, which returns a PromptValue that you can convert to a string or Message objects, depending on whether you want to use the formatted value as input to an LLM or a chat model.

For convenience, a from_template method is exposed on the template. Using it looks like this:

from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

chat = ChatOpenAI(temperature=0)

template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
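
To illustrate the PromptValue conversion mentioned above, the same formatted prompt can be rendered either way (a sketch reusing chat_prompt from this example):

prompt_value = chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.")
prompt_value.to_string()    # a single string, suitable for a plain LLM
prompt_value.to_messages()  # a list of messages, suitable for a chat model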
           

Chains with chat models

The LLMChain discussed in the previous section can also be used with chat models:

from langchain.chat_models import ChatOpenAI
from langchain import LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

chat = ChatOpenAI(temperature=0)

template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
# -> "J'aime programmer."
           

Agents with chat models

Agents can also be used with chat models; initialize one using AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION as the agent type.

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI

# First, let's load the language model we're going to use to control the agent.
chat = ChatOpenAI(temperature=0)

# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)


# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

# Now let's test it out!
agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
           
> Entering new AgentExecutor chain...
Thought: I need to use a search engine to find Olivia Wilde's boyfriend and a calculator to raise his age to the 0.23 power.
Action:
{
  "action": "Search",
  "action_input": "Olivia Wilde boyfriend"
}

Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought:I need to use a search engine to find Harry Styles' current age.
Action:
{
  "action": "Search",
  "action_input": "Harry Styles age"
}

Observation: 29 years
Thought:Now I need to calculate 29 raised to the 0.23 power.
Action:
{
  "action": "Calculator",
  "action_input": "29^0.23"
}

Observation: Answer: 2.169459462491557

Thought:I now know the final answer.
Final Answer: 2.169459462491557

> Finished chain.
'2.169459462491557'
           

Memory: Add state to chains and agents

You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a single string, we can keep them as their own unique memory objects.

from langchain.prompts import (
    ChatPromptTemplate, 
    MessagesPlaceholder, 
    SystemMessagePromptTemplate, 
    HumanMessagePromptTemplate
)
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}")
])

llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)

conversation.predict(input="Hi there!")
# -> 'Hello! How can I assist you today?'


conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
# -> "That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?"

conversation.predict(input="Tell me about yourself.")
# -> "Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?"