Getting started with LangChain: How to run your first application
If you've been following large language models (LLMs) (and how could you not be?), you've probably stumbled across the name "LangChain" more than once.
Maybe you've heard it called a "revolutionary framework" or a "breakthrough platform" for building AI applications. But let's be honest: isn't that what's said about everything these days? What makes LangChain different from the rest of the tech hype floating around?
Well, that's exactly what we're going to find out. In this article, we'll move past the marketing buzzwords and discuss what LangChain actually is and what it's for. You'll also learn how to set up and run your first LangChain application using OpenAI's GPT models.
What is LangChain, and why use it?
I'll admit it: when I first started exploring LangChain, I was a bit confused. As I worked through the docs and examples, I kept asking myself, "What exactly is this, and why would I use it?" But after some trial and error and a lot of ChatGPT-ing (can we use that as a verb yet?), things started to click.
The best way to understand the "what" and "why" behind LangChain is to think through an example application.
This particular application performs financial market analysis and delivers insights to customers. It works its AI magic and then emails each customer personalized financial advice. To pull this off, it needs to hook into the following:
OpenAI: The application uses GPT models for financial research and general language processing, such as writing human-sounding emails.
Hugging Face: For international customers, the application uses a Hugging Face LLM for language translation (yes, GPT models can translate too, but let's go with it).
Your data: This data lives in various databases and documents. It includes customers' investment profiles, market preferences, and risk tolerance.
AWS: The application integrates with AWS services. It uses AWS Glue for data preparation and Amazon Redshift for data warehousing.
Email: An email service sends the application's final output to customers.
In short, this application needs to use two LLMs, customer data, and third-party services. That, my friends, is a perfect job for LangChain.
With that context in place, let's revisit the question: "What is LangChain?" In short, LangChain is a framework for developing applications powered by language models.
LangChain doesn't provide language models itself, but it lets you work with models from OpenAI (GPT), Anthropic, Hugging Face, Azure OpenAI, and more (though OpenAI's GPT models currently have the strongest support). You can also bring in your own data and call whatever other services your application needs.
How is LangChain different from the OpenAI Assistants API?
If you've been following OpenAI's recently released Assistants API, you might be thinking, "Isn't that exactly what LangChain does?" With Assistants, you can use your own data and interact with functions and external APIs. Will Assistants be a LangChain killer?
Not so fast. The capabilities are similar, but with the OpenAI Assistants API you can only use OpenAI's GPT models. LangChain, on the other hand, lets you work with many different models and generally gives you more control than the Assistants API does. So let's not write LangChain's obituary just yet.
Why the name "LangChain"?
The name "LangChain" comes from its core functionality. "Lang" refers to "language," highlighting its focus on applications that use large language models.
"Chain" refers to linking things together: connecting the various pieces (different language models, data sources, and tools) to create something bigger and better.
A fitting name, if you ask me.
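To make the "chain" idea concrete, here is a plain-Python sketch (no LangChain involved; `fake_llm` is a hypothetical stand-in for a real model call) showing how each step's output becomes the next step's input:

```python
# A plain-Python sketch of "chaining": format a prompt, pass it to a model,
# then post-process the result. Each step's output feeds the next step.

def make_prompt(topic: str) -> str:
    # Step 1: turn a raw value into a prompt string
    return f"Explain {topic} in one sentence."

def fake_llm(prompt: str) -> str:
    # Step 2: a stand-in for a real LLM call
    return f"[model answer to: {prompt}]"

def strip_brackets(text: str) -> str:
    # Step 3: a toy "output parser" that cleans up the raw response
    return text.strip("[]")

def chain(topic: str) -> str:
    # The "chain": step 1 -> step 2 -> step 3
    return strip_brackets(fake_llm(make_prompt(topic)))

print(chain("recursion"))
```

LangChain formalizes exactly this pattern, swapping the toy functions for real prompt templates, models, and output parsers.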
What's included in the LangChain framework?
LangChain is more than a library; it's a full-featured framework that helps you build, deploy, and monitor your AI applications. And don't worry, it's not an all-or-nothing commitment. You can pick and choose the components that fit your project.
LangChain libraries: The libraries are the backbone of the development process and are available in Python and JavaScript.
LangChain Templates: No need to start from scratch; grab a template instead. Popular templates include "build a chatbot with your own data" and "extract structured data from unstructured data."
LangSmith: A developer platform that lets you debug, test, evaluate, and monitor your chains.
LangServe: Once you’re ready to share your work with the world, this library helps you deploy your chains as a REST API, making your application accessible and interactive.
How to use LangChain: Building your first LangChain application
Now that we have some background and concepts under our belt, let’s dive in and write some code.
What you’ll need to follow along and how much this will cost
You’ll need an OpenAI API key (free and easy to get).
As far as costs go, there are no charges associated with LangChain, but OpenAI will charge you for the tokens you use. For this tutorial, it should amount to only cents, but check out the OpenAI pricing page for full details.
LangChain installation and setup
First things first. You’ll need to install LangChain from the terminal with the following command:
pip install langchain
For this tutorial, we’ll be using OpenAI’s APIs, so you need to install the OpenAI package as well:
pip install openai
Next, create a new Python file in your IDE of choice. I’ll be using VS Code. I’ll create a file called my-langchain-app.py and then add my import statements at the top of the file.
An overview of the LangChain modules
Before we get too far into the code, let’s review the modules available in the LangChain libraries.
Model I/O: The most common place to get started (and our focus in this tutorial). This module lets you interact with your LLM(s) of choice and includes building blocks like prompts, chat models, LLMs, and output parsers.
Retrieval: Work with your own data, otherwise known as Retrieval Augmented Generation (RAG).
Agents: Access other tools and chain together a sequence of actions.
The Model I/O module is core to everything else, so it will be the focus of this walk-through. The three components for this module are LLMs and Chat Models, prompts, and output parsers.
Let’s get into each of these components in more detail. See the documentation for more information on Retrieval and Agents.
1. LLMs and Chat Models (similar, but different)
Not all language models accept the same kind of input. To handle this, LangChain uses two different constructs.
LLMs: The model takes a string as input and returns a string. (Easy peasy.)
Chat Models: The model takes a list of messages as input and returns a message. (Huh?) A message contains the content of the message (usually a string) plus a role, which is the entity the BaseMessage is coming from. For example, it could be a HumanMessage, an AIMessage, a SystemMessage, or a FunctionMessage/ToolMessage.
To call an LLM or a Chat Model, you'll use the same .invoke() method; just be aware that you could be passing in a simple string or a list of messages.
Maybe some code will help. Let's start by working with the LLM. Add this code to your Python file (be sure to replace the placeholder with your OpenAI API key).
from langchain.llms import OpenAI
llm = OpenAI(openai_api_key="YOUR_OPENAI_API_KEY")
Now invoke the LLM, passing in a simple string for your prompt.
response = llm.invoke("What is the elevation of Mount Kilimanjaro?")
print(response)
Run the code from the terminal:
python my-langchain-app.py
Here’s a look at my completed code and response.
Now let’s see how to work with the Chat Model (the one that takes in a message instead of a simple string). Update your code to this:
from langchain.chat_models import ChatOpenAI
chat = ChatOpenAI(openai_api_key="YOUR_OPENAI_API_KEY")
Next, we’ll assemble our message, using a SystemMessage and a HumanMessage.
from langchain.schema.messages import HumanMessage, SystemMessage
messages = [
SystemMessage(content="You are a personal math tutor that answers questions in the style of Gandalf from The Hobbit."),
HumanMessage(content="I'm trying to understand calculus. Can you explain the basic idea?"),
]
And then invoke and print the output.
response = chat.invoke(messages)
print(response.content)
Run the code from the terminal:
python my-langchain-app.py
Here’s a look at my completed code and response.
Nicely done!
2. Prompts
The second component in the Model I/O module is prompts. We usually think of a "prompt" as a simple piece of text that we send to a model. But prompts can include more than that. For example, they might include system instructions (like "Act as an expert in Python programming") or parameters like temperature to control randomness. Or maybe you have a perfectly crafted prompt that you want to reuse, simply adding placeholders for specific values.
The prompt template helps with all of this, giving us a structured starting point for our prompts to pass to the LLM.
from langchain.prompts import PromptTemplate
# Create a prompt with placeholders for values
template = PromptTemplate.from_template("Tell me a {adjective} fact about {topic}.")
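Under the hood, filling in a prompt template works much like Python's own `str.format`. Here is a dependency-free sketch of the same idea (the template string is just an illustrative example, not part of any LangChain API):

```python
# Plain-Python equivalent of a prompt template: a reusable string
# with named placeholders that get filled in at call time.
template = "Tell me a {adjective} fact about {topic}."

# Fill the placeholders to produce the final prompt for the model
prompt = template.format(adjective="surprising", topic="Mount Kilimanjaro")
print(prompt)
```

With LangChain's PromptTemplate, you get the same reuse-with-placeholders pattern, plus validation of the input variables and easy composition with models and output parsers.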