ZHIPU AI

This notebook shows how to use the ZHIPU AI API in LangChain with langchain_community.chat_models.ChatZhipuAI.

GLM-4 is a multilingual large language model aligned with human intent, with capabilities in Q&A, multi-turn dialogue, and code generation. Compared with the previous generation, the GLM-4 base model delivers significantly better overall performance, supports longer contexts and stronger multimodality, offers faster inference with higher concurrency at lower cost, and enhances intelligent-agent capabilities.

Getting started

Installation

First, ensure the required dependencies are installed in your Python environment. Run the following command:

%pip install --upgrade httpx httpx-sse PyJWT

Importing the Required Modules

After installation, import the necessary modules to your Python script:

from langchain_community.chat_models import ChatZhipuAI
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage

Setting Up Your API Key

Sign in to ZHIPU AI to obtain an API key for accessing the models.

import os

os.environ["ZHIPUAI_API_KEY"] = "zhipuai_api_key"
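If you prefer not to hard-code the key, a minimal sketch that prompts for it at runtime instead (assumes an interactive session; getpass is from the Python standard library):

import getpass

# Prompt for the key so it is not stored in the script
os.environ["ZHIPUAI_API_KEY"] = getpass.getpass("ZHIPU AI API key: ")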

Initialize the ZHIPU AI Chat Model

Here's how to initialize the chat model:

chat = ChatZhipuAI(
    model="glm-4",
    temperature=0.5,
)

Basic Usage

Invoke the model with a list of AI, system, and human messages like this:

messages = [
    AIMessage(content="Hi."),
    SystemMessage(content="Your role is a poet."),
    HumanMessage(content="Write a short poem about AI in four lines."),
]
response = chat.invoke(messages)
print(response.content)  # Displays the AI-generated poem
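Because GLM-4 supports multi-turn dialogue, you can continue the conversation by appending the model's reply and a new question to the same message list. A minimal sketch (the follow-up prompt is illustrative):

messages.append(response)  # The AIMessage returned above becomes part of the history
messages.append(HumanMessage(content="Now rewrite that poem as a haiku."))
follow_up = chat.invoke(messages)
print(follow_up.content)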

Advanced Features

Streaming Support

To print tokens as they are generated, enable streaming:

from langchain_core.callbacks.manager import CallbackManager
from langchain_core.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

streaming_chat = ChatZhipuAI(
    model="glm-4",
    temperature=0.5,
    streaming=True,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
)
streaming_chat.invoke(messages)
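Recent LangChain versions also expose a .stream() method on chat models, which yields content chunks directly without a callback handler. A minimal sketch of the same request:

for chunk in chat.stream(messages):
    print(chunk.content, end="", flush=True)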

Asynchronous Calls

For non-blocking calls, use the asynchronous approach:

async_chat = ChatZhipuAI(
    model="glm-4",
    temperature=0.5,
)
response = await async_chat.agenerate([messages])
print(response)
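Top-level await works in a notebook; in a plain Python script, wrap the call in an event loop. A minimal sketch using asyncio and the ainvoke convenience method:

import asyncio

async def main():
    response = await async_chat.ainvoke(messages)
    print(response.content)

asyncio.run(main())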

Using with Function Calls

GLM-4 supports function calling as well; use the following code to run a simple LangChain json_chat_agent.

os.environ["TAVILY_API_KEY"] = "tavily_api_key"
from langchain import hub
from langchain.agents import AgentExecutor, create_json_chat_agent
from langchain_community.tools.tavily_search import TavilySearchResults

tools = [TavilySearchResults(max_results=1)]
prompt = hub.pull("hwchase17/react-chat-json")
llm = ChatZhipuAI(temperature=0.01, model="glm-4")

agent = create_json_chat_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
agent=agent, tools=tools, verbose=True, handle_parsing_errors=True
)
agent_executor.invoke({"input": "what is LangChain?"})
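AgentExecutor.invoke returns a dictionary whose final answer is stored under the "output" key, so you can capture and print it explicitly. A brief usage sketch:

result = agent_executor.invoke({"input": "what is LangChain?"})
print(result["output"])  # The agent's final answer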
