generate_from_stream

langchain_core.language_models.chat_models.generate_from_stream(stream: Iterator[ChatGenerationChunk]) → ChatResult

Aggregate a stream of ChatGenerationChunk objects into a single ChatResult.

Parameters:
  stream (Iterator[ChatGenerationChunk]) – Iterator of ChatGenerationChunk to aggregate.

Returns:
  Chat result.

Return type:
  ChatResult
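
A minimal sketch of how this helper can be used, assuming langchain_core is installed; the chunk contents below are invented for illustration. In practice the chunks would come from a chat model's streaming path, and a chat model implementation can delegate its non-streaming generation to this helper to merge the streamed chunks into one result.

from langchain_core.language_models.chat_models import generate_from_stream
from langchain_core.messages import AIMessageChunk
from langchain_core.outputs import ChatGenerationChunk

# Hand-built stream of chunks; normally these are yielded by a model's
# streaming implementation.
chunks = iter(
    [
        ChatGenerationChunk(message=AIMessageChunk(content="Hello")),
        ChatGenerationChunk(message=AIMessageChunk(content=", world")),
        ChatGenerationChunk(message=AIMessageChunk(content="!")),
    ]
)

# Consume the stream and merge the chunks into a single generation.
result = generate_from_stream(chunks)

print(result.generations[0].message.content)  # "Hello, world!"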
