ConversationSummaryBufferMemory#
- class langchain.memory.summary_buffer.ConversationSummaryBufferMemory[source]#
Bases: BaseChatMemory, SummarizerMixin
Buffer with summarizer for storing conversation memory. It keeps a buffer of recent interactions and, once the buffer exceeds max_token_limit, compiles the oldest interactions into a running summary rather than discarding them. A usage sketch follows the parameter list below.
- param ai_prefix: str = 'AI'#
- param chat_memory: BaseChatMessageHistory [Optional]#
- param human_prefix: str = 'Human'#
- param input_key: str | None = None#
- param llm: BaseLanguageModel [Required]#
- param max_token_limit: int = 2000#
- param memory_key: str = 'history'#
- param moving_summary_buffer: str = ''#
- param output_key: str | None = None#
- param prompt: BasePromptTemplate = PromptTemplate(input_variables=['new_lines', 'summary'], template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:')#
- param return_messages: bool = False#
- param summary_message_cls: Type[BaseMessage] = <class 'langchain_core.messages.system.SystemMessage'>#
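Putting the parameters together, a minimal construction sketch; ChatOpenAI from langchain_openai is an assumption (it requires an OPENAI_API_KEY in the environment), and any BaseLanguageModel works in its place:

```python
from langchain.memory import ConversationSummaryBufferMemory
from langchain_openai import ChatOpenAI  # assumed provider; any BaseLanguageModel works

llm = ChatOpenAI(temperature=0)
memory = ConversationSummaryBufferMemory(
    llm=llm,               # required: model used to generate summaries
    max_token_limit=100,   # summarize old turns once the buffer exceeds this
    memory_key="history",  # key under which load_memory_variables returns the buffer
)
```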
- async abuffer() → str | List[BaseMessage] [source]#
Asynchronously return the memory buffer.
- Return type:
str | List[BaseMessage]
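A minimal async sketch of reading the buffer, under the same ChatOpenAI assumption as above:

```python
import asyncio
from langchain.memory import ConversationSummaryBufferMemory
from langchain_openai import ChatOpenAI  # assumed provider

async def main() -> None:
    memory = ConversationSummaryBufferMemory(llm=ChatOpenAI(), max_token_limit=100)
    await memory.asave_context({"input": "hi"}, {"output": "whats up"})
    buf = await memory.abuffer()  # str here; List[BaseMessage] if return_messages=True
    print(buf)

asyncio.run(main())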
- async aload_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any] [source]#
Asynchronously return key-value pairs given the text input to the chain.
- Parameters:
inputs (Dict[str, Any]) – The text inputs to the chain.
- Return type:
Dict[str, Any]
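For example, a sketch of the async load path; the empty dict suffices because no other inputs are needed to read memory:

```python
import asyncio
from langchain.memory import ConversationSummaryBufferMemory
from langchain_openai import ChatOpenAI  # assumed provider

async def main() -> None:
    memory = ConversationSummaryBufferMemory(llm=ChatOpenAI(), max_token_limit=100)
    await memory.asave_context({"input": "hi"}, {"output": "whats up"})
    mem_vars = await memory.aload_memory_variables({})
    print(mem_vars["history"])  # keyed by memory_key ('history' by default)

asyncio.run(main())
```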
- async apredict_new_summary(messages: List[BaseMessage], existing_summary: str) → str #
Asynchronously predict a new summary that folds the given messages into the existing summary.
- Parameters:
messages (List[BaseMessage]) – New messages to incorporate into the summary.
existing_summary (str) – The running summary produced so far.
- Return type:
str
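A sketch of summarizing the stored messages asynchronously, again assuming ChatOpenAI:

```python
import asyncio
from langchain.memory import ConversationSummaryBufferMemory
from langchain_openai import ChatOpenAI  # assumed provider

async def main() -> None:
    memory = ConversationSummaryBufferMemory(llm=ChatOpenAI(), max_token_limit=100)
    await memory.asave_context({"input": "hi"}, {"output": "whats up"})
    # Pass an empty existing summary to summarize from scratch
    summary = await memory.apredict_new_summary(memory.chat_memory.messages, "")
    print(summary)

asyncio.run(main())
```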
- async aprune() → None [source]#
Asynchronously prune the buffer if it exceeds the max token limit.
- Return type:
None
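A sketch that forces pruning with a deliberately tiny token limit; the pruned turns are summarized into moving_summary_buffer rather than dropped:

```python
import asyncio
from langchain.memory import ConversationSummaryBufferMemory
from langchain_openai import ChatOpenAI  # assumed provider

async def main() -> None:
    # Tiny limit so pruning triggers after a single exchange
    memory = ConversationSummaryBufferMemory(llm=ChatOpenAI(), max_token_limit=10)
    await memory.asave_context(
        {"input": "hi, tell me about yourself"},
        {"output": "I am an AI assistant built on a language model."},
    )
    await memory.aprune()  # overflow is summarized, not discarded
    print(memory.moving_summary_buffer)

asyncio.run(main())
```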
- async asave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None [source]#
Asynchronously save context from this conversation to buffer.
- Parameters:
inputs (Dict[str, Any]) – The inputs for this turn of the conversation.
outputs (Dict[str, str]) – The outputs for this turn of the conversation.
- Return type:
None
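A sketch of the async save path; "input" and "output" are the single default keys, and input_key/output_key disambiguate chains with several:

```python
import asyncio
from langchain.memory import ConversationSummaryBufferMemory
from langchain_openai import ChatOpenAI  # assumed provider

async def main() -> None:
    memory = ConversationSummaryBufferMemory(llm=ChatOpenAI(), max_token_limit=100)
    # Each call appends one Human/AI message pair to the buffer
    await memory.asave_context({"input": "not much, you?"}, {"output": "not much"})
    print(await memory.abuffer())

asyncio.run(main())
```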
- load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any] [source]#
Return history buffer.
- Parameters:
inputs (Dict[str, Any]) – The text inputs to the chain.
- Return type:
Dict[str, Any]
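The canonical synchronous round trip, assuming the same ChatOpenAI setup:

```python
from langchain.memory import ConversationSummaryBufferMemory
from langchain_openai import ChatOpenAI  # assumed provider

memory = ConversationSummaryBufferMemory(llm=ChatOpenAI(), max_token_limit=100)
memory.save_context({"input": "hi"}, {"output": "whats up"})
print(memory.load_memory_variables({}))
# e.g. {'history': 'Human: hi\nAI: whats up'}
```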
- predict_new_summary(messages: List[BaseMessage], existing_summary: str) → str #
Predict a new summary that folds the given messages into the existing summary.
- Parameters:
messages (List[BaseMessage]) – New messages to incorporate into the summary.
existing_summary (str) – The running summary produced so far.
- Return type:
str
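A synchronous summarization sketch over the stored messages:

```python
from langchain.memory import ConversationSummaryBufferMemory
from langchain_openai import ChatOpenAI  # assumed provider

memory = ConversationSummaryBufferMemory(llm=ChatOpenAI(), max_token_limit=100)
memory.save_context({"input": "hi"}, {"output": "whats up"})
previous_summary = ""  # start from scratch
print(memory.predict_new_summary(memory.chat_memory.messages, previous_summary))
```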
- save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None [source]#
Save context from this conversation to buffer.
- Parameters:
inputs (Dict[str, Any]) – The inputs for this turn of the conversation.
outputs (Dict[str, str]) – The outputs for this turn of the conversation.
- Return type:
None
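A sketch of saving one turn synchronously; pruning runs after each save, so the buffer never stays over the limit:

```python
from langchain.memory import ConversationSummaryBufferMemory
from langchain_openai import ChatOpenAI  # assumed provider

memory = ConversationSummaryBufferMemory(llm=ChatOpenAI(), max_token_limit=100)
memory.save_context(
    {"input": "hi"},         # stored as a HumanMessage
    {"output": "whats up"},  # stored as an AIMessage
)
```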
- property buffer: str | List[BaseMessage]#
String buffer of memory (a List[BaseMessage] when return_messages is True).
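A sketch showing how return_messages switches the buffer's type:

```python
from langchain.memory import ConversationSummaryBufferMemory
from langchain_openai import ChatOpenAI  # assumed provider

memory = ConversationSummaryBufferMemory(
    llm=ChatOpenAI(), max_token_limit=100, return_messages=True
)
memory.save_context({"input": "hi"}, {"output": "whats up"})
print(memory.buffer)  # List[BaseMessage] here; a plain string by default
```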