Source code for langchain_experimental.plan_and_execute.planners.chat_planner
import re

from langchain.chains import LLMChain
from langchain_core.language_models import BaseLanguageModel
from langchain_core.messages import SystemMessage
from langchain_core.prompts import ChatPromptTemplate, HumanMessagePromptTemplate

from langchain_experimental.plan_and_execute.planners.base import LLMPlanner
from langchain_experimental.plan_and_execute.schema import (
    Plan,
    PlanOutputParser,
    Step,
)

SYSTEM_PROMPT = (
    "Let's first understand the problem and devise a plan to solve the problem."
    " Please output the plan starting with the header 'Plan:' "
    "and then followed by a numbered list of steps. "
    "Please make the plan the minimum number of steps required "
    "to accurately complete the task. If the task is a question, "
    "the final step should almost always be 'Given the above steps taken, "
    "please respond to the users original question'. "
    "At the end of your plan, say '<END_OF_PLAN>'"
)


class PlanningOutputParser(PlanOutputParser):
    """Planning output parser."""

    def parse(self, text: str) -> Plan:
        # Split the model's numbered plan ("1. ...", "2. ...") into Steps,
        # discarding everything before the first numbered item.
        steps = [Step(value=v) for v in re.split(r"\n\s*\d+\. ", text)[1:]]
        return Plan(steps=steps)
def load_chat_planner(
    llm: BaseLanguageModel, system_prompt: str = SYSTEM_PROMPT
) -> LLMPlanner:
    """Load a chat planner.

    Args:
        llm: Language model.
        system_prompt: System prompt.

    Returns:
        LLMPlanner
    """
    prompt_template = ChatPromptTemplate.from_messages(
        [
            SystemMessage(content=system_prompt),
            HumanMessagePromptTemplate.from_template("{input}"),
        ]
    )
    llm_chain = LLMChain(llm=llm, prompt=prompt_template)
    return LLMPlanner(
        llm_chain=llm_chain,
        output_parser=PlanningOutputParser(),
        stop=["<END_OF_PLAN>"],
    )
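The parsing step the planner relies on can be sketched standalone, without any LangChain dependencies: the model's reply is expected to start with a "Plan:" header followed by a numbered list, and the parser's regex split recovers one step per numbered item. The sample text and helper name below are illustrative, not part of the library.

```python
import re

# A hypothetical model reply in the format SYSTEM_PROMPT asks for.
SAMPLE = (
    "Plan:\n"
    "1. Search for the population of Paris.\n"
    "2. Search for the population of London.\n"
    "3. Given the above steps taken, please respond to the users original question.\n"
)


def split_plan(text: str) -> list[str]:
    # Mirrors the regex in PlanningOutputParser.parse: split on
    # numbered-list markers ("\n1. ", "\n2. ", ...) and drop the
    # leading "Plan:" header, which lands in element 0 of the split.
    return [s.strip() for s in re.split(r"\n\s*\d+\. ", text)[1:]]


steps = split_plan(SAMPLE)
# steps[0] == "Search for the population of Paris."
```

Because the split keys only on the `\n<digits>. ` pattern, multi-line step bodies stay attached to their step, and any preamble before the first numbered item is silently discarded.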