create_pbi_chat_agent#
- langchain_community.agent_toolkits.powerbi.chat_base.create_pbi_chat_agent(llm: BaseChatModel, toolkit: PowerBIToolkit | None = None, powerbi: PowerBIDataset | None = None, callback_manager: BaseCallbackManager | None = None, output_parser: AgentOutputParser | None = None, prefix: str = 'Assistant is a large language model built to help users interact with a PowerBI Dataset.\n\nAssistant should try to create a correct and complete answer to the question from the user. If the user asks a question not related to the dataset it should return "This does not appear to be part of this dataset." as the answer. The user might make a mistake with the spelling of certain values, if you think that is the case, ask the user to confirm the spelling of the value and then run the query again. Unless the user specifies a specific number of examples they wish to obtain, and the results are too large, limit your query to at most {top_k} results, but make it clear when answering which field was used for the filtering. The user has access to these tables: {{tables}}.\n\nThe answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. \n', suffix: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}\n", examples: str | None = None, input_variables: List[str] | None = None, memory: BaseChatMemory | None = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Dict[str, Any] | None = None, **kwargs: Any) → AgentExecutor [source]#
Construct a Power BI agent from a Chat LLM and tools.
If you supply only a toolkit and no Power BI dataset, the same LLM is used for both the agent and the toolkit.
- Parameters:
llm (BaseChatModel) – The language model to use.
toolkit (Optional[PowerBIToolkit]) – Optional. The Power BI toolkit. Default is None.
powerbi (Optional[PowerBIDataset]) – Optional. The Power BI dataset. Default is None.
callback_manager (Optional[BaseCallbackManager]) – Optional. The callback manager. Default is None.
output_parser (Optional[AgentOutputParser]) – Optional. The output parser. Default is None.
prefix (str) – Optional. The prefix for the prompt. Default is POWERBI_CHAT_PREFIX.
suffix (str) – Optional. The suffix for the prompt. Default is POWERBI_CHAT_SUFFIX.
examples (Optional[str]) – Optional. The examples for the prompt. Default is None.
input_variables (Optional[List[str]]) – Optional. The input variables for the prompt. Default is None.
memory (Optional[BaseChatMemory]) – Optional. The memory. Default is None.
top_k (int) – Optional. The maximum number of results to request per query; fills the {top_k} placeholder in the prompt. Default is 10.
verbose (bool) – Optional. Whether to print verbose output. Default is False.
agent_executor_kwargs (Optional[Dict[str, Any]]) – Optional. The agent executor kwargs. Default is None.
kwargs (Any) – Additional keyword arguments passed to the agent.
- Returns:
The agent executor.
- Return type:
AgentExecutor
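
A minimal usage sketch, not part of the reference above: the dataset ID, table names, Azure AD token, and chat model are placeholder assumptions; substitute values for your own workspace and model provider.

```python
from langchain_community.agent_toolkits import PowerBIToolkit
from langchain_community.agent_toolkits.powerbi.chat_base import create_pbi_chat_agent
from langchain_community.utilities.powerbi import PowerBIDataset
from langchain_openai import ChatOpenAI  # any BaseChatModel works; ChatOpenAI is an assumption here

# Placeholder dataset: supply your own dataset ID, table names, and a token
# (or Azure credential) with read access to the dataset.
dataset = PowerBIDataset(
    dataset_id="<dataset-id>",
    table_names=["Sales", "Customers"],
    token="<azure-ad-token>",
)

llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Option 1: pass the dataset directly; a PowerBIToolkit is built internally
# using the same LLM as the agent.
agent_executor = create_pbi_chat_agent(llm=llm, powerbi=dataset, verbose=True)

# Option 2: build the toolkit yourself (e.g. to customize it) and pass it in.
toolkit = PowerBIToolkit(powerbi=dataset, llm=llm)
agent_executor = create_pbi_chat_agent(llm=llm, toolkit=toolkit, top_k=10)

agent_executor.invoke({"input": "How many rows are in the Sales table?"})
```

Note that at least one of toolkit or powerbi must be supplied for the agent to have tools to work with.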