WandbCallbackHandler#
- class langchain_community.callbacks.wandb_callback.WandbCallbackHandler(job_type: str | None = None, project: str | None = 'langchain_callback_demo', entity: str | None = None, tags: Sequence | None = None, group: str | None = None, name: str | None = None, notes: str | None = None, visualize: bool = False, complexity_metrics: bool = False, stream_logs: bool = False)[source]#
Callback Handler that logs to Weights and Biases.
- Parameters:
job_type (str) – The type of job.
project (str) – The project to log to.
entity (str) – The entity to log to.
tags (list) – The tags to log.
group (str) – The group to log to.
name (str) – The name of the run.
notes (str) – The notes to log.
visualize (bool) – Whether to visualize the run.
complexity_metrics (bool) – Whether to log complexity metrics.
stream_logs (bool) – Whether to stream callback actions to W&B
This handler intercepts each callback method it implements, formats the callback's input together with metadata about the state of the LLM run, and appends the result to both the corresponding {method}_records list and the overall action records. It then logs the records to Weights and Biases via run.log().
Initialize callback handler.
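Example (a minimal sketch, not part of the reference itself): the handler is constructed once and passed in the callbacks list of an LLM or chain. FakeListLLM is an assumption used only so the sketch runs without API keys; any model that accepts callbacks works. Requires the wandb package and a logged-in W&B session.

from langchain_community.callbacks.wandb_callback import WandbCallbackHandler
from langchain_community.llms.fake import FakeListLLM

# Constructing the handler starts a Weights & Biases run with these settings.
wandb_callback = WandbCallbackHandler(
    job_type="inference",
    project="langchain_callback_demo",
    name="llm",
    tags=["test"],
)

# FakeListLLM stands in for a real model here (an assumption for a
# self-contained sketch); any LLM that accepts a callbacks list will
# trigger this handler's on_llm_* methods.
llm = FakeListLLM(responses=["Hello!"], callbacks=[wandb_callback])
llm.invoke("Say hello")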
Attributes
always_verbose
Whether to call verbose callbacks even if verbose is False.
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_custom_event
Ignore custom event.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
Whether to raise an error if an exception occurs.
run_inline
Whether to run the callback inline.
Methods
__init__([job_type, project, entity, tags, ...]) Initialize callback handler.
flush_tracker([langchain_asset, reset, ...]) Flush the tracker and reset the session.
on_agent_action(action, **kwargs) Run on agent action.
on_agent_finish(finish, **kwargs) Run when agent ends running.
on_chain_end(outputs, **kwargs) Run when chain ends running.
on_chain_error(error, **kwargs) Run when chain errors.
on_chain_start(serialized, inputs, **kwargs) Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running.
on_custom_event(name, data, *, run_id[, ...]) Override to define a handler for a custom event.
on_llm_end(response, **kwargs) Run when LLM ends running.
on_llm_error(error, **kwargs) Run when LLM errors.
on_llm_new_token(token, **kwargs) Run when LLM generates a new token.
on_llm_start(serialized, prompts, **kwargs) Run when LLM starts.
on_retriever_end(documents, *, run_id[, ...]) Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...]) Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id) Run when the Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id]) Run on a retry event.
on_text(text, **kwargs) Run when agent is ending.
on_tool_end(output, **kwargs) Run when tool ends running.
on_tool_error(error, **kwargs) Run when tool errors.
on_tool_start(serialized, input_str, **kwargs) Run when tool starts running.
reset_callback_meta() Reset the callback metadata.
- __init__(job_type: str | None = None, project: str | None = 'langchain_callback_demo', entity: str | None = None, tags: Sequence | None = None, group: str | None = None, name: str | None = None, notes: str | None = None, visualize: bool = False, complexity_metrics: bool = False, stream_logs: bool = False) None [source]#
Initialize callback handler.
- Parameters:
job_type (str | None)
project (str | None)
entity (str | None)
tags (Sequence | None)
group (str | None)
name (str | None)
notes (str | None)
visualize (bool)
complexity_metrics (bool)
stream_logs (bool)
- Return type:
None
- flush_tracker(langchain_asset: Any = None, reset: bool = True, finish: bool = False, job_type: str | None = None, project: str | None = None, entity: str | None = None, tags: Sequence | None = None, group: str | None = None, name: str | None = None, notes: str | None = None, visualize: bool | None = None, complexity_metrics: bool | None = None) None [source]#
Flush the tracker and reset the session.
- Parameters:
langchain_asset (Any) – The langchain asset to save.
reset (bool) – Whether to reset the session.
finish (bool) – Whether to finish the run.
job_type (str | None) – The job type.
project (str | None) – The project.
entity (str | None) – The entity.
tags (Sequence | None) – The tags.
group (str | None) – The group.
name (str | None) – The name.
notes (str | None) – The notes.
visualize (bool | None) – Whether to visualize.
complexity_metrics (bool | None) – Whether to compute complexity metrics.
- Returns:
None
- Return type:
None
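A minimal usage sketch, continuing the constructor example above (wandb_callback and llm are assumed to be defined as shown there): after a batch of generations, flush the collected records and the asset to W&B, then finish the run when done. The session name used below is an arbitrary example.

# Log the records gathered so far together with the llm asset under a
# named session, then reset the callback metadata for the next session.
llm.invoke("Tell me a joke")
wandb_callback.flush_tracker(llm, name="simple_llm")

# When finished, close out the W&B run instead of resetting the session.
wandb_callback.flush_tracker(llm, reset=False, finish=True)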
- get_custom_callback_meta() Dict[str, Any] #
- Return type:
Dict[str, Any]
- on_agent_action(action: AgentAction, **kwargs: Any) Any [source]#
Run on agent action.
- Parameters:
action (AgentAction)
kwargs (Any)
- Return type:
Any
- on_agent_finish(finish: AgentFinish, **kwargs: Any) None [source]#
Run when agent ends running.
- Parameters:
finish (AgentFinish)
kwargs (Any)
- Return type:
None
- on_chain_end(outputs: Dict[str, Any], **kwargs: Any) None [source]#
Run when chain ends running.
- Parameters:
outputs (Dict[str, Any])
kwargs (Any)
- Return type:
None
- on_chain_error(error: BaseException, **kwargs: Any) None [source]#
Run when chain errors.
- Parameters:
error (BaseException)
kwargs (Any)
- Return type:
None
- on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) None [source]#
Run when chain starts running.
- Parameters:
serialized (Dict[str, Any])
inputs (Dict[str, Any])
kwargs (Any)
- Return type:
None
- on_chat_model_start(serialized: dict[str, Any], messages: list[list[BaseMessage]], *, run_id: UUID, parent_run_id: UUID | None = None, tags: list[str] | None = None, metadata: dict[str, Any] | None = None, **kwargs: Any) Any #
Run when a chat model starts running.
- ATTENTION: This method is called for chat models. If you’re implementing
a handler for a non-chat model, you should use on_llm_start instead.
- Parameters:
serialized (Dict[str, Any]) – The serialized chat model.
messages (List[List[BaseMessage]]) – The messages.
run_id (UUID) – The run ID. This is the ID of the current run.
parent_run_id (UUID) – The parent run ID. This is the ID of the parent run.
tags (Optional[List[str]]) – The tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
kwargs (Any) – Additional keyword arguments.
- Return type:
Any
- on_custom_event(name: str, data: Any, *, run_id: UUID, tags: list[str] | None = None, metadata: dict[str, Any] | None = None, **kwargs: Any) Any #
Override to define a handler for a custom event.
- Parameters:
name (str) – The name of the custom event.
data (Any) – The data for the custom event. Format will match the format specified by the user.
run_id (UUID) – The ID of the run.
tags (list[str] | None) – The tags associated with the custom event (includes inherited tags).
metadata (dict[str, Any] | None) – The metadata associated with the custom event (includes inherited metadata).
kwargs (Any)
- Return type:
Any
Added in version 0.2.15.
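WandbCallbackHandler inherits this hook without overriding it; a subclass can implement it to receive events emitted with langchain_core's dispatch_custom_event (available since langchain-core 0.2.15). The sketch below is illustrative only: MyWandbHandler, the event name, and the payload are assumptions, and BaseCallbackHandler is subclassed directly to keep it self-contained.

from typing import Any, Optional
from uuid import UUID

from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.callbacks.manager import dispatch_custom_event
from langchain_core.runnables import RunnableLambda


class MyWandbHandler(BaseCallbackHandler):
    def on_custom_event(
        self,
        name: str,
        data: Any,
        *,
        run_id: UUID,
        tags: Optional[list[str]] = None,
        metadata: Optional[dict[str, Any]] = None,
        **kwargs: Any,
    ) -> None:
        # A real subclass might forward the event to the active W&B run.
        print(f"custom event {name!r}: {data}")


def step(text: str) -> str:
    # Events dispatched inside a runnable reach the handlers attached to
    # that invocation's config.
    dispatch_custom_event("my_event", {"chars": len(text)})
    return text.upper()


RunnableLambda(step).invoke("hello", config={"callbacks": [MyWandbHandler()]})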
- on_llm_end(response: LLMResult, **kwargs: Any) None [source]#
Run when LLM ends running.
- Parameters:
response (LLMResult)
kwargs (Any)
- Return type:
None
- on_llm_error(error: BaseException, **kwargs: Any) None [source]#
Run when LLM errors.
- Parameters:
error (BaseException)
kwargs (Any)
- Return type:
None
- on_llm_new_token(token: str, **kwargs: Any) None [source]#
Run when LLM generates a new token.
- Parameters:
token (str)
kwargs (Any)
- Return type:
None
- on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) None [source]#
Run when LLM starts.
- Parameters:
serialized (Dict[str, Any])
prompts (List[str])
kwargs (Any)
- Return type:
None
- on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: UUID | None = None, **kwargs: Any) Any #
Run when Retriever ends running.
- Parameters:
documents (Sequence[Document]) – The documents retrieved.
run_id (UUID) – The run ID. This is the ID of the current run.
parent_run_id (UUID) – The parent run ID. This is the ID of the parent run.
kwargs (Any) – Additional keyword arguments.
- Return type:
Any
- on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: UUID | None = None, **kwargs: Any) Any #
Run when Retriever errors.
- Parameters:
error (BaseException) – The error that occurred.
run_id (UUID) – The run ID. This is the ID of the current run.
parent_run_id (UUID) – The parent run ID. This is the ID of the parent run.
kwargs (Any) – Additional keyword arguments.
- Return type:
Any
- on_retriever_start(serialized: dict[str, Any], query: str, *, run_id: UUID, parent_run_id: UUID | None = None, tags: list[str] | None = None, metadata: dict[str, Any] | None = None, **kwargs: Any) Any #
Run when the Retriever starts running.
- Parameters:
serialized (Dict[str, Any]) – The serialized Retriever.
query (str) – The query.
run_id (UUID) – The run ID. This is the ID of the current run.
parent_run_id (UUID) – The parent run ID. This is the ID of the parent run.
tags (Optional[List[str]]) – The tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
kwargs (Any) – Additional keyword arguments.
- Return type:
Any
- on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: UUID | None = None, **kwargs: Any) Any #
Run on a retry event.
- Parameters:
retry_state (RetryCallState) – The retry state.
run_id (UUID) – The run ID. This is the ID of the current run.
parent_run_id (UUID) – The parent run ID. This is the ID of the parent run.
kwargs (Any) – Additional keyword arguments.
- Return type:
Any
- on_text(text: str, **kwargs: Any) None [source]#
Run when agent is ending.
- Parameters:
text (str)
kwargs (Any)
- Return type:
None
- on_tool_end(output: Any, **kwargs: Any) None [source]#
Run when tool ends running.
- Parameters:
output (Any)
kwargs (Any)
- Return type:
None
- on_tool_error(error: BaseException, **kwargs: Any) None [source]#
Run when tool errors.
- Parameters:
error (BaseException)
kwargs (Any)
- Return type:
None
- on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) None [source]#
Run when tool starts running.
- Parameters:
serialized (Dict[str, Any])
input_str (str)
kwargs (Any)
- Return type:
None
- reset_callback_meta() None #
Reset the callback metadata.
- Return type:
None
Examples using WandbCallbackHandler