JsonSchemaEvaluator#
- class langchain.evaluation.parsing.json_schema.JsonSchemaEvaluator(**kwargs: Any)[source]#
An evaluator that validates a JSON prediction against a JSON schema reference.
This evaluator checks whether a given JSON prediction conforms to the provided JSON schema. If the prediction is valid, the score is True (no errors); otherwise, the score is False (a validation error occurred).
- requires_input#
Whether the evaluator requires input.
- Type:
bool
- requires_reference#
Whether the evaluator requires a reference.
- Type:
bool
- evaluation_name#
The name of the evaluation.
- Type:
str
Examples

evaluator = JsonSchemaEvaluator()
result = evaluator.evaluate_strings(
    prediction='{"name": "John", "age": 30}',
    reference={
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age": {"type": "integer"},
        },
    },
)
assert result["score"] is not None
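The example above only asserts that a score is returned. As a complementary, minimal sketch (assuming the jsonschema package is installed), an invalid prediction produces a False score, as described in the class summary:

from langchain.evaluation.parsing.json_schema import JsonSchemaEvaluator

evaluator = JsonSchemaEvaluator()
result = evaluator.evaluate_strings(
    # "age" is a string here, violating the integer constraint in the schema.
    prediction='{"name": "John", "age": "thirty"}',
    reference={
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age": {"type": "integer"},
        },
    },
)
assert not result["score"]  # score is False for an invalid prediction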
Initializes the JsonSchemaEvaluator.
- Parameters:
kwargs (Any) – Additional keyword arguments.
- Raises:
ImportError – If the jsonschema package is not installed.
Attributes

evaluation_name – Returns the name of the evaluation.
requires_input – Returns whether the evaluator requires input.
requires_reference – Returns whether the evaluator requires a reference.
Methods

__init__(**kwargs) – Initializes the JsonSchemaEvaluator.
aevaluate_strings(*, prediction[, ...]) – Asynchronously evaluate Chain or LLM output, based on optional input and label.
evaluate_strings(*, prediction[, reference, ...]) – Evaluate Chain or LLM output, based on optional input and label.
- __init__(**kwargs: Any) → None[source]#
Initializes the JsonSchemaEvaluator.
- Parameters:
kwargs (Any) – Additional keyword arguments.
- Raises:
ImportError – If the jsonschema package is not installed.
- Return type:
None
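Because construction raises ImportError when jsonschema is missing, instantiation can be guarded up front. A minimal sketch (the error message is illustrative):

from langchain.evaluation.parsing.json_schema import JsonSchemaEvaluator

try:
    evaluator = JsonSchemaEvaluator()
except ImportError:
    # The evaluator depends on the optional jsonschema package.
    raise SystemExit("Please install it, e.g. with: pip install jsonschema")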
- async aevaluate_strings(*, prediction: str, reference: str | None = None, input: str | None = None, **kwargs: Any) → dict#
Asynchronously evaluate Chain or LLM output, based on optional input and label.
- Parameters:
prediction (str) – The LLM or chain prediction to evaluate.
reference (Optional[str], optional) – The reference label to evaluate against.
input (Optional[str], optional) – The input to consider during evaluation.
kwargs (Any) – Additional keyword arguments, including callbacks, tags, etc.
- Returns:
The evaluation results containing the score or value.
- Return type:
dict
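A minimal sketch of asynchronous use (assuming the jsonschema package is installed); the reference is passed here as a JSON string, matching the annotated str type:

import asyncio

from langchain.evaluation.parsing.json_schema import JsonSchemaEvaluator

async def main() -> None:
    evaluator = JsonSchemaEvaluator()
    result = await evaluator.aevaluate_strings(
        prediction='{"age": 30}',
        reference='{"type": "object", "properties": {"age": {"type": "integer"}}}',
    )
    print(result["score"])

asyncio.run(main())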
- evaluate_strings(*, prediction: str, reference: str | None = None, input: str | None = None, **kwargs: Any) → dict#
Evaluate Chain or LLM output, based on optional input and label.
- Parameters:
prediction (str) – The LLM or chain prediction to evaluate.
reference (Optional[str], optional) – The reference label to evaluate against.
input (Optional[str], optional) – The input to consider during evaluation.
kwargs (Any) – Additional keyword arguments, including callbacks, tags, etc.
- Returns:
The evaluation results containing the score or value.
- Return type:
dict
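A minimal synchronous sketch (again assuming the jsonschema package is installed) showing how the score in the returned dict can drive a simple pass/fail check:

from langchain.evaluation.parsing.json_schema import JsonSchemaEvaluator

evaluator = JsonSchemaEvaluator()
result = evaluator.evaluate_strings(
    prediction='{"count": 3}',
    reference={"type": "object", "properties": {"count": {"type": "integer"}}},
)
if result["score"]:
    print("prediction conforms to the schema")
else:
    print("prediction violates the schema")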