OpenAIWhisperParser
class langchain_community.document_loaders.parsers.audio.OpenAIWhisperParser(
    api_key: str | None = None,
    *,
    chunk_duration_threshold: float = 0.1,
    base_url: str | None = None,
    language: str | None = None,
    prompt: str | None = None,
    response_format: Literal['json', 'text', 'srt', 'verbose_json', 'vtt'] | None = None,
    temperature: float | None = None,
    model: str = 'whisper-1',
)
Transcribe and parse audio files.
Audio transcription is performed with the OpenAI Whisper model.
Parameters:
api_key (str | None) – OpenAI API key
chunk_duration_threshold (float) – Minimum duration of a chunk in seconds. NOTE: According to the OpenAI API, the chunk duration should be at least 0.1 seconds. If the chunk duration is less than or equal to the threshold, it will be skipped.
base_url (str | None)
language (str | None)
prompt (str | None)
response_format (Literal['json', 'text', 'srt', 'verbose_json', 'vtt'] | None)
temperature (float | None)
model (str)
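A minimal construction sketch, assuming the values shown are illustrative choices rather than defaults (only model='whisper-1' is the default); if api_key is omitted, the underlying OpenAI client is assumed to fall back to the OPENAI_API_KEY environment variable:

```python
from langchain_community.document_loaders.parsers.audio import OpenAIWhisperParser

# Illustrative construction; all option values below are assumptions for the example.
parser = OpenAIWhisperParser(
    api_key="sk-...",        # or omit and rely on OPENAI_API_KEY (assumed client default)
    language="en",           # hint for the transcription language
    response_format="text",  # one of 'json', 'text', 'srt', 'verbose_json', 'vtt'
    temperature=0.0,         # lower values make transcription more deterministic
    model="whisper-1",
)
```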
Methods

__init__([api_key, ...])
lazy_parse(blob)
    Lazily parse the blob.
parse(blob)
    Eagerly parse the blob into a document or documents.
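A usage sketch of the two parsing methods, assuming a local audio file at ./example.mp3 (a hypothetical path) and an OPENAI_API_KEY environment variable; note that chunking the audio before upload relies on the pydub package being installed:

```python
from langchain_core.documents.base import Blob
from langchain_community.document_loaders.parsers.audio import OpenAIWhisperParser

parser = OpenAIWhisperParser()  # assumes OPENAI_API_KEY is set in the environment

# Wrap the audio file in a Blob and parse it eagerly into Document objects.
blob = Blob.from_path("./example.mp3")  # hypothetical path
docs = parser.parse(blob)

# lazy_parse returns a generator, useful when processing many audio files.
for doc in parser.lazy_parse(blob):
    print(doc.page_content[:100])
```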
__init__(
    api_key: str | None = None,
    *,
    chunk_duration_threshold: float = 0.1,
    base_url: str | None = None,
    language: str | None = None,
    prompt: str | None = None,
    response_format: Literal['json', 'text', 'srt', 'verbose_json', 'vtt'] | None = None,
    temperature: float | None = None,
    model: str = 'whisper-1',
)

Parameters:
api_key (str | None)
chunk_duration_threshold (float)
base_url (str | None)
language (str | None)
prompt (str | None)
response_format (Literal['json', 'text', 'srt', 'verbose_json', 'vtt'] | None)
temperature (float | None)
model (str)
Examples using OpenAIWhisperParser
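A common pattern, sketched here with a hypothetical ./audio directory, is to pair the parser with GenericLoader so every matching audio file on disk is transcribed into Document objects:

```python
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers.audio import OpenAIWhisperParser

# Transcribe every .mp3 file under ./audio (hypothetical directory).
loader = GenericLoader.from_filesystem(
    "./audio",
    glob="*.mp3",
    parser=OpenAIWhisperParser(),
)
docs = loader.load()  # one Document per transcribed audio chunk
```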