CaptionProcessor

class langchain_experimental.video_captioning.services.caption_service.CaptionProcessor(llm: BaseLanguageModel, verbose: bool = True, similarity_threshold: int = 80, use_unclustered_models: bool = False)

Methods

__init__(llm[, verbose, ...])
process(video_models[, run_manager])

Parameters:
    llm (BaseLanguageModel)
    verbose (bool)
    similarity_threshold (int)
    use_unclustered_models (bool)

__init__(llm: BaseLanguageModel, verbose: bool = True, similarity_threshold: int = 80, use_unclustered_models: bool = False) → None

Parameters:
    llm (BaseLanguageModel)
    verbose (bool)
    similarity_threshold (int)
    use_unclustered_models (bool)

Return type:
    None

process(video_models: List[VideoModel], run_manager: CallbackManagerForChainRun | None = None) → List[VideoModel]

Parameters:
    video_models (List[VideoModel])
    run_manager (CallbackManagerForChainRun | None)

Return type:
    List[VideoModel]
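
Example

A minimal usage sketch: construct a CaptionProcessor around a chat model and pass it a list of per-segment VideoModel captions to condense. The VideoModel import path and constructor arguments (start time, end time, image description), the millisecond timestamps, and the use of ChatOpenAI as the BaseLanguageModel are assumptions for illustration and are not specified by the reference above.

```python
from langchain_openai import ChatOpenAI

# Assumed import path and constructor for VideoModel; verify against your
# installed version of langchain_experimental.
from langchain_experimental.video_captioning.models import VideoModel
from langchain_experimental.video_captioning.services.caption_service import (
    CaptionProcessor,
)

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Per-segment captions produced by an earlier image-captioning step.
# Timestamps are assumed to be in milliseconds.
video_models = [
    VideoModel(0, 2000, "A dog runs across a grassy field"),
    VideoModel(2000, 4000, "A dog running on grass"),
    VideoModel(4000, 6000, "A person throws a frisbee"),
]

# similarity_threshold (0-100) controls how aggressively near-duplicate
# captions are merged; use_unclustered_models keeps captions that did not
# join any cluster.
processor = CaptionProcessor(
    llm=llm,
    verbose=True,
    similarity_threshold=80,
    use_unclustered_models=False,
)

condensed = processor.process(video_models)
for vm in condensed:
    print(vm)
```

With a threshold of 80, the first two near-duplicate captions would typically be merged into a single segment spanning their combined time range, while the distinct third caption is kept separate.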