BrowserlessLoader#
- class langchain_community.document_loaders.browserless.BrowserlessLoader(api_token: str, urls: str | List[str], text_content: bool = True)[source]#
Load webpages with the Browserless /content endpoint.
Initialize with an API token and the URLs to scrape.
Methods
- __init__(api_token, urls[, text_content]): Initialize with an API token and the URLs to scrape.
- alazy_load(): A lazy loader for Documents.
- aload(): Load data into Document objects.
- lazy_load(): Lazy load Documents from URLs.
- load(): Load data into Document objects.
- load_and_split([text_splitter]): Load Documents and split into chunks.
- Parameters:
api_token (str) – Browserless API token.
urls (str | List[str]) – URL or list of URLs to scrape.
text_content (bool) – Whether to return the text content of each page rather than the raw HTML. Defaults to True.
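A minimal usage sketch; the API token and URL below are placeholders, not working credentials:

```python
from langchain_community.document_loaders import BrowserlessLoader

# Placeholder token and URL for illustration only.
loader = BrowserlessLoader(
    api_token="YOUR_BROWSERLESS_API_TOKEN",
    urls=["https://example.com"],
    text_content=True,  # default: return extracted text rather than page markup
)

documents = loader.load()
print(documents[0].page_content[:200])
```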
- __init__(api_token: str, urls: str | List[str], text_content: bool = True)[source]#
Initialize with an API token and the URLs to scrape.
- Parameters:
api_token (str) – Browserless API token.
urls (str | List[str]) – URL or list of URLs to scrape.
text_content (bool) – Whether to return the text content of each page rather than the raw HTML. Defaults to True.
- async alazy_load() → AsyncIterator[Document]#
A lazy loader for Documents.
- Return type:
AsyncIterator[Document]
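A sketch of asynchronous iteration with alazy_load, assuming an async context; the token and URL are placeholders:

```python
import asyncio

from langchain_community.document_loaders import BrowserlessLoader


async def main() -> None:
    loader = BrowserlessLoader(
        api_token="YOUR_BROWSERLESS_API_TOKEN",  # placeholder
        urls=["https://example.com"],
    )
    # Documents are yielded one at a time as each URL is fetched.
    async for doc in loader.alazy_load():
        print(len(doc.page_content))


asyncio.run(main())
```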
- lazy_load() → Iterator[Document] [source]#
Lazy load Documents from URLs.
- Return type:
Iterator[Document]
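A sketch of synchronous lazy iteration over several URLs, which avoids holding every page in memory at once; the token and URLs are placeholders:

```python
from langchain_community.document_loaders import BrowserlessLoader

loader = BrowserlessLoader(
    api_token="YOUR_BROWSERLESS_API_TOKEN",  # placeholder
    urls=[
        "https://example.com",
        "https://example.org",
    ],
)

# Each page is fetched and yielded as its own Document.
for doc in loader.lazy_load():
    print(doc.metadata.get("source"), len(doc.page_content))
```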
- load_and_split(text_splitter: TextSplitter | None = None) → List[Document]#
Load Documents and split into chunks. Chunks are returned as Documents.
Do not override this method; it should be considered deprecated.
- Parameters:
text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
- Returns:
List of Documents.
- Return type:
List[Document]
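A sketch of load_and_split with an explicit splitter; the chunk size and overlap are arbitrary illustrative values:

```python
from langchain_community.document_loaders import BrowserlessLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = BrowserlessLoader(
    api_token="YOUR_BROWSERLESS_API_TOKEN",  # placeholder
    urls=["https://example.com"],
)

# Split each loaded page into ~1000-character chunks with a small overlap.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = loader.load_and_split(text_splitter=splitter)
print(f"{len(chunks)} chunks")
```

An equivalent pattern is to call splitter.split_documents(loader.load()) directly.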
Examples using BrowserlessLoader
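A hedged end-to-end sketch comparing the two text_content modes; with text_content=False the loader is expected to return the rendered page markup from the /content endpoint rather than extracted text (token and URL are placeholders):

```python
from langchain_community.document_loaders import BrowserlessLoader

API_TOKEN = "YOUR_BROWSERLESS_API_TOKEN"  # placeholder
URLS = ["https://example.com"]

# Extracted text content (the default behavior).
text_docs = BrowserlessLoader(api_token=API_TOKEN, urls=URLS, text_content=True).load()

# Rendered page markup instead of extracted text.
html_docs = BrowserlessLoader(api_token=API_TOKEN, urls=URLS, text_content=False).load()

print(len(text_docs[0].page_content), len(html_docs[0].page_content))
```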