BrowserlessLoader#

class langchain_community.document_loaders.browserless.BrowserlessLoader(api_token: str, urls: str | List[str], text_content: bool = True)[source]#

Load webpages with the Browserless /content endpoint.

Initialize with an API token and the URLs to scrape.

Attributes

Methods

__init__(api_token, urls[, text_content])

Initialize with an API token and the URLs to scrape.

alazy_load()

A lazy loader for Documents.

aload()

Load data into Document objects.

lazy_load()

Lazy load Documents from URLs.

load()

Load data into Document objects.

load_and_split([text_splitter])

Load Documents and split into chunks.

Parameters:
  • api_token (str)

  • urls (str | List[str])

  • text_content (bool)

__init__(api_token: str, urls: str | List[str], text_content: bool = True)[source]#

Initialize with an API token and the URLs to scrape.

Parameters:
  • api_token (str) – Browserless API token.

  • urls (str | List[str]) – A single URL or a list of URLs to scrape.

  • text_content (bool) – Whether to return the page's text content rather than raw HTML. Defaults to True.

async alazy_load() β†’ AsyncIterator[Document]#

A lazy loader for Documents.

Return type:

AsyncIterator[Document]

async aload() β†’ list[Document]#

Load data into Document objects.

Return type:

list[Document]

lazy_load() β†’ Iterator[Document][source]#

Lazy load Documents from URLs.

Return type:

Iterator[Document]

load() β†’ list[Document]#

Load data into Document objects.

Return type:

list[Document]
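The relationship between lazy_load() and load() can be sketched in plain Python (a conceptual sketch, not the library's source; SimpleDocument and fetch_text are stand-ins): load() simply materializes the generator that lazy_load() returns.

```python
from dataclasses import dataclass, field
from typing import Iterator, List


@dataclass
class SimpleDocument:
    """Stand-in for langchain's Document: page content plus metadata."""
    page_content: str
    metadata: dict = field(default_factory=dict)


class SketchLoader:
    def __init__(self, urls: List[str]):
        self.urls = urls

    def fetch_text(self, url: str) -> str:
        # Stand-in for the HTTP call to the Browserless API.
        return f"contents of {url}"

    def lazy_load(self) -> Iterator[SimpleDocument]:
        # One Document per URL, fetched only as the iterator advances.
        for url in self.urls:
            yield SimpleDocument(self.fetch_text(url), {"source": url})

    def load(self) -> List[SimpleDocument]:
        # Eager variant: materialize the lazy iterator into a list.
        return list(self.lazy_load())


docs = SketchLoader(["https://example.com"]).load()
```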

load_and_split(text_splitter: TextSplitter | None = None) β†’ list[Document]#

Load Documents and split into chunks. Chunks are returned as Documents.

Do not override this method; it should be considered deprecated.

Parameters:

text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Returns:

List of Documents.

Return type:

list[Document]

Examples using BrowserlessLoader