GitbookLoader#

class langchain_community.document_loaders.gitbook.GitbookLoader(web_page: str, load_all_paths: bool = False, base_url: str | None = None, content_selector: str = 'main', continue_on_failure: bool = False, show_progress: bool = True)[source]#

Load GitBook data.

  1. Load a single page, or

  2. Load all (relative) paths found in the navbar.

Initialize with web page and whether to load all paths.

Parameters:
  • web_page (str) – The web page to load or the starting point from where relative paths are discovered.

  • load_all_paths (bool) – If set to True, all relative paths in the navbar are loaded instead of only web_page.

  • base_url (str | None) – If load_all_paths is True, the relative paths are appended to this base url. Defaults to web_page.

  • content_selector (str) – The CSS selector for the content to load. Defaults to "main".

  • continue_on_failure (bool) – Whether to continue loading the sitemap if an error occurs while loading a URL, emitting a warning instead of raising an exception. Setting this to True makes the loader more robust but may result in missing data. Default: False

  • show_progress (bool) – Whether to show a progress bar while loading. Default: True
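
A minimal usage sketch (the GitBook URL below is illustrative):

```python
from langchain_community.document_loaders import GitbookLoader

# Load the content of a single GitBook page.
single_page_loader = GitbookLoader("https://docs.gitbook.com")
docs = single_page_loader.load()

# Crawl every relative path found in the navbar and load each page.
all_pages_loader = GitbookLoader(
    "https://docs.gitbook.com",
    load_all_paths=True,
    continue_on_failure=True,  # warn and skip pages that fail to load
)
all_docs = all_pages_loader.load()
print(len(all_docs), all_docs[0].metadata)
```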

Attributes

web_path

Methods

__init__(web_page[, load_all_paths, ...])

Initialize with web page and whether to load all paths.

alazy_load()

A lazy loader for Documents.

aload()

Load text from the URLs in web_path asynchronously into Documents.

fetch_all(urls)

Fetch all urls concurrently with rate limiting.

lazy_load()

Fetch text from a single GitBook page.

load()

Load data into Document objects.

load_and_split([text_splitter])

Load Documents and split into chunks.

scrape([parser])

Scrape data from webpage and return it in BeautifulSoup format.

scrape_all(urls[, parser])

Fetch all urls, then return soups for all results.

__init__(web_page: str, load_all_paths: bool = False, base_url: str | None = None, content_selector: str = 'main', continue_on_failure: bool = False, show_progress: bool = True)[source]#

Initialize with web page and whether to load all paths.

Parameters:
  • web_page (str) – The web page to load or the starting point from where relative paths are discovered.

  • load_all_paths (bool) – If set to True, all relative paths in the navbar are loaded instead of only web_page.

  • base_url (str | None) – If load_all_paths is True, the relative paths are appended to this base url. Defaults to web_page.

  • content_selector (str) – The CSS selector for the content to load. Defaults to "main".

  • continue_on_failure (bool) – Whether to continue loading the sitemap if an error occurs while loading a URL, emitting a warning instead of raising an exception. Setting this to True makes the loader more robust but may result in missing data. Default: False

  • show_progress (bool) – Whether to show a progress bar while loading. Default: True

async alazy_load() β†’ AsyncIterator[Document]#

A lazy loader for Documents.

Return type:

AsyncIterator[Document]
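
A sketch of asynchronous, incremental consumption via alazy_load (the URL is illustrative):

```python
import asyncio

from langchain_community.document_loaders import GitbookLoader


async def main() -> None:
    loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)
    # Documents are yielded one at a time instead of being collected into a list first.
    async for doc in loader.alazy_load():
        print(doc.metadata.get("source"), len(doc.page_content))


asyncio.run(main())
```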

aload() β†’ List[Document]#

Load text from the URLs in web_path asynchronously into Documents.

Return type:

List[Document]
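
Note that the signature above is not a coroutine, so aload can be called like a regular method; a sketch, assuming the same illustrative URL:

```python
from langchain_community.document_loaders import GitbookLoader

loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)
# Pages are fetched concurrently under the hood, but the call itself blocks
# and returns a plain list of Documents.
docs = loader.aload()
```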

async fetch_all(urls: List[str]) β†’ Any#

Fetch all urls concurrently with rate limiting.

Parameters:

urls (List[str]) – The URLs to fetch.

Return type:

Any

lazy_load() β†’ Iterator[Document][source]#

Fetch text from a single GitBook page.

Return type:

Iterator[Document]
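
A sketch of memory-friendly iteration with lazy_load (URL illustrative; the metadata keys shown are those the loader typically sets):

```python
from langchain_community.document_loaders import GitbookLoader

loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)
for doc in loader.lazy_load():
    # Each Document is yielded as soon as its page has been fetched and parsed.
    print(doc.metadata.get("title"), len(doc.page_content))
```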

load() β†’ List[Document]#

Load data into Document objects.

Return type:

List[Document]

load_and_split(text_splitter: TextSplitter | None = None) β†’ List[Document]#

Load Documents and split into chunks. Chunks are returned as Documents.

Do not override this method; it should be considered deprecated.

Parameters:

text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Returns:

List of Documents.

Return type:

List[Document]
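
A sketch of loading and chunking in one call (the splitter settings are illustrative):

```python
from langchain_community.document_loaders import GitbookLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = loader.load_and_split(text_splitter=splitter)
```

Given the note above, the more explicit equivalent is to call loader.load() first and then splitter.split_documents(docs).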

scrape(parser: str | None = None) β†’ Any#

Scrape data from webpage and return it in BeautifulSoup format.

Parameters:

parser (str | None) – The BeautifulSoup parser to use; defaults to the loader's configured parser.

Return type:

Any

scrape_all(urls: List[str], parser: str | None = None) β†’ List[Any]#

Fetch all urls, then return soups for all results.

Parameters:
  • urls (List[str]) – The URLs to fetch.

  • parser (str | None) – The BeautifulSoup parser to use; defaults to the loader's configured parser.

Return type:

List[Any]
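
A sketch of the lower-level scraping helper, which returns BeautifulSoup objects rather than Documents (the page URLs are illustrative):

```python
from langchain_community.document_loaders import GitbookLoader

loader = GitbookLoader("https://docs.gitbook.com")
soups = loader.scrape_all(
    ["https://docs.gitbook.com/getting-started", "https://docs.gitbook.com/faq"],
    parser="html.parser",
)
for soup in soups:
    main = soup.select_one("main")
    print(main.get_text(strip=True)[:80] if main else "<no main element>")
```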
