GitbookLoader#

class langchain_community.document_loaders.gitbook.GitbookLoader(
web_page: str,
load_all_paths: bool = False,
base_url: str | None = None,
content_selector: str = 'main',
continue_on_failure: bool = False,
show_progress: bool = True,
*,
sitemap_url: str | None = None,
allowed_domains: Set[str] | None = None,
)[source]#

Load GitBook data.

  1. Load a single page, or

  2. Load all (relative) paths listed in the sitemap, handling nested sitemap indexes.

When load_all_paths=True, the loader parses XML sitemaps and requires the lxml package to be installed (pip install lxml).

Initialize with web page and whether to load all paths.

Parameters:
  • web_page (str) – The web page to load, or the starting point from which relative paths are discovered.

  • load_all_paths (bool) – If set to True, all relative paths in the navbar are loaded instead of only web_page. Requires the lxml package.

  • base_url (str | None) – If load_all_paths is True, the relative paths are appended to this base URL. Defaults to web_page.

  • content_selector (str) – The CSS selector for the content to load. Defaults to "main".

  • continue_on_failure (bool) – Whether to continue loading the sitemap if an error occurs while loading a URL, emitting a warning instead of raising an exception. Setting this to True makes the loader more robust but may also result in missing data. Default: False

  • show_progress (bool) – Whether to show a progress bar while loading. Default: True

  • sitemap_url (str | None) – Custom sitemap URL to use when load_all_paths is True. Defaults to "{base_url}/sitemap.xml".

  • allowed_domains (Set[str] | None) – Optional set of allowed domains to fetch from. If None (default), the loader restricts crawling to the domain of the web_page URL to prevent potential SSRF vulnerabilities. Provide an explicit set (e.g., {"example.com", "docs.example.com"}) to allow crawling across multiple domains. Use with caution in server environments where users might control the input URLs.
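
A minimal usage sketch (the docs.gitbook.com URL is illustrative; substitute any GitBook site):

from langchain_community.document_loaders import GitbookLoader

# Load a single page (content is extracted with the default "main" CSS selector).
loader = GitbookLoader("https://docs.gitbook.com")
docs = loader.load()

# Load every page listed in the site's sitemap.xml.
# Requires `pip install lxml`; large sites can take a while.
full_loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)
all_docs = full_loader.load()
print(len(all_docs))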

Methods

__init__(web_page[, load_all_paths, ...])

Initialize with web page and whether to load all paths.

alazy_load()

Asynchronously fetch text from GitBook page(s).

aload()

Load data into Document objects.

lazy_load()

Fetch text from one single GitBook page or recursively from sitemap.

load()

Load data into Document objects.

load_and_split([text_splitter])

Load Documents and split into chunks.

__init__(
web_page: str,
load_all_paths: bool = False,
base_url: str | None = None,
content_selector: str = 'main',
continue_on_failure: bool = False,
show_progress: bool = True,
*,
sitemap_url: str | None = None,
allowed_domains: Set[str] | None = None,
)[source]#

Initialize with web page and whether to load all paths.

Parameters:
  • web_page (str) – The web page to load, or the starting point from which relative paths are discovered.

  • load_all_paths (bool) – If set to True, all relative paths in the navbar are loaded instead of only web_page. Requires the lxml package.

  • base_url (str | None) – If load_all_paths is True, the relative paths are appended to this base URL. Defaults to web_page.

  • content_selector (str) – The CSS selector for the content to load. Defaults to "main".

  • continue_on_failure (bool) – Whether to continue loading the sitemap if an error occurs while loading a URL, emitting a warning instead of raising an exception. Setting this to True makes the loader more robust but may also result in missing data. Default: False

  • show_progress (bool) – Whether to show a progress bar while loading. Default: True

  • sitemap_url (str | None) – Custom sitemap URL to use when load_all_paths is True. Defaults to "{base_url}/sitemap.xml".

  • allowed_domains (Set[str] | None) – Optional set of allowed domains to fetch from. If None (default), the loader restricts crawling to the domain of the web_page URL to prevent potential SSRF vulnerabilities. Provide an explicit set (e.g., {"example.com", "docs.example.com"}) to allow crawling across multiple domains. Use with caution in server environments where users might control the input URLs.

async alazy_load() → AsyncIterator[Document][source]#

Asynchronously fetch text from GitBook page(s).

Return type:

AsyncIterator[Document]
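
A sketch of asynchronous streaming with alazy_load, run via asyncio (the URL and metadata key shown are illustrative):

import asyncio

from langchain_community.document_loaders import GitbookLoader


async def main() -> None:
    loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)
    # Documents are yielded one at a time as pages are fetched.
    async for doc in loader.alazy_load():
        print(doc.metadata.get("source"), len(doc.page_content))


asyncio.run(main())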

async aload() → list[Document]#

Load data into Document objects.

Return type:

list[Document]
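
The eager async variant materializes every Document in a list; a short sketch with an illustrative URL:

import asyncio

from langchain_community.document_loaders import GitbookLoader


async def main() -> None:
    loader = GitbookLoader("https://docs.gitbook.com")
    docs = await loader.aload()  # fetches and collects all Documents at once
    print(len(docs))


asyncio.run(main())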

lazy_load() → Iterator[Document][source]#

Fetch text from one single GitBook page or recursively from sitemap.

Return type:

Iterator[Document]
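
A sketch of synchronous lazy iteration, useful when a full sitemap crawl should not be held in memory at once (URL and "title" metadata key are illustrative):

from langchain_community.document_loaders import GitbookLoader

loader = GitbookLoader(
    "https://docs.gitbook.com",
    load_all_paths=True,
    continue_on_failure=True,  # warn and skip pages that fail instead of raising
)

# Pages are fetched and yielded one at a time.
for doc in loader.lazy_load():
    print(doc.metadata.get("title"))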

load() → list[Document]#

Load data into Document objects.

Return type:

list[Document]
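
A sketch combining load() with the keyword-only options; the site, sitemap path, and domains below are hypothetical:

from langchain_community.document_loaders import GitbookLoader

loader = GitbookLoader(
    "https://docs.example.com",  # hypothetical GitBook site
    load_all_paths=True,
    sitemap_url="https://docs.example.com/sitemap-pages.xml",  # hypothetical custom sitemap
    allowed_domains={"docs.example.com", "example.com"},  # explicit allow-list
)
docs = loader.load()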

load_and_split(
text_splitter: TextSplitter | None = None,
) → list[Document]#

Load Documents and split into chunks. Chunks are returned as Documents.

Do not override this method. It should be considered deprecated.

Parameters:

text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Returns:

List of Documents.

Return type:

list[Document]
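
A sketch pairing the loader with a splitter; assumes the langchain-text-splitters package is installed and uses an illustrative URL and chunk sizes:

from langchain_community.document_loaders import GitbookLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = GitbookLoader("https://docs.gitbook.com")
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

# Equivalent to splitting loader.load() with splitter.split_documents(...).
chunks = loader.load_and_split(text_splitter=splitter)
print(len(chunks))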

Examples using GitbookLoader