WebBaseLoader#

class langchain_community.document_loaders.web_base.WebBaseLoader(web_path: str | Sequence[str] = '', header_template: dict | None = None, verify_ssl: bool = True, proxies: dict | None = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: str | None = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Dict[str, Any] | None = None, raise_for_status: bool = False, bs_get_text_kwargs: Dict[str, Any] | None = None, bs_kwargs: Dict[str, Any] | None = None, session: Any = None, *, show_progress: bool = True, trust_env: bool = False)[source]#

WebBaseLoader document loader integration.

Setup:

Install langchain_community.

pip install -U langchain_community
Instantiate:
from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader(
    web_path = "https://www.espn.com/"
    # header_template = None,
    # verify_ssl = True,
    # proxies = None,
    # continue_on_failure = False,
    # autoset_encoding = True,
    # encoding = None,
    # web_paths = (),
    # requests_per_second = 2,
    # default_parser = "html.parser",
    # requests_kwargs = None,
    # raise_for_status = False,
    # bs_get_text_kwargs = None,
    # bs_kwargs = None,
    # session = None,
    # show_progress = True,
    # trust_env = False,
)
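
To extract only part of a page, bs_kwargs can pass a bs4.SoupStrainer through to BeautifulSoup, and bs_get_text_kwargs can tune text extraction. A minimal sketch (the "article" tag filter is an assumption about the target page's markup, not a required value):

import bs4
from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader(
    web_paths=("https://www.espn.com/",),
    # Assumption: the content of interest is wrapped in <article> tags.
    bs_kwargs={"parse_only": bs4.SoupStrainer("article")},
    bs_get_text_kwargs={"separator": " ", "strip": True},
)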
Lazy load:
docs = []
for doc in loader.lazy_load():
    docs.append(doc)
print(docs[0].page_content[:100])
print(docs[0].metadata)
ESPN - Serving Sports Fans. Anytime. Anywhere.

{'source': 'https://www.espn.com/', 'title': 'ESPN - Serving Sports Fans. Anytime. Anywhere.', 'description': 'Visit ESPN for live scores, highlights and sports news. Stream exclusive games on ESPN+ and play fantasy sports.', 'language': 'en'}
Async load:
docs = []
async for doc in loader.alazy_load():
    docs.append(doc)
print(docs[0].page_content[:100])
print(docs[0].metadata)
ESPN - Serving Sports Fans. Anytime. Anywhere.

{'source': 'https://www.espn.com/', 'title': 'ESPN - Serving Sports Fans. Anytime. Anywhere.', 'description': 'Visit ESPN for live scores, highlights and sports news. Stream exclusive games on ESPN+ and play fantasy sports.', 'language': 'en'}

Changed in version 0.3.14: Deprecated aload (which was not async) and implemented a native async alazy_load. Details below.

How to update aload

Instead of using aload, you can use load for synchronous loading or alazy_load for asynchronous lazy loading.

Example using load (synchronous):

docs: List[Document] = loader.load()

Example using alazy_load (asynchronous):

docs: List[Document] = []
async for doc in loader.alazy_load():
    docs.append(doc)

This is in preparation for accommodating an asynchronous aload in the future:

docs: List[Document] = await loader.aload()

Initialize loader.

Parameters:
  • web_paths (Sequence[str]) – Web paths to load from.

  • requests_per_second (int) – Max number of concurrent requests to make.

  • default_parser (str) – Default parser to use for BeautifulSoup.

  • requests_kwargs (Dict[str, Any] | None) – kwargs for the requests library.

  • raise_for_status (bool) – Raise an exception if the HTTP status code denotes an error.

  • bs_get_text_kwargs (Dict[str, Any] | None) – kwargs for BeautifulSoup4's get_text.

  • bs_kwargs (Dict[str, Any] | None) – kwargs for BeautifulSoup4 web page parsing.

  • show_progress (bool) – Show progress bar when loading pages.

  • trust_env (bool) – Set to True if using a proxy to make web requests, for example using http(s)_proxy environment variables. Defaults to False.

  • web_path (str | Sequence[str])

  • header_template (dict | None)

  • verify_ssl (bool)

  • proxies (dict | None)

  • continue_on_failure (bool)

  • autoset_encoding (bool)

  • encoding (str | None)

  • session (Any)
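
A sketch combining several of the request-related parameters above; the header and timeout values are illustrative choices, not defaults:

loader = WebBaseLoader(
    web_paths=["https://www.espn.com/"],
    header_template={"User-Agent": "my-app/0.1"},  # merged into request headers
    requests_kwargs={"timeout": 10},  # forwarded to the requests library
    raise_for_status=True,  # raise on 4xx/5xx responses
    requests_per_second=2,
)
docs = loader.load()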

Attributes

web_path

Methods

__init__([web_path, header_template, ...])

Initialize loader.

alazy_load()

Async lazy load text from the url(s) in web_path.

aload()

Load text from the urls in web_path async into Documents (deprecated).

ascrape_all(urls[, parser])

Async fetch all urls, then return soups for all results.

fetch_all(urls)

Fetch all urls concurrently with rate limiting.

lazy_load()

Lazy load text from the url(s) in web_path.

load()

Load data into Document objects.

load_and_split([text_splitter])

Load Documents and split into chunks.

scrape([parser])

Scrape data from webpage and return it in BeautifulSoup format.

scrape_all(urls[, parser])

Fetch all urls, then return soups for all results.

__init__(web_path: str | Sequence[str] = '', header_template: dict | None = None, verify_ssl: bool = True, proxies: dict | None = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: str | None = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Dict[str, Any] | None = None, raise_for_status: bool = False, bs_get_text_kwargs: Dict[str, Any] | None = None, bs_kwargs: Dict[str, Any] | None = None, session: Any = None, *, show_progress: bool = True, trust_env: bool = False) β†’ None[source]#

Initialize loader.

Parameters:
  • web_paths (Sequence[str]) – Web paths to load from.

  • requests_per_second (int) – Max number of concurrent requests to make.

  • default_parser (str) – Default parser to use for BeautifulSoup.

  • requests_kwargs (Dict[str, Any] | None) – kwargs for the requests library.

  • raise_for_status (bool) – Raise an exception if the HTTP status code denotes an error.

  • bs_get_text_kwargs (Dict[str, Any] | None) – kwargs for BeautifulSoup4's get_text.

  • bs_kwargs (Dict[str, Any] | None) – kwargs for BeautifulSoup4 web page parsing.

  • show_progress (bool) – Show progress bar when loading pages.

  • trust_env (bool) – Set to True if using a proxy to make web requests, for example using http(s)_proxy environment variables. Defaults to False.

  • web_path (str | Sequence[str])

  • header_template (dict | None)

  • verify_ssl (bool)

  • proxies (dict | None)

  • continue_on_failure (bool)

  • autoset_encoding (bool)

  • encoding (str | None)

  • session (Any)

Return type:

None

async alazy_load() β†’ AsyncIterator[Document][source]#

Async lazy load text from the url(s) in web_path.

Return type:

AsyncIterator[Document]

aload() β†’ List[Document][source]#

Deprecated since version langchain-community==0.3.14: See API reference for updated usage: https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html It will not be removed until langchain-community==1.0.

Load text from the urls in web_path async into Documents.

Return type:

List[Document]

async ascrape_all(urls: List[str], parser: str | None = None) β†’ List[Any][source]#

Async fetch all urls, then return soups for all results.

Parameters:
  • urls (List[str])

  • parser (str | None)

Return type:

List[Any]
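
For illustration, a minimal sketch of calling ascrape_all directly on a loader constructed as above; it returns BeautifulSoup objects rather than Documents:

import asyncio

async def main() -> None:
    urls = ["https://www.espn.com/", "https://example.com/"]
    soups = await loader.ascrape_all(urls)  # one BeautifulSoup per url
    for soup in soups:
        print(soup.title)

asyncio.run(main())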

async fetch_all(urls: List[str]) β†’ Any[source]#

Fetch all urls concurrently with rate limiting.

Parameters:

urls (List[str])

Return type:

Any

lazy_load() β†’ Iterator[Document][source]#

Lazy load text from the url(s) in web_path.

Return type:

Iterator[Document]

load() β†’ list[Document]#

Load data into Document objects.

Return type:

list[Document]

load_and_split(text_splitter: TextSplitter | None = None) β†’ list[Document]#

Load Documents and split into chunks. Chunks are returned as Documents.

Do not override this method. It should be considered deprecated.

Parameters:

text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Returns:

List of Documents.

Return type:

list[Document]
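
Despite the caveat above, load_and_split remains a convenient shorthand; a minimal sketch with an explicit RecursiveCharacterTextSplitter (the chunk sizes are illustrative):

from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = loader.load_and_split(text_splitter=splitter)
print(len(chunks), chunks[0].page_content[:80])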

scrape(parser: str | None = None) β†’ Any[source]#

Scrape data from webpage and return it in BeautifulSoup format.

Parameters:

parser (str | None)

Return type:

Any
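
Since scrape returns a BeautifulSoup object, the usual soup API applies afterwards; a minimal sketch:

soup = loader.scrape()  # parses the configured web_path with default_parser
print(soup.title)
links = [a.get("href") for a in soup.find_all("a")]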

scrape_all(urls: List[str], parser: str | None = None) β†’ List[Any][source]#

Fetch all urls, then return soups for all results.

Parameters:
  • urls (List[str])

  • parser (str | None)

Return type:

List[Any]
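
scrape_all is the synchronous counterpart of ascrape_all; a minimal sketch:

soups = loader.scrape_all(["https://www.espn.com/", "https://example.com/"])
texts = [soup.get_text() for soup in soups]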

Examples using WebBaseLoader