WebBaseLoader#

class langchain_community.document_loaders.web_base.WebBaseLoader(web_path: str | Sequence[str] = '', header_template: dict | None = None, verify_ssl: bool = True, proxies: dict | None = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: str | None = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Dict[str, Any] | None = None, raise_for_status: bool = False, bs_get_text_kwargs: Dict[str, Any] | None = None, bs_kwargs: Dict[str, Any] | None = None, session: Any = None, *, show_progress: bool = True)[source]#

WebBaseLoader document loader integration.

Setup:

Install langchain_community; WebBaseLoader also needs beautifulsoup4 to parse pages.

pip install -U langchain_community beautifulsoup4
Instantiate:
from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader(
    web_path = "https://www.espn.com/"
    # header_template = None,
    # verify_ssl = True,
    # proxies = None,
    # continue_on_failure = False,
    # autoset_encoding = True,
    # encoding = None,
    # web_paths = (),
    # requests_per_second = 2,
    # default_parser = "html.parser",
    # requests_kwargs = None,
    # raise_for_status = False,
    # bs_get_text_kwargs = None,
    # bs_kwargs = None,
    # session = None,
    # show_progress = True,
)
Lazy load:
docs = []
docs_lazy = loader.lazy_load()

# async variant:
# docs_lazy = await loader.alazy_load()

for doc in docs_lazy:
    docs.append(doc)
print(docs[0].page_content[:100])
print(docs[0].metadata)
ESPN - Serving Sports Fans. Anytime. Anywhere.

{'source': 'https://www.espn.com/', 'title': 'ESPN - Serving Sports Fans. Anytime. Anywhere.', 'description': 'Visit ESPN for live scores, highlights and sports news. Stream exclusive games on ESPN+ and play fantasy sports.', 'language': 'en'}
Async load:
docs = await loader.aload()
print(docs[0].page_content[:100])
print(docs[0].metadata)
ESPN - Serving Sports Fans. Anytime. Anywhere.

{'source': 'https://www.espn.com/', 'title': 'ESPN - Serving Sports Fans. Anytime. Anywhere.', 'description': 'Visit ESPN for live scores, highlights and sports news. Stream exclusive games on ESPN+ and play fantasy sports.', 'language': 'en'}
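Load multiple pages:

Several pages can be passed via web_paths; aload fetches them concurrently, throttled by requests_per_second. A minimal sketch (the second URL is a placeholder for illustration):

loader = WebBaseLoader(
    web_paths=[
        "https://www.espn.com/",
        "https://www.espn.com/nfl/",  # placeholder URL
    ],
    requests_per_second=2,  # cap on concurrent requests
)
docs = await loader.aload()  # one Document per URL, in input order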

Initialize loader.

Parameters:
  • web_paths (Sequence[str]) – Web paths to load from.

  • requests_per_second (int) – Max number of concurrent requests to make.

  • default_parser (str) – Default parser to use for BeautifulSoup.

  • requests_kwargs (Dict[str, Any] | None) – Keyword arguments passed to requests when fetching pages.

  • raise_for_status (bool) – Raise an exception if the HTTP status code denotes an error.

  • bs_get_text_kwargs (Dict[str, Any] | None) – Keyword arguments passed to BeautifulSoup's get_text.

  • bs_kwargs (Dict[str, Any] | None) – Keyword arguments passed to BeautifulSoup when parsing the page (see the sketch after this list).

  • show_progress (bool) – Show progress bar when loading pages.

  • web_path (str | Sequence[str]) –

  • header_template (dict | None) –

  • verify_ssl (bool) –

  • proxies (dict | None) –

  • continue_on_failure (bool) –

  • autoset_encoding (bool) –

  • encoding (str | None) –

  • session (Any) –
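For example, bs_kwargs is forwarded to the BeautifulSoup constructor and bs_get_text_kwargs to its get_text call. A minimal sketch that parses only matching tags via a bs4.SoupStrainer and normalizes whitespace (the "post-content" class is hypothetical):

import bs4
from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader(
    web_paths=["https://www.espn.com/"],
    bs_kwargs={
        # Only parse tags carrying this (hypothetical) class.
        "parse_only": bs4.SoupStrainer(class_="post-content"),
    },
    # Join text fragments with spaces and strip surrounding whitespace.
    bs_get_text_kwargs={"separator": " ", "strip": True},
)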

Attributes

web_path

Methods

__init__([web_path, header_template, ...])

Initialize loader.

alazy_load()

A lazy loader for Documents.

aload()

Asynchronously load text from the URLs in web_path into Documents.

fetch_all(urls)

Fetch all URLs concurrently with rate limiting.

lazy_load()

Lazy load text from the URL(s) in web_path.

load()

Load data into Document objects.

load_and_split([text_splitter])

Load Documents and split into chunks.

scrape([parser])

Scrape data from the webpage and return it in BeautifulSoup format.

scrape_all(urls[, parser])

Fetch all URLs, then return soups for all results.

__init__(web_path: str | Sequence[str] = '', header_template: dict | None = None, verify_ssl: bool = True, proxies: dict | None = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: str | None = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Dict[str, Any] | None = None, raise_for_status: bool = False, bs_get_text_kwargs: Dict[str, Any] | None = None, bs_kwargs: Dict[str, Any] | None = None, session: Any = None, *, show_progress: bool = True) β†’ None[source]#

Initialize loader.

Parameters:
  • web_paths (Sequence[str]) – Web paths to load from.

  • requests_per_second (int) – Max number of concurrent requests to make.

  • default_parser (str) – Default parser to use for BeautifulSoup.

  • requests_kwargs (Dict[str, Any] | None) – Keyword arguments passed to requests when fetching pages.

  • raise_for_status (bool) – Raise an exception if the HTTP status code denotes an error.

  • bs_get_text_kwargs (Dict[str, Any] | None) – Keyword arguments passed to BeautifulSoup's get_text.

  • bs_kwargs (Dict[str, Any] | None) – Keyword arguments passed to BeautifulSoup when parsing the page.

  • show_progress (bool) – Show progress bar when loading pages.

  • web_path (str | Sequence[str]) –

  • header_template (dict | None) –

  • verify_ssl (bool) –

  • proxies (dict | None) –

  • continue_on_failure (bool) –

  • autoset_encoding (bool) –

  • encoding (str | None) –

  • session (Any) –

Return type:

None

async alazy_load() β†’ AsyncIterator[Document]#

A lazy loader for Documents.

Return type:

AsyncIterator[Document]

aload() β†’ List[Document][source]#

Asynchronously load text from the URLs in web_path into Documents.

Return type:

List[Document]

async fetch_all(urls: List[str]) β†’ Any[source]#

Fetch all URLs concurrently with rate limiting.

Parameters:

urls (List[str]) –

Return type:

Any
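Conceptually, the rate limiting caps how many requests are in flight at once. A minimal standalone sketch of that pattern using asyncio and aiohttp (an illustration of the idea, not the library's actual implementation):

import asyncio
import aiohttp

async def fetch_all(urls, requests_per_second=2):
    # A semaphore caps the number of concurrent requests.
    semaphore = asyncio.Semaphore(requests_per_second)

    async def fetch(session, url):
        async with semaphore:
            async with session.get(url) as response:
                return await response.text()

    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch(session, url) for url in urls))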

lazy_load() β†’ Iterator[Document][source]#

Lazy load text from the URL(s) in web_path.

Return type:

Iterator[Document]

load() β†’ List[Document]#

Load data into Document objects.

Return type:

List[Document]

load_and_split(text_splitter: TextSplitter | None = None) β†’ List[Document]#

Load Documents and split into chunks. Chunks are returned as Documents.

Do not override this method; it should be considered deprecated.

Parameters:

text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Returns:

List of Documents.

Return type:

List[Document]
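A minimal usage sketch, splitting the loaded page into overlapping character chunks (the chunk sizes are arbitrary):

from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = loader.load_and_split(text_splitter=splitter)
print(len(chunks), chunks[0].page_content[:100])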

scrape(parser: str | None = None) β†’ Any[source]#

Scrape data from the webpage and return it in BeautifulSoup format.

Parameters:

parser (str | None) –

Return type:

Any
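Because scrape returns the parsed page rather than flattened text, BeautifulSoup's query API is available directly. A minimal sketch:

soup = loader.scrape()  # BeautifulSoup object for the configured web_path
print(soup.title.get_text())
for link in soup.find_all("a", limit=5):  # first five anchor tags
    print(link.get("href"))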

scrape_all(urls: List[str], parser: str | None = None) β†’ List[Any][source]#

Fetch all URLs, then return soups for all results.

Parameters:
  • urls (List[str]) –

  • parser (str | None) –

Return type:

List[Any]
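scrape_all does the same for several URLs at once, returning soups in input order. A minimal sketch (the URLs are placeholders):

urls = ["https://www.espn.com/", "https://www.espn.com/nfl/"]
soups = loader.scrape_all(urls)
for url, soup in zip(urls, soups):
    title = soup.title.get_text() if soup.title else "<no title>"
    print(url, title)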

Examples using WebBaseLoader