NewsURLLoader#
- class langchain_community.document_loaders.news.NewsURLLoader(urls: List[str], text_mode: bool = True, nlp: bool = False, continue_on_failure: bool = True, show_progress_bar: bool = False, **newspaper_kwargs: Any)[source]#
Load news articles from URLs using Unstructured.
- Parameters:
urls (List[str]) – URLs to load. Each is loaded into its own document.
text_mode (bool) – If True, extract the article text from the URL and use it as the page content. Otherwise, extract the raw HTML.
nlp (bool) – If True, perform NLP on the extracted contents, such as generating a summary and extracting keywords.
continue_on_failure (bool) – If True, continue loading documents even if loading fails for a particular URL.
show_progress_bar (bool) – If True, use tqdm to show a loading progress bar. Requires tqdm to be installed (pip install tqdm).
**newspaper_kwargs (Any) – Any additional named arguments to pass to newspaper.Article().
Example
from langchain_community.document_loaders import NewsURLLoader

loader = NewsURLLoader(
    urls=["<url-1>", "<url-2>"],
)
docs = loader.load()
- Newspaper reference:
Methods
__init__(urls[, text_mode, nlp, ...]) – Initialize with URLs to load.
alazy_load() – A lazy loader for Documents.
aload() – Load data into Document objects.
lazy_load() – A lazy loader for Documents.
load() – Load data into Document objects.
load_and_split([text_splitter]) – Load Documents and split into chunks.
- __init__(urls: List[str], text_mode: bool = True, nlp: bool = False, continue_on_failure: bool = True, show_progress_bar: bool = False, **newspaper_kwargs: Any) None [source]#
Initialize with URLs to load.
- Parameters:
urls (List[str])
text_mode (bool)
nlp (bool)
continue_on_failure (bool)
show_progress_bar (bool)
newspaper_kwargs (Any)
- Return type:
None
- async alazy_load() AsyncIterator[Document] #
A lazy loader for Documents.
- Return type:
AsyncIterator[Document]
- lazy_load() Iterator[Document] [source]#
A lazy loader for Documents.
- Return type:
Iterator[Document]
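lazy_load returns an iterator that yields one Document at a time, whereas load materializes the whole list up front. A minimal pure-Python sketch of that contract (using a simplified stand-in Document class, not the real langchain_core.documents.Document, and simulated fetching rather than real network calls):

```python
from typing import Iterator, List, Optional

class Document:
    # Simplified stand-in for langchain_core.documents.Document.
    def __init__(self, page_content: str, metadata: Optional[dict] = None):
        self.page_content = page_content
        self.metadata = metadata or {}

def lazy_load(urls: List[str]) -> Iterator[Document]:
    # Yield one Document per URL as it is processed (fetching simulated here).
    for url in urls:
        yield Document(page_content=f"article text from {url}",
                       metadata={"link": url})

def load(urls: List[str]) -> List[Document]:
    # Eager variant: drains the lazy iterator into a list.
    return list(lazy_load(urls))

docs = load(["<url-1>", "<url-2>"])
```

The lazy form is preferable when loading many URLs, since documents can be consumed one by one without holding the full set in memory.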
- load_and_split(text_splitter: TextSplitter | None = None) list[Document] #
Load Documents and split into chunks. Chunks are returned as Documents.
Do not override this method; it should be considered deprecated.
- Parameters:
text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
- Returns:
List of Documents.
- Return type:
list[Document]
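load_and_split delegates chunking to a TextSplitter. As a rough illustration of the chunking contract only (a simplified fixed-window splitter with overlap, not the actual RecursiveCharacterTextSplitter algorithm, which splits recursively on separators):

```python
from typing import List

def split_text(text: str, chunk_size: int = 20, chunk_overlap: int = 5) -> List[str]:
    # Slide a fixed window over the text; consecutive chunks share
    # chunk_overlap characters so context is preserved across boundaries.
    chunks = []
    step = chunk_size - chunk_overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

chunks = split_text("a" * 50, chunk_size=20, chunk_overlap=5)
```

Each returned chunk would then be wrapped in its own Document, which is what load_and_split returns.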