UnstructuredLakeFSLoader#
- class langchain_community.document_loaders.lakefs.UnstructuredLakeFSLoader(url: str, repo: str, ref: str = 'main', path: str = '', presign: bool = True, **unstructured_kwargs: Any)[source]#
Load from lakeFS as unstructured data.
Initialize UnstructuredLakeFSLoader.
- Parameters:
lakefs_access_key
lakefs_secret_key
lakefs_endpoint
repo (str)
ref (str)
url (str)
path (str)
presign (bool)
unstructured_kwargs (Any)
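A minimal usage sketch (assuming the unstructured package is installed; the endpoint, repository name, object path, and presigned URL below are placeholders, not real values):

```python
from langchain_community.document_loaders.lakefs import UnstructuredLakeFSLoader

# Placeholder values: point these at a real lakeFS repository and a presigned
# object URL obtained from your lakeFS server.
loader = UnstructuredLakeFSLoader(
    url="https://lakefs.example.com/presigned/object-url",
    repo="example-repo",
    ref="main",
    path="data/report.pdf",
    presign=True,
)

docs = loader.load()  # list of Document objects parsed by unstructured
print(docs[0].metadata)  # inspect the metadata attached by the loader
```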
Methods
- __init__(url, repo[, ref, path, presign]) – Initialize UnstructuredLakeFSLoader.
- alazy_load() – A lazy loader for Documents.
- aload() – Load data into Document objects.
- lazy_load() – Load file.
- load() – Load data into Document objects.
- load_and_split([text_splitter]) – Load Documents and split into chunks.
- __init__(url: str, repo: str, ref: str = 'main', path: str = '', presign: bool = True, **unstructured_kwargs: Any)[source]#
Initialize UnstructuredLakeFSLoader.
- Parameters:
lakefs_access_key
lakefs_secret_key
lakefs_endpoint
repo (str)
ref (str)
url (str)
path (str)
presign (bool)
unstructured_kwargs (Any)
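A sketch of passing extra keyword arguments through **unstructured_kwargs; it assumes, as with other Unstructured* loaders, that these are forwarded to the underlying UnstructuredBaseLoader, where mode="elements" yields one Document per parsed element:

```python
from langchain_community.document_loaders.lakefs import UnstructuredLakeFSLoader

# Assumption: keyword arguments captured by **unstructured_kwargs are forwarded
# to UnstructuredBaseLoader; mode="elements" would then return one Document per
# element parsed by unstructured instead of a single combined Document.
loader = UnstructuredLakeFSLoader(
    url="https://lakefs.example.com/presigned/object-url",  # placeholder
    repo="example-repo",
    ref="main",
    path="data/report.pdf",
    mode="elements",
)

for doc in loader.lazy_load():
    print(doc.page_content[:80])
```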
- async alazy_load() → AsyncIterator[Document] #
A lazy loader for Documents.
- Return type:
AsyncIterator[Document]
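A short sketch of consuming the async lazy loader; the loader arguments are placeholders as above:

```python
import asyncio

from langchain_community.document_loaders.lakefs import UnstructuredLakeFSLoader


async def main() -> None:
    loader = UnstructuredLakeFSLoader(
        url="https://lakefs.example.com/presigned/object-url",  # placeholder
        repo="example-repo",
        path="data/report.pdf",
    )
    # Documents are yielded one at a time instead of materializing a full list.
    async for doc in loader.alazy_load():
        print(doc.metadata)


asyncio.run(main())
```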
- load_and_split(text_splitter: TextSplitter | None = None) → list[Document] #
Load Documents and split into chunks. Chunks are returned as Documents.
Do not override this method; it should be considered deprecated.
- Parameters:
text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
- Returns:
List of Documents.
- Return type:
list[Document]
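A brief sketch of load_and_split with an explicit splitter (assumes the langchain-text-splitters package is installed; the chunking parameters are illustrative, and the loader arguments are placeholders as above):

```python
from langchain_community.document_loaders.lakefs import UnstructuredLakeFSLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = UnstructuredLakeFSLoader(
    url="https://lakefs.example.com/presigned/object-url",  # placeholder
    repo="example-repo",
    path="data/report.pdf",
)

# Illustrative chunking parameters; tune chunk_size and chunk_overlap for your data.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = loader.load_and_split(text_splitter=splitter)
```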