AcreomLoader#

class langchain_community.document_loaders.acreom.AcreomLoader(
path: str | Path,
encoding: str = 'UTF-8',
collect_metadata: bool = True,
)[source]#

Load an acreom vault from a directory.

Initialize the loader.

Attributes

FRONT_MATTER_REGEX

Regex to match front matter metadata in markdown files.

Methods

__init__(path[, encoding, collect_metadata])

Initialize the loader.

alazy_load()

A lazy loader for Documents.

aload()

Load data into Document objects.

lazy_load()

A lazy loader for Documents.

load()

Load data into Document objects.

load_and_split([text_splitter])

Load Documents and split into chunks.

Parameters:
  • path (str | Path)

  • encoding (str)

  • collect_metadata (bool)
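
A minimal construction sketch follows; the vault path is a placeholder for a local acreom vault directory, and the comment on collect_metadata reflects an assumption that it controls whether YAML front matter (matched by FRONT_MATTER_REGEX) is parsed into document metadata.

from langchain_community.document_loaders.acreom import AcreomLoader

# "path/to/acreom/vault" is a placeholder for a local acreom vault directory.
loader = AcreomLoader(
    "path/to/acreom/vault",
    encoding="UTF-8",        # encoding used when reading the markdown files
    collect_metadata=True,   # assumed: parse front matter into document metadata
)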

__init__(
path: str | Path,
encoding: str = 'UTF-8',
collect_metadata: bool = True,
)[source]#

Initialize the loader.

Parameters:
  • path (str | Path)

  • encoding (str)

  • collect_metadata (bool)

async alazy_load() → AsyncIterator[Document]#

A lazy loader for Documents.

Yields:

the documents.

Return type:

AsyncIterator[Document]

async aload() → list[Document]#

Load data into Document objects.

Returns:

the documents.

Return type:

list[Document]
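
A sketch of asynchronous use, assuming the same placeholder vault path: alazy_load() streams documents one at a time, while aload() gathers them into a list.

import asyncio

from langchain_community.document_loaders.acreom import AcreomLoader


async def main() -> None:
    loader = AcreomLoader("path/to/acreom/vault")  # placeholder path

    # Stream documents without materializing the whole vault in memory.
    async for doc in loader.alazy_load():
        print(doc.metadata)

    # Or collect everything at once.
    docs = await loader.aload()
    print(f"Loaded {len(docs)} documents")


asyncio.run(main())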

lazy_load() → Iterator[Document][source]#

A lazy loader for Documents.

Yields:

the documents.

Return type:

Iterator[Document]
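
A synchronous streaming sketch using the same placeholder path; the assumption here is that each yielded Document corresponds to one markdown note in the vault.

from langchain_community.document_loaders.acreom import AcreomLoader

loader = AcreomLoader("path/to/acreom/vault")  # placeholder path

# Iterate lazily; useful for large vaults where load() would be memory-heavy.
for doc in loader.lazy_load():
    print(doc.metadata, len(doc.page_content))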

load() → list[Document]#

Load data into Document objects.

Returns:

the documents.

Return type:

list[Document]
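
An eager-loading sketch with the same placeholder path; load() returns the full list of documents, here with front matter collection turned off for illustration.

from langchain_community.document_loaders.acreom import AcreomLoader

loader = AcreomLoader("path/to/acreom/vault", collect_metadata=False)  # placeholder path
docs = loader.load()

print(len(docs))
if docs:
    print(docs[0].page_content[:200])  # preview the first note's content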

load_and_split(
text_splitter: TextSplitter | None = None,
) → list[Document]#

Load Documents and split into chunks. Chunks are returned as Documents.

Do not override this method. It should be considered deprecated.

Parameters:

text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Raises:

ImportError – If langchain-text-splitters is not installed and no text_splitter is provided.

Returns:

List of Documents.

Return type:

list[Document]
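
A chunking sketch, assuming langchain-text-splitters is installed and using the same placeholder path; an explicit RecursiveCharacterTextSplitter is passed rather than relying on the default.

from langchain_community.document_loaders.acreom import AcreomLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = AcreomLoader("path/to/acreom/vault")  # placeholder path
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)

# Each returned Document is a chunk of an original vault note.
chunks = loader.load_and_split(text_splitter=splitter)
print(f"{len(chunks)} chunks")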
