FaunaLoader#

class langchain_community.document_loaders.fauna.FaunaLoader(query: str, page_content_field: str, secret: str, metadata_fields: Sequence[str] | None = None)[source]#

Load from FaunaDB.

Parameters:
  • query (str) – The FQL query string to execute.

  • page_content_field (str) – The field that contains the content of each page.

  • secret (str) – The secret key for authenticating to FaunaDB.

  • metadata_fields (Sequence[str] | None) – Optional list of field names to include in metadata.

query#

The FQL query string to execute.

Type:

str

page_content_field#

The field that contains the content of each page.

Type:

str

secret#

The secret key for authenticating to FaunaDB.

Type:

str

metadata_fields#

Optional list of field names to include in metadata.

Type:

Optional[Sequence[str]]
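
A minimal usage sketch. The FQL query, field names, and placeholder secret below are illustrative assumptions, not values prescribed by this API:

    from langchain_community.document_loaders.fauna import FaunaLoader

    loader = FaunaLoader(
        query="Item.all()",                # hypothetical FQL query over an "Item" collection
        page_content_field="text",         # hypothetical field holding each page's content
        secret="<YOUR_FAUNA_SECRET>",      # Fauna secret key for authentication
        metadata_fields=["category"],      # hypothetical extra fields copied into metadata
    )
    docs = loader.load()                   # returns a list of Document objects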

Methods

__init__(query, page_content_field, secret)

alazy_load()

A lazy loader for Documents.

aload()

Load data into Document objects.

lazy_load()

A lazy loader for Documents.

load()

Load data into Document objects.

load_and_split([text_splitter])

Load Documents and split into chunks.

__init__(query: str, page_content_field: str, secret: str, metadata_fields: Sequence[str] | None = None)[source]#
Parameters:
  • query (str) – The FQL query string to execute.

  • page_content_field (str) – The field that contains the content of each page.

  • secret (str) – The secret key for authenticating to FaunaDB.

  • metadata_fields (Sequence[str] | None) – Optional list of field names to include in metadata.

async alazy_load() β†’ AsyncIterator[Document]#

A lazy loader for Documents.

Return type:

AsyncIterator[Document]
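
A short asynchronous sketch, assuming the same illustrative query and field names as above; alazy_load() yields Documents one at a time rather than building the full list in memory:

    import asyncio

    from langchain_community.document_loaders.fauna import FaunaLoader

    async def main() -> None:
        loader = FaunaLoader(
            query="Item.all()",            # hypothetical FQL query
            page_content_field="text",     # hypothetical content field
            secret="<YOUR_FAUNA_SECRET>",
        )
        async for doc in loader.alazy_load():   # stream Documents asynchronously
            print(doc.page_content[:80])

    asyncio.run(main())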

async aload() β†’ list[Document]#

Load data into Document objects.

Return type:

list[Document]

lazy_load() β†’ Iterator[Document][source]#

A lazy loader for Documents.

Return type:

Iterator[Document]
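
A brief sketch of the synchronous lazy path, useful when the query matches more documents than you want to hold in memory at once (loader arguments are the same illustrative assumptions as above):

    from langchain_community.document_loaders.fauna import FaunaLoader

    loader = FaunaLoader(
        query="Item.all()",                # hypothetical FQL query
        page_content_field="text",         # hypothetical content field
        secret="<YOUR_FAUNA_SECRET>",
    )
    for doc in loader.lazy_load():         # yields one Document per result
        print(doc.metadata)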

load() β†’ list[Document]#

Load data into Document objects.

Return type:

list[Document]

load_and_split(text_splitter: TextSplitter | None = None) β†’ list[Document]#

Load Documents and split into chunks. Chunks are returned as Documents.

Do not override this method; it should be considered deprecated.

Parameters:

text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Returns:

List of Documents.

Return type:

list[Document]
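
A hedged sketch of loading and chunking in one step. The splitter settings are arbitrary, the loader arguments are the same illustrative assumptions as above, and the langchain_text_splitters package is assumed to be installed:

    from langchain_community.document_loaders.fauna import FaunaLoader
    from langchain_text_splitters import RecursiveCharacterTextSplitter

    loader = FaunaLoader(
        query="Item.all()",                # hypothetical FQL query
        page_content_field="text",         # hypothetical content field
        secret="<YOUR_FAUNA_SECRET>",
    )
    splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
    chunks = loader.load_and_split(text_splitter=splitter)   # list of chunked Documents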
