PySparkDataFrameLoader

class langchain_community.document_loaders.pyspark_dataframe.PySparkDataFrameLoader(spark_session: SparkSession | None = None, df: Any | None = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]

Load PySpark DataFrames.

Initialize with a Spark DataFrame object.

Parameters:
  • spark_session (SparkSession | None) – The SparkSession object. If not provided, one is created with SparkSession.builder.getOrCreate().

  • df (Any | None) – The Spark DataFrame object.

  • page_content_column (str) – The name of the column containing the page content. Defaults to "text".

  • fraction_of_memory (float) – The fraction of available memory to use when deciding how many rows to load. Defaults to 0.1.
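A minimal usage sketch; the sample rows and column names below are illustrative, not part of the API:

    from pyspark.sql import SparkSession

    from langchain_community.document_loaders import PySparkDataFrameLoader

    spark = SparkSession.builder.getOrCreate()

    # Illustrative data: any DataFrame with a text column works.
    df = spark.createDataFrame(
        [("Harrison worked at Kensho", "bio"), ("Spark rows become Documents", "note")],
        ["text", "source"],
    )

    loader = PySparkDataFrameLoader(spark, df, page_content_column="text")
    docs = loader.load()

Each row becomes a Document whose page_content is taken from page_content_column; the remaining columns are carried over as the Document's metadata.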

Methods

__init__([spark_session, df, ...])

Initialize with a Spark DataFrame object.

alazy_load()

A lazy loader for Documents.

aload()

Load data into Document objects.

get_num_rows()

Get the number of "feasible" rows for the DataFrame.

lazy_load()

A lazy loader for document content.

load()

Load from the DataFrame.

load_and_split([text_splitter])

Load Documents and split into chunks.

__init__(spark_session: SparkSession | None = None, df: Any | None = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]

Initialize with a Spark DataFrame object.

Parameters:
  • spark_session (SparkSession | None) – The SparkSession object. If not provided, one is created with SparkSession.builder.getOrCreate().

  • df (Any | None) – The Spark DataFrame object.

  • page_content_column (str) – The name of the column containing the page content. Defaults to "text".

  • fraction_of_memory (float) – The fraction of available memory to use when deciding how many rows to load. Defaults to 0.1.

async alazy_load() → AsyncIterator[Document]

A lazy loader for Documents.

Return type:

AsyncIterator[Document]

async aload() → List[Document]

Load data into Document objects.

Return type:

List[Document]
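Both async methods are default implementations inherited from the base loader and typically delegate to the synchronous versions under the hood. A minimal sketch of driving them with asyncio, reusing the loader from the constructor example above:

    import asyncio

    async def main() -> None:
        # aload() gathers every Document at once...
        docs = await loader.aload()
        print(len(docs), "documents")
        # ...while alazy_load() yields them one at a time.
        async for doc in loader.alazy_load():
            print(doc.page_content[:80])

    asyncio.run(main())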

get_num_rows() → Tuple[int, int][source]

Get the number of "feasible" rows for the DataFrame, i.e. an estimate of how many rows fit within the configured fraction of available memory.

Return type:

Tuple[int, int]
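A short sketch of inspecting the estimate before loading. Reading the first element as the number of rows that will actually be materialized and the second as the memory-bounded maximum is our interpretation of the two values:

    # How many rows does the loader consider safe to materialize?
    feasible_rows, max_rows = loader.get_num_rows()
    print(f"loading {feasible_rows} rows (memory allows up to {max_rows})")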

lazy_load() → Iterator[Document][source]

A lazy loader for document content.

Return type:

Iterator[Document]
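A sketch of streaming Documents rather than materializing the whole list, which keeps driver memory flat when the DataFrame is large:

    # Documents are yielded one by one.
    for doc in loader.lazy_load():
        print(doc.metadata, doc.page_content[:80])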

load() → List[Document][source]

Load from the DataFrame.

Return type:

List[Document]

load_and_split(text_splitter: TextSplitter | None = None) → List[Document]

Load Documents and split into chunks. Chunks are returned as Documents.

Do not override this method; it should be considered deprecated.

Parameters:

text_splitter (TextSplitter | None) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Returns:

List of Documents.

Return type:

List[Document]
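A sketch of loading and splitting in one step; the chunk sizes below are illustrative:

    from langchain_text_splitters import RecursiveCharacterTextSplitter

    splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
    # Each returned Document is a chunk of one original row's text.
    chunks = loader.load_and_split(text_splitter=splitter)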
