PySparkDataFrameLoader#
- class langchain_community.document_loaders.pyspark_dataframe.PySparkDataFrameLoader(spark_session: SparkSession | None = None, df: Any | None = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]#
Load PySpark DataFrames.
Initialize with a Spark DataFrame object.
- Parameters:
spark_session (SparkSession | None) – The SparkSession object.
df (Any | None) – The Spark DataFrame object.
page_content_column (str) – The name of the column containing the page content. Defaults to 'text'.
fraction_of_memory (float) – The fraction of memory to use. Defaults to 0.1.
Methods
__init__
([spark_session, df, ...])Initialize with a Spark DataFrame object.
alazy_load
()A lazy loader for Documents.
aload
()Load data into Document objects.
get_num_rows
()Gets the number of "feasible" rows for the DataFrame.
lazy_load
()A lazy loader for document content.
load
()Load from the dataframe.
load_and_split
([text_splitter])Load Documents and split into chunks.
- __init__(spark_session: SparkSession | None = None, df: Any | None = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]#
Initialize with a Spark DataFrame object.
- Parameters:
spark_session (SparkSession | None) – The SparkSession object.
df (Any | None) – The Spark DataFrame object.
page_content_column (str) – The name of the column containing the page content. Defaults to 'text'.
fraction_of_memory (float) – The fraction of memory to use. Defaults to 0.1.
- async alazy_load() AsyncIterator[Document] #
A lazy loader for Documents.
- Return type:
AsyncIterator[Document]
- get_num_rows() Tuple[int, int] [source]#
Gets the number of "feasible" rows for the DataFrame.
- Return type:
Tuple[int, int]
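The docs above say get_num_rows returns a Tuple[int, int] of "feasible" rows, capped by fraction_of_memory. A plausible pure-Python sketch of that idea follows; the helper name, its parameters, and the sizing heuristic are assumptions for illustration, not the library's actual implementation:

```python
import sys


def estimate_feasible_rows(sample_row, available_bytes, fraction_of_memory, total_rows):
    """Hypothetical sketch: cap the row count by a fraction of available memory.

    Estimates the bytes one row occupies from a sample, divides the memory
    budget by that size, and returns (feasible_rows, total_rows) -- mirroring
    the Tuple[int, int] shape documented for get_num_rows.
    """
    row_size = max(sys.getsizeof(sample_row), 1)  # rough in-memory size of one row
    budget = int(available_bytes * fraction_of_memory)  # bytes we allow ourselves
    feasible = min(budget // row_size, total_rows)
    return feasible, total_rows
```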
- lazy_load() Iterator[Document] [source]#
A lazy loader for document content.
- Return type:
Iterator[Document]
- load_and_split(text_splitter: TextSplitter | None = None) list[Document] #
Load Documents and split into chunks. Chunks are returned as Documents.
Do not override this method. It should be considered deprecated!
- Parameters:
text_splitter (TextSplitter | None) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
- Returns:
List of Documents.
- Return type:
list[Document]
Examples using PySparkDataFrameLoader