ChatGPTLoader#

class langchain_community.document_loaders.chatgpt.ChatGPTLoader(log_file: str, num_logs: int = -1)[source]#

Load conversations from exported ChatGPT data.

Initialize the loader.

Parameters:
  • log_file (str) – Path to the exported ChatGPT log file

  • num_logs (int) – Number of logs to load. If 0, all logs are loaded.
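The `num_logs` contract can be illustrated without LangChain installed. The sketch below writes a minimal stand-in for an exported conversations file and applies the documented rule (0 loads everything, a positive value keeps only the first `num_logs` entries). The field names are illustrative guesses at the export shape, not the loader's actual parser:

```python
import json
import tempfile

# A minimal stand-in for an exported ChatGPT log: a JSON list of
# conversations. Field names here are illustrative, not the exact schema.
conversations = [
    {"title": "First chat", "messages": ["hello", "hi there"]},
    {"title": "Second chat", "messages": ["ping", "pong"]},
    {"title": "Third chat", "messages": ["bye"]},
]

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(conversations, f)
    log_file = f.name

def select_logs(path: str, num_logs: int = 0) -> list:
    """Apply the documented num_logs contract: 0 means all logs,
    a positive value keeps only the first num_logs entries."""
    with open(path, encoding="utf8") as fh:
        data = json.load(fh)
    return data if num_logs == 0 else data[:num_logs]

print(len(select_logs(log_file)))     # all three conversations
print(len(select_logs(log_file, 2)))  # only the first two
```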

Methods

__init__(log_file[, num_logs])

Initialize the loader.

alazy_load()

A lazy loader for Documents.

aload()

Load data into Document objects.

lazy_load()

A lazy loader for Documents.

load()

Load data into Document objects.

load_and_split([text_splitter])

Load Documents and split into chunks.

__init__(log_file: str, num_logs: int = -1)[source]#

Initialize the loader.

Parameters:
  • log_file (str) – Path to the exported ChatGPT log file

  • num_logs (int) – Number of logs to load. If 0, all logs are loaded.

async alazy_load() → AsyncIterator[Document]#

A lazy loader for Documents.

Return type:

AsyncIterator[Document]
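An `AsyncIterator` is consumed with `async for` inside a coroutine. The sketch below shows that consumption pattern with a stand-in async generator yielding strings rather than the real loader's `Document` objects:

```python
import asyncio
from typing import AsyncIterator

async def alazy_load(texts: list) -> AsyncIterator[str]:
    # Stand-in async generator; the real method yields Document objects.
    for text in texts:
        yield text

async def main() -> list:
    collected = []
    # async for drives the async iterator one item at a time.
    async for doc in alazy_load(["chat one", "chat two"]):
        collected.append(doc)
    return collected

print(asyncio.run(main()))
```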

async aload() → list[Document]#

Load data into Document objects.

Return type:

list[Document]

lazy_load() → Iterator[Document]#

A lazy loader for Documents.

Return type:

Iterator[Document]
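`lazy_load` yields Documents one at a time instead of materializing the whole list up front, which matters for large exports. A generator sketch of the pattern, using a minimal stand-in for LangChain's `Document` class:

```python
from dataclasses import dataclass, field
from typing import Iterator

@dataclass
class Document:
    # Minimal stand-in for langchain_core.documents.Document.
    page_content: str
    metadata: dict = field(default_factory=dict)

def lazy_load(conversations: list) -> Iterator[Document]:
    """Yield one Document per conversation instead of building a list."""
    for i, conv in enumerate(conversations):
        yield Document(page_content=conv, metadata={"index": i})

def load(conversations: list) -> list:
    """Eager counterpart: materialize everything, as load() does."""
    return list(lazy_load(conversations))

docs = lazy_load(["chat one", "chat two"])
first = next(docs)  # only one Document has been built so far
print(first.page_content)
```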

load() → List[Document][source]#

Load data into Document objects.

Return type:

List[Document]

load_and_split(text_splitter: TextSplitter | None = None) → list[Document]#

Load Documents and split into chunks. Chunks are returned as Documents.

Do not override this method; it should be considered deprecated.

Parameters:

text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Returns:

List of Documents.

Return type:

list[Document]
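`load_and_split` simply chains loading with a text splitter. The sketch below uses a naive fixed-width character splitter as a stand-in (the default RecursiveCharacterTextSplitter is smarter: it prefers splitting on separators such as newlines before cutting mid-word):

```python
def split_text(text: str, chunk_size: int = 10) -> list:
    """Naive fixed-width splitter; a stand-in for a TextSplitter."""
    return [text[i : i + chunk_size] for i in range(0, len(text), chunk_size)]

def load_and_split(pages: list, chunk_size: int = 10) -> list:
    # Each chunk becomes its own document-like string, mirroring
    # "Chunks are returned as Documents".
    chunks = []
    for page in pages:
        chunks.extend(split_text(page, chunk_size))
    return chunks

print(load_and_split(["0123456789abcdef"], chunk_size=10))
# → ['0123456789', 'abcdef']
```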

Examples using ChatGPTLoader