
ZeroxPDFLoader

Overview

ZeroxPDFLoader is a document loader that leverages the Zerox library. Zerox converts PDF documents into images, processes them using a vision-capable language model, and generates a structured Markdown representation. This loader allows for asynchronous operations and provides page-level document extraction.

Integration details

Class | Package | Local | Serializable | JS support
ZeroxPDFLoader | langchain_community | | |

Loader features

Source | Document Lazy Loading | Native Async Support
ZeroxPDFLoader | ✅ |

Setup

Credentials

Appropriate credentials need to be set up in environment variables. The loader supports a number of different models and model providers. See the examples below for a few of them, or the Zerox documentation for a full list of supported models.
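For example, with an OpenAI-hosted model you would typically export OPENAI_API_KEY before running the loader. The Azure variable names below follow common LiteLLM conventions and are shown only as a sketch; confirm the exact names for your provider in the Zerox documentation.

import os

# OpenAI-hosted models, e.g. model="gpt-4o-mini"
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"

# Azure OpenAI, e.g. model="azure/gpt-4o-mini" (variable names assumed from
# common LiteLLM conventions; check the Zerox documentation for your provider)
os.environ["AZURE_API_KEY"] = "your-azure-api-key"
os.environ["AZURE_API_BASE"] = "https://your-resource.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "your-api-version"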

Installation

To use ZeroxPDFLoader, you need to install the zerox package. Also make sure to have langchain-community installed.

pip install zerox langchain-community

Initialization

ZeroxPDFLoader enables PDF text extraction using vision-capable language models by converting each page into an image and processing it asynchronously. To use this loader, you need to specify a model and configure any necessary environment variables for Zerox, such as API keys.

If you're working in an environment like Jupyter Notebook, you may need to handle asynchronous code by using nest_asyncio. You can set this up as follows:

import os

# use nest_asyncio (only necessary inside of a Jupyter notebook)
import nest_asyncio
from langchain_community.document_loaders.pdf import ZeroxPDFLoader

nest_asyncio.apply()

# Specify the url or file path for the PDF you want to process
# In this case, let's use a PDF from the web
file_path = "https://assets.ctfassets.net/f1df9zr7wr1a/soP1fjvG1Wu66HJhu3FBS/034d6ca48edb119ae77dec5ce01a8612/OpenAI_Sacra_Teardown.pdf"

# Set up necessary env vars for a vision model
os.environ["OPENAI_API_KEY"] = "your-api-key"  # replace with your actual API key

# Initialize ZeroxPDFLoader with the desired model
# (provider-prefixed models such as "azure/gpt-4o-mini" require that provider's
# credentials instead of OPENAI_API_KEY)
loader = ZeroxPDFLoader(file_path=file_path, model="gpt-4o-mini")
API Reference: ZeroxPDFLoader

Load

# Load the document and look at the first page:
documents = loader.load()
documents[0]
Document(metadata={'source': 'https://assets.ctfassets.net/f1df9zr7wr1a/soP1fjvG1Wu66HJhu3FBS/034d6ca48edb119ae77dec5ce01a8612/OpenAI_Sacra_Teardown.pdf', 'page': 1, 'num_pages': 5}, page_content='# OpenAI\n\nOpenAI is an AI research laboratory.\n\n#ai-models #ai\n\n## Revenue\n- **$1,000,000,000**  \n  2023\n\n## Valuation\n- **$28,000,000,000**  \n  2023\n\n## Growth Rate (Y/Y)\n- **400%**  \n  2023\n\n## Funding\n- **$11,300,000,000**  \n  2023\n\n---\n\n## Details\n- **Headquarters:** San Francisco, CA\n- **CEO:** Sam Altman\n\n[Visit Website](#)\n\n---\n\n## Revenue\n### ARR ($M)  | Growth\n--- | ---\n$1000M  | 456%\n$750M   | \n$500M   | \n$250M   | $36M\n$0     | $200M\n\nis on track to hit $1B in annual recurring revenue by the end of 2023, up about 400% from an estimated $200M at the end of 2022.\n\nOpenAI overall lost about $540M last year while developing ChatGPT, and those losses are expected to increase dramatically in 2023 with the growth in popularity of their consumer tools, with CEO Sam Altman remarking that OpenAI is likely to be "the most capital-intensive startup in Silicon Valley history."\n\nThe reason for that is operating ChatGPT is massively expensive. One analysis of ChatGPT put the running cost at about $700,000 per day taking into account the underlying costs of GPU hours and hardware. That amount—derived from the 175 billion parameter-large architecture of GPT-3—would be even higher with the 100 trillion parameters of GPT-4.\n\n---\n\n## Valuation\nIn April 2023, OpenAI raised its latest round of $300M at a roughly $29B valuation from Sequoia Capital, Andreessen Horowitz, Thrive and K2 Global.\n\nAssuming OpenAI was at roughly $300M in ARR at the time, that would have given them a 96x forward revenue multiple.\n\n---\n\n## Product\n\n### ChatGPT\n| Examples                       | Capabilities                        | Limitations                        |\n|---------------------------------|-------------------------------------|------------------------------------|\n| "Explain quantum computing in simple terms" | "Remember what users said earlier in the conversation" | May occasionally generate incorrect information |\n| "What can you give me for my dad\'s birthday?" | "Allows users to follow-up questions" | Limited knowledge of world events after 2021 |\n| "How do I make an HTTP request in JavaScript?" | "Trained to provide harmless requests" |                                    |')
# Let's look at the parsed first page
print(documents[0].page_content)
# OpenAI

OpenAI is an AI research laboratory.

#ai-models #ai

## Revenue
- **$1,000,000,000**
2023

## Valuation
- **$28,000,000,000**
2023

## Growth Rate (Y/Y)
- **400%**
2023

## Funding
- **$11,300,000,000**
2023

---

## Details
- **Headquarters:** San Francisco, CA
- **CEO:** Sam Altman

[Visit Website](#)

---

## Revenue
### ARR ($M) | Growth
--- | ---
$1000M | 456%
$750M |
$500M |
$250M | $36M
$0 | $200M

is on track to hit $1B in annual recurring revenue by the end of 2023, up about 400% from an estimated $200M at the end of 2022.

OpenAI overall lost about $540M last year while developing ChatGPT, and those losses are expected to increase dramatically in 2023 with the growth in popularity of their consumer tools, with CEO Sam Altman remarking that OpenAI is likely to be "the most capital-intensive startup in Silicon Valley history."

The reason for that is operating ChatGPT is massively expensive. One analysis of ChatGPT put the running cost at about $700,000 per day taking into account the underlying costs of GPU hours and hardware. That amount—derived from the 175 billion parameter-large architecture of GPT-3—would be even higher with the 100 trillion parameters of GPT-4.

---

## Valuation
In April 2023, OpenAI raised its latest round of $300M at a roughly $29B valuation from Sequoia Capital, Andreessen Horowitz, Thrive and K2 Global.

Assuming OpenAI was at roughly $300M in ARR at the time, that would have given them a 96x forward revenue multiple.

---

## Product

### ChatGPT
| Examples | Capabilities | Limitations |
|---------------------------------|-------------------------------------|------------------------------------|
| "Explain quantum computing in simple terms" | "Remember what users said earlier in the conversation" | May occasionally generate incorrect information |
| "What can you give me for my dad's birthday?" | "Allows users to follow-up questions" | Limited knowledge of world events after 2021 |
| "How do I make an HTTP request in JavaScript?" | "Trained to provide harmless requests" | |

Lazy Load

The loader always fetches results lazily; the .load() method simply collects the pages produced by .lazy_load() into a list.
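For example, a minimal sketch of consuming the iterator in batches (the batch size of 10 and the commented indexing step are placeholders for whatever downstream processing you need):

pages = []
for doc in loader.lazy_load():
    pages.append(doc)
    if len(pages) >= 10:
        # process the batch of 10 page-level Documents here,
        # e.g. add them to a vector store, then start a new batch
        pages = []

# any remaining pages are still in `pages` after the loop
print(len(pages))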

API reference

ZeroxPDFLoader

This loader class initializes with a file path and model type, and supports custom configurations via zerox_kwargs for handling Zerox-specific parameters.

Arguments:

  • file_path (Union[str, Path]): Path to the PDF file.
  • model (str): Vision-capable model to use for processing, in the format <provider>/<model>. Some examples of valid values are:
    • model = "gpt-4o-mini" ## openai model
    • model = "azure/gpt-4o-mini"
    • model = "gemini/gpt-4o-mini"
    • model = "claude-3-opus-20240229"
    • model = "vertex_ai/gemini-1.5-flash-001"
    • See more details in the Zerox documentation
    • Defaults to "gpt-4o-mini".
  • **zerox_kwargs (dict): Additional Zerox-specific parameters such as API key, endpoint, etc. (see the sketch below).
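As a hedged illustration, any extra keyword arguments are forwarded to Zerox unchanged. The cleanup and concurrency options below are assumptions about Zerox's parameters and may differ in your version; verify them against the Zerox documentation before relying on them.

from langchain_community.document_loaders.pdf import ZeroxPDFLoader

loader = ZeroxPDFLoader(
    file_path="./example.pdf",
    model="gpt-4o-mini",
    # Everything below is forwarded to Zerox via **zerox_kwargs.
    # Parameter names are illustrative assumptions; confirm them in the Zerox docs.
    cleanup=True,   # remove temporary page images after processing (assumed option)
    concurrency=5,  # number of pages processed in parallel (assumed option)
)
docs = loader.load()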

Methods:

  • lazy_load: Generates an iterator of Document instances, each representing a page of the PDF, along with metadata including page number and source.

See the full API documentation here

Notes

  • Model Compatibility: Zerox supports a range of vision-capable models. Refer to Zerox's GitHub documentation for a list of supported models and configuration details.
  • Environment Variables: Make sure to set required environment variables, such as API_KEY or endpoint details, as specified in the Zerox documentation.
  • Asynchronous Processing: If you encounter errors related to event loops in Jupyter Notebooks, you may need to apply nest_asyncio as shown in the setup section.

Troubleshooting

  • RuntimeError: This event loop is already running: Use nest_asyncio.apply() to prevent asynchronous loop conflicts in environments like Jupyter.
  • Configuration Errors: Verify that the zerox_kwargs match the expected arguments for your chosen model and that all necessary environment variables are set.

Additional Resources

  • Zerox documentation: https://github.com/getomni-ai/zerox

