

Spider is the fastest and most affordable crawler and scraper that returns LLM-ready data.


```bash
pip install spider-client
```


To use Spider you need an API key from spider.cloud.

```python
from langchain_community.document_loaders import SpiderLoader

loader = SpiderLoader(
    api_key="YOUR_API_KEY",
    url="https://spider.cloud",
    mode="scrape",  # if no API key is provided it looks for SPIDER_API_KEY in env
)

data = loader.load()
```
API Reference: SpiderLoader
[Document(page_content='Spider - Fastest Web Crawler built for AI Agents and Large Language Models[Spider v1 Logo Spider ](/)The World\'s Fastest and Cheapest Crawler API==========View Demo* Basic* StreamingExample requestPythonCopy```import requests, osheaders = {    \'Authorization\': os.environ["SPIDER_API_KEY"],    \'Content-Type\': \'application/json\',}json_data = {"limit":50,"url":""}response =\'\',  headers=headers,  json=json_data)print(response.json())```Example ResponseScrape with no headaches----------* Proxy rotations* Agent headers* Avoid anti-bot detections* Headless chrome* Markdown LLM ResponsesThe Fastest Web Crawler----------* Powered by [spider-rs](* Do 20,000 pages in seconds* Full concurrency* Powerful and simple API* Cost effectiveScrape Anything with AI----------* Custom scripting browser* Custom data extraction* Data pipelines* Detailed insights* Advanced labeling[API](/docs/api) [Price](/credits/new) [Guides](/guides) [About](/about) [Docs]( [Privacy](/privacy) [Terms](/eula)© 2024 Spider from A11yWatchTheme Light Dark Toggle Theme [GitHubGithub](', metadata={'description': 'Collect data rapidly from any website. Seamlessly scrape websites and get data tailored for LLM workloads.', 'domain': '', 'extracted_data': None, 'file_size': 33743, 'keywords': None, 'pathname': '/', 'resource_type': 'html', 'title': 'Spider - Fastest Web Crawler built for AI Agents and Large Language Models', 'url': '48f1bc3c-3fbb-408a-865b-c191a1bb1f48/', 'user_id': '48f1bc3c-3fbb-408a-865b-c191a1bb1f48'})]


Modes

  • scrape: Default mode that scrapes a single URL
  • crawl: Crawl all subpages of the provided domain URL
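Switching between the two modes is just a matter of the `mode` argument. As a minimal sketch (the helper function and the example URL here are hypothetical, used only to illustrate the two accepted values):

```python
# Hypothetical helper mirroring the two modes SpiderLoader accepts.
VALID_MODES = {"scrape", "crawl"}

def build_loader_kwargs(url: str, mode: str = "scrape") -> dict:
    """Validate the mode and return keyword arguments for SpiderLoader."""
    if mode not in VALID_MODES:
        raise ValueError(f"mode must be one of {sorted(VALID_MODES)}")
    return {"url": url, "mode": mode}

# "https://example.com" is a placeholder URL.
kwargs = build_loader_kwargs("https://example.com", mode="crawl")
```

With a real API key you would then call `SpiderLoader(**kwargs).load()`.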

Crawler options

The params argument is a dictionary passed through to the loader. See the Spider documentation for all available parameters.
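For instance, a sketch of building such a dictionary (the key names shown, like `limit` and `return_format`, are assumptions drawn from the Spider API docs, not an exhaustive or authoritative list):

```python
# Illustrative Spider parameters; consult the Spider docs for the full list.
params = {
    "limit": 10,                  # cap the number of pages to crawl
    "metadata": True,             # include page metadata in each Document
    "return_format": "markdown",  # LLM-friendly output format
}

# With a real API key, the dict is forwarded to the Spider API:
# loader = SpiderLoader(url="https://spider.cloud", mode="crawl", params=params)
```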
