🦜🔗 LangChain 0.0.126

Contents

  • Quick Start
  • Searching
  • Engine Parameters
  • Search Tips

SearxNG Search

Utility for using the SearxNG meta search API.

SearxNG is a privacy-friendly free metasearch engine that aggregates results from multiple search engines and databases and supports the OpenSearch specification.

More details on the installation instructions can be found here.

For the search API refer to https://docs.searxng.org/dev/search_api.html

Quick Start

To use this utility you need to provide the searx host, either by passing the named parameter searx_host or by exporting the environment variable SEARX_HOST. Note: this is the only required parameter.

Then create a searx search instance like this:

from langchain.utilities import SearxSearchWrapper

# when the host URL scheme is `http`, SSL is disabled and the
# connection is assumed to be on a private network
searx_host = 'http://self.hosted'

search = SearxSearchWrapper(searx_host=searx_host)

You can now use the search instance to query the searx API.

Searching

Use the run() and results() methods to query the searx API. Other methods are available for convenience.

SearxResults is a convenience wrapper around the raw json result.
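To illustrate the shape of such a wrapper, here is a minimal stdlib-only stand-in (a sketch for illustration, not the real SearxResults class), assuming the raw result is a JSON string containing "results" and "answers" entries:

```python
import json

class MiniSearxResults(dict):
    # Illustrative stand-in for SearxResults: a dict subclass that parses
    # the raw JSON body and exposes a convenience accessor.
    def __init__(self, data: str):
        super().__init__(json.loads(data))

    @property
    def answers(self):
        # Searx responses can carry an "answers" list alongside "results".
        return self.get("answers", [])

raw = '{"results": [{"title": "SearxNG", "url": "https://docs.searxng.org"}], "answers": []}'
res = MiniSearxResults(raw)
print(res["results"][0]["title"])  # SearxNG
```

The dict subclass keeps the full raw payload accessible by key while properties add the convenience accessors.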

Example usage of the run() method to make a search:

search.run(query="what is the best search engine?")

Engine Parameters

You can pass any accepted searx search API parameters to the SearxSearchWrapper instance.

In the following example we use the engines and language parameters:

# assuming the searx host is set as above or exported as an env variable
s = SearxSearchWrapper(engines=['google', 'bing'],
                       language='es')
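Under the hood, keyword arguments like these end up as searx search API parameters. A rough sketch of how such a request query string could be assembled (an assumption about the wire format, not the wrapper's actual code):

```python
from urllib.parse import urlencode

# The searx search API accepts `engines` as a comma-separated list and
# `language` as a locale code; `format=json` requests JSON output.
params = {
    "q": "large language models",
    "engines": ",".join(["google", "bing"]),
    "language": "es",
    "format": "json",
}
query_string = urlencode(params)
print(query_string)
```

The resulting string would be appended to the instance's search endpoint URL.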

Search Tips

Searx offers a special search syntax that can be used instead of passing engine parameters.

For example, the following query:

s = SearxSearchWrapper(searx_host=searx_host)
s.run("langchain library", engines=['github'])

# can also be written as:
s.run("langchain library !github")
# or even:
s.run("langchain library !gh")

In some situations you might want to append an extra string to the search query, for example when the run() method is called by an agent. The search suffix can also be used as a way to pass extra parameters to searx or the underlying search engines.

# select the github engine via an instance-level search suffix
# (assuming the searx host is exported as an env variable)
s = SearxSearchWrapper(query_suffix="!gh")
s.run("langchain library")


s = SearxSearchWrapper()
# select github using the conventional google search syntax
s.run("large language models", query_suffix="site:github.com")

NOTE: A search suffix can be defined at both the instance and the method level. The resulting query is the concatenation of the two, with the instance-level suffix appended first.
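The concatenation behaviour can be sketched as follows (an illustrative helper, not the wrapper's actual implementation):

```python
def build_query(query: str, instance_suffix: str = "", call_suffix: str = "") -> str:
    # The instance-level suffix is appended first, then the call-level one.
    parts = [query]
    if instance_suffix:
        parts.append(instance_suffix)
    if call_suffix:
        parts.append(call_suffix)
    return " ".join(parts)

print(build_query("langchain library", instance_suffix="!gh",
                  call_suffix="site:github.com"))
# langchain library !gh site:github.com
```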

See SearxNG Configured Engines and SearxNG Search Syntax for more details.

Notes

This wrapper is based on the SearxNG fork searxng/searxng, which is better maintained than the original Searx project and offers more features.

Public SearxNG instances often use a rate limiter for API usage, so you might want to use a self-hosted instance and disable the rate limiter.

If you are self-hosting an instance, you can customize the rate limiter for your own network as described here.

For a list of public SearxNG instances see https://searx.space/

class langchain.utilities.searx_search.SearxResults(data: str)[source]

Dict-like wrapper around search API results.

property answers: Any

Helper accessor on the json result.

pydantic model langchain.utilities.searx_search.SearxSearchWrapper[source]

Wrapper for the Searx API.

To use it, you need to provide the searx host by passing the named parameter searx_host or exporting the environment variable SEARX_HOST.

In some situations you might want to disable SSL verification, for example if you are running searx locally. You can do this by passing the named parameter unsecure. You can also pass the host URL scheme as http to disable SSL.

Example

from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://localhost:8888")

Example with SSL disabled:

from langchain.utilities import SearxSearchWrapper
# note the unsecure parameter is not needed if you pass the url scheme as
# http
searx = SearxSearchWrapper(searx_host="http://localhost:8888",
                           unsecure=True)

Validators
  • disable_ssl_warnings » unsecure

  • validate_params » all fields

field aiosession: Optional[Any] = None
field engines: Optional[List[str]] = []
field headers: Optional[dict] = None
field k: int = 10
field params: dict [Optional]
field query_suffix: Optional[str] = ''
field searx_host: str = ''
field unsecure: bool = False
async aresults(query: str, num_results: int, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]

Asynchronously query with json results.

Uses aiohttp. See results for more info.

async arun(query: str, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → str[source]

Asynchronous version of run.

results(query: str, num_results: int, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]

Run query through Searx API and return the results with metadata.

Parameters
  • query – The query to search for.

  • query_suffix – Extra suffix appended to the query.

  • num_results – Limit the number of results to return.

  • engines – List of engines to use for the query.

  • **kwargs – extra parameters to pass to the searx API.

Returns

A list of result dicts, each with the following keys:

  • snippet – The description of the result.

  • title – The title of the result.

  • link – The link to the result.

  • engines – The engines used for the result.

  • category – Searx category of the result.

Return type

List[Dict]
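Given results shaped like the keys above, downstream code can filter on the metadata. The payload here is made up for illustration:

```python
# Hypothetical payload mirroring the documented result keys.
results = [
    {"snippet": "A free metasearch engine.", "title": "SearxNG",
     "link": "https://docs.searxng.org", "engines": ["duckduckgo"],
     "category": "general"},
    {"snippet": "LLM application framework.", "title": "LangChain",
     "link": "https://python.langchain.com", "engines": ["google"],
     "category": "it"},
]

# Keep only links for results produced by a given engine.
google_links = [r["link"] for r in results if "google" in r["engines"]]
print(google_links)  # ['https://python.langchain.com']
```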

run(query: str, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → str[source]

Run query through Searx API and parse results.

You can pass any other params to the searx query API.

Parameters
  • query – The query to search for.

  • query_suffix – Extra suffix appended to the query.

  • engines – List of engines to use for the query.

  • **kwargs – extra parameters to pass to the searx API.

Returns

The result of the query.

Return type

str

Raises

ValueError – If an error occurred with the query.

Example

This will make a query to the qwant engine:

from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://my.searx.host")
searx.run("what is the weather in France ?", engines=["qwant"])

# the same result can be achieved using the `!` syntax of searx
# to select the engine using `query_suffix`
searx.run("what is the weather in France ?", query_suffix="!qwant")


By Harrison Chase

© Copyright 2023, Harrison Chase.

Last updated on Mar 29, 2023.