Example code for building applications with LangChain, with an emphasis on more applied and end-to-end examples than those found in the main documentation.
|Build a chat application that interacts with a SQL database using an open-source LLM (Llama 2), specifically demonstrated on an SQLite database containing rosters.
|Perform retrieval-augmented generation (RAG) on documents with semi-structured data, including text and tables, using Unstructured for parsing, the multi-vector retriever for storing, and LCEL for implementing chains.
|Perform retrieval-augmented generation (RAG) on documents with semi-structured data and images, using Unstructured for parsing, the multi-vector retriever for storage and retrieval, and LCEL for implementing chains.
|Perform retrieval-augmented generation (RAG) on documents with semi-structured data and images, using various tools and methods such as Unstructured for parsing, the multi-vector retriever for storing, LCEL for implementing chains, and open-source language models such as Llama 2, LLaVA, and GPT4All.
|Analyze a single long document.
|Implement AutoGPT, an autonomous agent, with LangChain primitives such as LLMs, PromptTemplates, VectorStores, Embeddings, and Tools.
|Implement AutoGPT for finding winning marathon times.
|Implement BabyAGI, an AI agent that can generate and execute tasks based on a given objective, with the flexibility to swap out specific vector stores/model providers.
|Swap out the execution chain in the BabyAGI notebook with an agent that has access to tools, aiming to obtain more reliable information.
|Implement the CAMEL framework for creating autonomous cooperative agents in large-scale language models, using role-playing and inception prompting to guide chat agents toward task completion.
|Implement the Causal Program-Aided Language (CPAL) chain, which improves upon Program-Aided Language (PAL) by incorporating causal structure to prevent hallucination in language models, particularly when dealing with complex narratives and math problems with nested dependencies.
|Analyze its own code base with the help of GPT and Activeloop's Deep Lake.
|Build a custom agent that can interact with AI plugins by retrieving tools and creating natural language wrappers around OpenAPI endpoints.
|Build a custom agent with plugin retrieval functionality, utilizing AI plugins from the
|Connect to Databricks runtimes and Databricks SQL.
|Perform semantic search and question-answering over a group chat using Activeloop's Deep Lake with GPT-4.
|Interact with Elasticsearch analytics databases in natural language and build search queries via the Elasticsearch DSL API.
|Extract structured data using OpenAI tools.
|Implement the Forward-Looking Active REtrieval augmented generation (FLARE) method, which generates answers to questions, identifies uncertain tokens, generates hypothetical questions based on those tokens, and retrieves relevant documents to continue generating the answer.
|Implement a generative agent that simulates human behavior, based on a research paper, using a time-weighted memory object backed by a LangChain retriever.
|Create a simple agent-environment interaction loop in simulated environments such as text-based games with Gymnasium.
|Implement HuggingGPT, a system that connects language models like ChatGPT with the machine learning community via Hugging Face.
|Improve document indexing with Hypothetical Document Embeddings (HyDE), an embedding technique that generates and embeds hypothetical answers to queries.
|Automatically enhance language model prompts by injecting specific terms using reinforcement learning, which can be used to personalize responses based on user preferences.
|Perform simple filesystem commands using large language models (LLMs) and a bash process.
|Create a self-checking chain using the LLMCheckerChain class.
|Solve complex word math problems using language models and Python REPLs.
|Check the accuracy of text summaries, with the option to run the checker multiple times for improved results.
|Solve algebraic equations with the help of LLMs (large language models) and SymPy, a Python library for symbolic mathematics.
|Implement the meta-prompt concept, which is a method for building self-improving agents that reflect on their own performance and modify their instructions accordingly.
|Generate multi-modal outputs, specifically images and text.
|Simulate multi-player Dungeons & Dragons games, with a custom function determining the speaking schedule of the agents.
|Implement a multi-agent simulation where a privileged agent controls the conversation, including deciding who speaks and when the conversation ends, in the context of a simulated news network.
|Implement a multi-agent simulation where agents bid to speak, with the highest bidder speaking next, demonstrated through a fictitious presidential debate example.
|Access and interact with the MyScale integrated vector database, which can enhance the performance of language model (LLM) applications.
|Structure response output in a question-answering system by incorporating OpenAI functions into a retrieval pipeline.
|Explore new functionality released alongside the V1 release of the OpenAI Python library.
|Create multi-agent simulations with simulated environments using the PettingZoo library.
|Create plan-and-execute agents that accomplish objectives by planning tasks with a language model (LLM) and executing them with a separate agent.
|Retrieve and query company press release data powered by Kay.ai.
|Implement program-aided language models as described in the provided research paper.
|Explore different ways to get a model to cite its sources.
|Perform retrieval-augmented generation (RAG) on a PostgreSQL database using PGVector.
|Implement a context-aware AI sales agent, SalesGPT, that can have natural sales conversations, interact with other systems, and use a product knowledge base to discuss a company's offerings.
|Build a hotel room search feature with self-querying retrieval, using a specific hotel recommendation dataset.
|Implement a SmartLLMChain, a self-critique chain that generates multiple output proposals, critiques them to find the best one, and then improves upon it to produce a final output.
|Query a large language model using the Tree of Thought technique.
|Analyze the source code of the Twitter algorithm with the help of GPT-4 and Activeloop's Deep Lake.
|Simulate multi-agent dialogues where the agents can utilize various tools.
|Simulate a two-player Dungeons & Dragons game, where a dialogue simulator class is used to coordinate the dialogue between the protagonist and the dungeon master.
|Create a simple Wikibase agent that utilizes SPARQL generation, with testing done on http://wikidata.org.
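Several of the multi-agent simulation entries above share one coordination pattern: each agent submits a bid and the highest bidder speaks next. A minimal framework-free sketch of that loop (the agent names, eagerness values, and bid heuristic are invented for illustration, not taken from the notebooks):

```python
import random

class BiddingAgent:
    """Toy agent that bids for the right to speak next."""

    def __init__(self, name, eagerness, seed=None):
        self.name = name
        self.eagerness = eagerness  # base desire to speak
        self.rng = random.Random(seed)

    def bid(self, history):
        # Bid higher when someone else spoke last (agents want to reply).
        bonus = 5 if history and not history[-1].startswith(self.name) else 0
        return self.eagerness + bonus + self.rng.random()

    def speak(self, history):
        # Placeholder for an LLM call that would generate the utterance.
        return f"{self.name}: my response to turn {len(history)}"

def run_debate(agents, turns):
    history = ["Moderator: opening question for the candidates"]
    for _ in range(turns):
        # The highest bidder wins the floor for this turn.
        speaker = max(agents, key=lambda a: a.bid(history))
        history.append(speaker.speak(history))
    return history

agents = [BiddingAgent("Candidate A", eagerness=3, seed=0),
          BiddingAgent("Candidate B", eagerness=6, seed=1)]
transcript = run_debate(agents, turns=4)
```

With the reply bonus dominating the base eagerness, the two agents end up alternating, which is the behavior the debate notebook's bidding prompt is designed to encourage.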
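The BabyAGI entries describe an agent that repeatedly executes a task, creates follow-up tasks from the result, and re-prioritizes the queue. A rough framework-free sketch of that loop with a stubbed stand-in for the LLM (the canned task strings are placeholders for real model output):

```python
from collections import deque

def stub_llm(prompt):
    # Placeholder for the task-creation chain: a real BabyAGI run would
    # ask an LLM for follow-up tasks given the objective and last result.
    if "Research" in prompt:
        return ["Summarize findings", "Draft outline"]
    return []

def run_babyagi(objective, first_task, max_steps=5):
    tasks, results = deque([first_task]), []
    while tasks and len(results) < max_steps:
        task = tasks.popleft()
        results.append(f"Executed: {task}")      # execution chain (stubbed)
        for new_task in stub_llm(task):          # task-creation chain
            if new_task not in tasks:
                tasks.append(new_task)
        # A real implementation would also re-prioritize `tasks` here
        # against `objective` before the next iteration.
    return results

log = run_babyagi("Write a report", "Research the topic")
```

The notebook versions replace all three stubbed steps with LLM chains and persist results in a vector store; the control flow is the same.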
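The HyDE entry hinges on one idea: embed a hypothetical answer rather than the raw query, then retrieve documents by similarity to that answer. A toy illustration using bag-of-words cosine similarity in place of a real embedding model (the hard-coded hypothetical answer stands in for an LLM generation):

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": word counts. A real system uses an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "the eiffel tower is located in paris france",
    "python is a popular programming language",
]
query = "where is the eiffel tower"
# The step an LLM would do: write a plausible (possibly wrong) answer.
hypothetical = "the eiffel tower is located in paris"
# Retrieve using the hypothetical answer's embedding, not the query's:
# answer-shaped text tends to sit closer to answer-shaped documents.
best = max(docs, key=lambda d: cosine(embed(hypothetical), embed(d)))
```

Even if the generated answer contains factual errors, its vocabulary and form usually match relevant documents better than the question does, which is why the technique helps.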
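The Tree of Thought entry can be pictured as a beam search over partial "thoughts", each scored by a self-evaluation step before expansion continues. A toy sketch on a number puzzle, with simplistic proposal and scoring functions standing in for the LLM calls the notebook makes:

```python
def propose(thought):
    # Stand-in for the LLM proposing candidate next "thoughts":
    # here, extend a partial digit sequence with 1, 2, or 3.
    return [thought + [d] for d in range(1, 4)]

def score(thought, target):
    # Stand-in for LLM self-evaluation: partial sums nearer the
    # target score higher.
    return -abs(target - sum(thought))

def tree_of_thought(target, depth=4, beam=2):
    frontier = [[]]
    for _ in range(depth):
        candidates = [t for thought in frontier for t in propose(thought)]
        # Keep only the `beam` best-rated thoughts (pruning step).
        frontier = sorted(candidates, key=lambda t: score(t, target),
                          reverse=True)[:beam]
        for t in frontier:
            if sum(t) == target:
                return t
    return None

solution = tree_of_thought(target=7)
```

Exploring several scored branches instead of committing to a single chain of thought is what distinguishes the technique from plain chain-of-thought prompting.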