Timbr

Timbr integrates natural language inputs with Timbr's ontology-driven semantic layer. Building on Timbr's ontology capabilities, the SDK connects to Timbr data models and leverages semantic relationships and annotations, enabling users to query data in business-friendly language.

Timbr provides a pre-built SQL agent, TimbrSqlAgent, which covers the end-to-end flow: from user prompt, through semantic SQL generation and validation, to query execution and result analysis.
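
For example, a minimal end-to-end sketch (assuming TimbrSqlAgent accepts the same llm and connection parameters as the chains in the Usage section below; check the package reference for the exact signature):

from langchain_openai import ChatOpenAI
from langchain_timbr import TimbrSqlAgent

llm = ChatOpenAI(model="gpt-4o", temperature=0, openai_api_key="open-ai-api-key")

# Hypothetical parameter names, mirroring the chain constructors in the Usage section
agent = TimbrSqlAgent(
    llm=llm,
    url="https://your-timbr-app.com/",
    token="tk_XXXXXXXXXXXXXXXXXXXXXXXX",
    ontology="timbr_knowledge_graph",
)

answer = agent.invoke({"prompt": "What are the total sales for last month?"})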

For customization and partial usage, you can use LangChain chains and LangGraph nodes built around our five main tools (a LangGraph sketch follows the list):

  • IdentifyTimbrConceptChain & IdentifyConceptNode - Identify relevant concepts from user prompts
  • GenerateTimbrSqlChain & GenerateTimbrSqlNode - Generate SQL queries from natural language prompts
  • ValidateTimbrSqlChain & ValidateSemanticSqlNode - Validate SQL queries against Timbr knowledge graph schemas
  • ExecuteTimbrQueryChain & ExecuteSemanticQueryNode - Execute (semantic and regular) SQL queries against Timbr knowledge graph databases
  • GenerateAnswerChain & GenerateResponseNode - Generate human-readable answers based on a given prompt and data rows
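
As a sketch of node usage in a LangGraph graph (assuming the nodes are LangGraph-compatible callables that read the prompt from graph state and write their results back; the exact state keys and constructor signature may differ, so treat this as illustrative):

from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI
from langchain_timbr import ExecuteSemanticQueryNode

# Assumed state shape; the node may use different keys
class QueryState(TypedDict, total=False):
    prompt: str
    rows: list
    sql: str

llm = ChatOpenAI(model="gpt-4o", temperature=0, openai_api_key="open-ai-api-key")

# Hypothetical wiring: a one-node graph that executes a semantic query
execute_node = ExecuteSemanticQueryNode(
    llm=llm,
    url="https://your-timbr-app.com/",
    token="tk_XXXXXXXXXXXXXXXXXXXXXXXX",
    ontology="timbr_knowledge_graph",
)

graph = StateGraph(QueryState)
graph.add_node("execute_query", execute_node)
graph.add_edge(START, "execute_query")
graph.add_edge("execute_query", END)
app = graph.compile()

result = app.invoke({"prompt": "What are the total sales for last month?"})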

Additionally, langchain-timbr provides TimbrLlmConnector for manual integration with Timbr's semantic layer using LLM providers. This connector includes the following methods:

  • get_ontologies - List Timbr's semantic knowledge graphs
  • get_concepts - List selected knowledge graph ontology representation concepts
  • get_views - List selected knowledge graph ontology representation views
  • determine_concept - Identify relevant concepts from user prompts
  • generate_sql - Generate SQL queries from natural language prompts
  • validate_sql - Validate SQL queries against Timbr knowledge graph schemas
  • run_timbr_query - Execute (semantic and regular) SQL queries against Timbr knowledge graph databases
  • run_llm_query - Execute agent pipeline to determine concept, generate SQL, and run query from natural language prompt
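
For example, a hedged sketch of manual integration (the method names are listed above; the constructor arguments and method signatures here are assumptions mirroring the chain parameters in the Usage section):

from langchain_openai import ChatOpenAI
from langchain_timbr import TimbrLlmConnector

llm = ChatOpenAI(model="gpt-4o", temperature=0, openai_api_key="open-ai-api-key")

# Assumed constructor arguments; consult the API reference for the exact signature
connector = TimbrLlmConnector(
    llm=llm,
    url="https://your-timbr-app.com/",
    token="tk_XXXXXXXXXXXXXXXXXXXXXXXX",
    ontology="timbr_knowledge_graph",
)

ontologies = connector.get_ontologies()   # list available knowledge graphs
concepts = connector.get_concepts()       # list concepts in the selected ontology
sql = connector.generate_sql("What are the total sales for last month?")  # assumed positional prompt argument
rows = connector.run_timbr_query(sql)     # assumed positional query argument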

Quickstart

Installation

Install the package

pip install langchain-timbr

Optional: Install with selected LLM provider

Choose one or more of: openai, anthropic, google, azure_openai, snowflake, databricks (or 'all')

pip install 'langchain-timbr[<your selected providers, separated by comma without spaces>]'
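
For example, to install with OpenAI and Anthropic support:

pip install 'langchain-timbr[openai,anthropic]'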

Configuration

Starting from langchain-timbr v2.0.0, all chains, agents, and nodes support optional environment-based configuration. You can set the following environment variables to provide default values and simplify setup for the provided tools:

Timbr Connection Parameters

  • TIMBR_URL: Default Timbr server URL
  • TIMBR_TOKEN: Default Timbr authentication token
  • TIMBR_ONTOLOGY: Default ontology/knowledge graph name

When these environment variables are set, the corresponding parameters (url, token, ontology) become optional in all chain and agent constructors and will use the environment values as defaults.

LLM Configuration Parameters

  • LLM_TYPE: The type of LLM provider (one of langchain_timbr LlmTypes enum: 'openai-chat', 'anthropic-chat', 'chat-google-generative-ai', 'azure-openai-chat', 'snowflake-cortex', 'chat-databricks')
  • LLM_API_KEY: The API key for authenticating with the LLM provider
  • LLM_MODEL: The model name or deployment to use
  • LLM_TEMPERATURE: Temperature setting for the LLM
  • LLM_ADDITIONAL_PARAMS: Additional parameters as dict or JSON string

When LLM environment variables are set, the llm parameter becomes optional and will use the LlmWrapper with environment configuration.

Example environment setup:

# Timbr connection
export TIMBR_URL="https://your-timbr-app.com/"
export TIMBR_TOKEN="tk_XXXXXXXXXXXXXXXXXXXXXXXX"
export TIMBR_ONTOLOGY="timbr_knowledge_graph"

# LLM configuration
export LLM_TYPE="openai-chat"
export LLM_API_KEY="your-openai-api-key"
export LLM_MODEL="gpt-4o"
export LLM_TEMPERATURE="0.1"
export LLM_ADDITIONAL_PARAMS='{"max_tokens": 1000}'
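
With these variables set, chains can be constructed without explicit connection or LLM arguments, since url, token, ontology, and llm all fall back to the environment values (a minimal sketch based on the defaults described above):

from langchain_timbr import ExecuteTimbrQueryChain

# All connection and LLM parameters are read from the environment
execute_timbr_query_chain = ExecuteTimbrQueryChain()
result = execute_timbr_query_chain.invoke({"prompt": "What are the total sales for last month?"})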

Usage

Import and use your intended chain/node, or use TimbrLlmConnector to integrate manually with Timbr's semantic layer. For a complete working agent example, see the Timbr tool page.

ExecuteTimbrQueryChain example

from langchain_openai import ChatOpenAI
from langchain_timbr import ExecuteTimbrQueryChain

# You can use the standard LangChain ChatOpenAI/ChatAnthropic models
# or any other LLM based on langchain_core.language_models.chat_models.BaseChatModel
llm = ChatOpenAI(model="gpt-4o", temperature=0, openai_api_key='open-ai-api-key')

# Optional alternative: use Timbr's LlmWrapper, which provides generic connections to different LLM providers
from langchain_timbr import LlmWrapper, LlmTypes
llm = LlmWrapper(llm_type=LlmTypes.OpenAI, api_key="open-ai-api-key", model="gpt-4o")

execute_timbr_query_chain = ExecuteTimbrQueryChain(
    llm=llm,
    url="https://your-timbr-app.com/",
    token="tk_XXXXXXXXXXXXXXXXXXXXXXXX",
    ontology="timbr_knowledge_graph",
    schema="dtimbr",                    # optional
    concept="Sales",                    # optional
    concepts_list=["Sales", "Orders"],  # optional
    views_list=["sales_view"],          # optional
    note="We only need sums",           # optional
    retries=3,                          # optional
    should_validate_sql=True,           # optional
)

result = execute_timbr_query_chain.invoke({"prompt": "What are the total sales for last month?"})
rows = result["rows"]
sql = result["sql"]
concept = result["concept"]
schema = result["schema"]
error = result.get("error", None)

usage_metadata = result.get("execute_timbr_usage_metadata", {})
determine_concept_usage = usage_metadata.get('determine_concept', {})
generate_sql_usage = usage_metadata.get('generate_sql', {})
# Each usage_metadata item contains:
# * 'approximate': Estimated token count calculated before invoking the LLM
# * 'input_tokens'/'output_tokens'/'total_tokens'/etc.: Actual token usage metrics returned by the LLM

Multiple chains using SequentialChain example

from langchain.chains import SequentialChain
from langchain_timbr import ExecuteTimbrQueryChain, GenerateAnswerChain
from langchain_openai import ChatOpenAI

# You can use the standard LangChain ChatOpenAI/ChatAnthropic models
# or any other LLM based on langchain_core.language_models.chat_models.BaseChatModel
llm = ChatOpenAI(model="gpt-4o", temperature=0, openai_api_key='open-ai-api-key')

# Optional alternative: Use Timbr's LlmWrapper, which provides generic connections to different LLM providers
from langchain_timbr import LlmWrapper, LlmTypes
llm = LlmWrapper(llm_type=LlmTypes.OpenAI, api_key="open-ai-api-key", model="gpt-4o")

execute_timbr_query_chain = ExecuteTimbrQueryChain(
    llm=llm,
    url='https://your-timbr-app.com/',
    token='tk_XXXXXXXXXXXXXXXXXXXXXXXX',
    ontology='timbr_knowledge_graph',
)

generate_answer_chain = GenerateAnswerChain(
    llm=llm,
    url='https://your-timbr-app.com/',
    token='tk_XXXXXXXXXXXXXXXXXXXXXXXX',
)

pipeline = SequentialChain(
    chains=[execute_timbr_query_chain, generate_answer_chain],
    input_variables=["prompt"],
    output_variables=["answer", "sql"],
)

result = pipeline.invoke({"prompt": "What are the total sales for last month?"})
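
The pipeline result exposes the declared output variables:

answer = result["answer"]
sql = result["sql"]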

Additional Resources