How to split text by tokens

Language models have a token limit, and you should not exceed it. When you split your text into chunks, it is therefore a good idea to count the number of tokens. There are many tokenizers; when you count tokens in your text, use the same tokenizer that the language model uses.
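For example, for an OpenAI chat model you could count tokens with tiktoken before deciding how to chunk (a minimal sketch; the model name and sample string are only illustrative):

import tiktoken

# Load the encoding used by the target model and count the tokens in a string.
encoding = tiktoken.encoding_for_model("gpt-4")
num_tokens = len(encoding.encode("How many tokens will this sentence use?"))
print(num_tokens)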

tiktoken

note

tiktoken is a fast BPE tokenizer created by OpenAI.

We can use tiktoken to estimate the number of tokens used. It will probably be most accurate for OpenAI models, since tiktoken is the tokenizer they use.

  1. How the text is split: by the character passed in.
  2. How the chunk size is measured: by tiktoken tokenizer.

CharacterTextSplitter, RecursiveCharacterTextSplitter, and TokenTextSplitter can be used with tiktoken directly.

%pip install --upgrade --quiet langchain-text-splitters tiktoken
from langchain_text_splitters import CharacterTextSplitter

# This is a long document we can split up.
with open("state_of_the_union.txt") as f:
    state_of_the_union = f.read()
API Reference: CharacterTextSplitter

To split with a CharacterTextSplitter and then merge chunks with tiktoken, use its .from_tiktoken_encoder() method. Note that splits from this method can be larger than the chunk size measured by the tiktoken tokenizer.

The .from_tiktoken_encoder() method takes either an encoding_name argument (e.g. cl100k_base) or a model_name (e.g. gpt-4). All additional arguments, such as chunk_size, chunk_overlap, and separators, are used to instantiate the CharacterTextSplitter:

text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="cl100k_base", chunk_size=100, chunk_overlap=0
)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.  

Last year COVID-19 kept us apart. This year we are finally together again.

Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.

With a duty to one another to the American people to the Constitution.

To enforce a hard constraint on the chunk size, we can use RecursiveCharacterTextSplitter.from_tiktoken_encoder, where each split is recursively split again if it is still larger than the chunk size:

from langchain_text_splitters import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    model_name="gpt-4",
    chunk_size=100,
    chunk_overlap=0,
)
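
The splitter can then be used just as above, for example:

texts = text_splitter.split_text(state_of_the_union)
print(texts[0])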

We can also load a TokenTextSplitter, which works with tiktoken directly and will ensure each split is smaller than the chunk size.

from langchain_text_splitters import TokenTextSplitter

text_splitter = TokenTextSplitter(chunk_size=10, chunk_overlap=0)

texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
API Reference: TokenTextSplitter
Madam Speaker, Madam Vice President, our

Some written languages (e.g. Chinese and Japanese) have characters that encode to two or more tokens. Using the TokenTextSplitter directly can split the tokens for a single character between two chunks, causing malformed Unicode characters. Use RecursiveCharacterTextSplitter.from_tiktoken_encoder or CharacterTextSplitter.from_tiktoken_encoder to ensure chunks contain valid Unicode strings.
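
For example, a tiktoken-aware splitter from above could be applied to CJK text like this (a minimal sketch; the Japanese sample string is only illustrative):

# Chunks are merged on character boundaries, so multi-token characters are never cut in half.
cjk_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="cl100k_base", chunk_size=100, chunk_overlap=0
)
cjk_chunks = cjk_splitter.split_text("こんにけは。これは長い文章をトークン数で分割するテストです。")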

spaCy

note

spaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython.

LangChain implements splitters based on the spaCy tokenizer.

  1. How the text is split: by spaCy tokenizer.
  2. How the chunk size is measured: by number of characters.
%pip install --upgrade --quiet  spacy
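Depending on your environment, you may also need to download the English pipeline that SpacyTextSplitter loads by default (en_core_web_sm, unless a different pipeline argument is passed):

!python -m spacy download en_core_web_sm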
# This is a long document we can split up.
with open("state_of_the_union.txt") as f:
    state_of_the_union = f.read()
from langchain_text_splitters import SpacyTextSplitter

text_splitter = SpacyTextSplitter(chunk_size=1000)

texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
API Reference: SpacyTextSplitter
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.

Members of Congress and the Cabinet.

Justices of the Supreme Court.

My fellow Americans.



Last year COVID-19 kept us apart.

This year we are finally together again.



Tonight, we meet as Democrats Republicans and Independents.

But most importantly as Americans.



With a duty to one another to the American people to the Constitution.



And with an unwavering resolve that freedom will always triumph over tyranny.



Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways.

But he badly miscalculated.



He thought he could roll into Ukraine and the world would roll over.

Instead he met a wall of strength he never imagined.



He met the Ukrainian people.



From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.

SentenceTransformers

The SentenceTransformersTokenTextSplitter is a specialized text splitter for use with sentence-transformer models. Its default behaviour is to split the text into chunks that fit the token window of the sentence-transformer model you would like to use.

To split text and constrain token counts according to the sentence-transformers tokenizer, instantiate a SentenceTransformersTokenTextSplitter. You can optionally specify:

  • chunk_overlap: integer count of token overlap;
  • model_name: sentence-transformer model name, defaulting to "sentence-transformers/all-mpnet-base-v2";
  • tokens_per_chunk: desired token count per chunk.
from langchain_text_splitters import SentenceTransformersTokenTextSplitter

splitter = SentenceTransformersTokenTextSplitter(chunk_overlap=0)
text = "Lorem "

count_start_and_stop_tokens = 2
text_token_count = splitter.count_tokens(text=text) - count_start_and_stop_tokens
print(text_token_count)
2
token_multiplier = splitter.maximum_tokens_per_chunk // text_token_count + 1

# `text_to_split` does not fit in a single chunk
text_to_split = text * token_multiplier

print(f"tokens in text to split: {splitter.count_tokens(text=text_to_split)}")
tokens in text to split: 514
text_chunks = splitter.split_text(text=text_to_split)

print(text_chunks[1])
lorem
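
The optional arguments listed above can also be set explicitly, for example to use a smaller token window (a sketch; the values are illustrative and the model name is simply the default spelled out):

splitter = SentenceTransformersTokenTextSplitter(
    model_name="sentence-transformers/all-mpnet-base-v2",
    tokens_per_chunk=256,
    chunk_overlap=20,
)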

NLTK

note

The Natural Language Toolkit, or more commonly NLTK, is a suite of libraries and programs for symbolic and statistical natural language processing (NLP) for English written in the Python programming language.

Rather than just splitting on "\n\n", we can use NLTK to split based on NLTK tokenizers.

  1. How the text is split: by NLTK tokenizer.
  2. How the chunk size is measured: by number of characters.
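
Along with the nltk package itself, NLTK's sentence tokenizer needs the Punkt models; if they are not already present, they can be downloaded once (a sketch; on recent NLTK releases the resource may instead be named punkt_tab):

import nltk

nltk.download("punkt")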
# pip install nltk
# This is a long document we can split up.
with open("state_of_the_union.txt") as f:
    state_of_the_union = f.read()
from langchain_text_splitters import NLTKTextSplitter

text_splitter = NLTKTextSplitter(chunk_size=1000)
API Reference: NLTKTextSplitter
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.

Members of Congress and the Cabinet.

Justices of the Supreme Court.

My fellow Americans.

Last year COVID-19 kept us apart.

This year we are finally together again.

Tonight, we meet as Democrats Republicans and Independents.

But most importantly as Americans.

With a duty to one another to the American people to the Constitution.

And with an unwavering resolve that freedom will always triumph over tyranny.

Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways.

But he badly miscalculated.

He thought he could roll into Ukraine and the world would roll over.

Instead he met a wall of strength he never imagined.

He met the Ukrainian people.

From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.

Groups of citizens blocking tanks with their bodies.

KoNLPy

note

KoNLPy: Korean NLP in Python is a Python package for natural language processing (NLP) of the Korean language.

Token splitting involves the segmentation of text into smaller, more manageable units called tokens. These tokens are often words, phrases, symbols, or other meaningful elements crucial for further processing and analysis. In languages like English, token splitting typically involves separating words by spaces and punctuation marks. The effectiveness of token splitting largely depends on the tokenizer's understanding of the language structure, ensuring the generation of meaningful tokens. Since tokenizers designed for the English language are not equipped to understand the unique semantic structures of other languages, such as Korean, they cannot be effectively used for Korean language processing.

Token splitting for Korean with KoNLPy's Kkma Analyzer

In the case of Korean text, KoNLPy includes a morphological analyzer called Kkma (Korean Knowledge Morpheme Analyzer). Kkma provides detailed morphological analysis of Korean text. It breaks down sentences into words and words into their respective morphemes, identifying parts of speech for each token. It can segment a block of text into individual sentences, which is particularly useful for processing long texts.

Usage Considerations

While Kkma is renowned for its detailed analysis, it is important to note that this precision may impact processing speed. Thus, Kkma is best suited for applications where analytical depth is prioritized over rapid text processing.

# pip install konlpy
# This is a long Korean document that we want to split up into its component sentences.
with open("./your_korean_doc.txt") as f:
    korean_document = f.read()
from langchain_text_splitters import KonlpyTextSplitter

text_splitter = KonlpyTextSplitter()
API Reference: KonlpyTextSplitter
texts = text_splitter.split_text(korean_document)
# The sentences are split with "\n\n" characters.
print(texts[0])
좘ν–₯μ „ μ˜›λ‚ μ— 남원에 이 λ„λ Ήμ΄λΌλŠ” λ²ΌμŠ¬μ•„μΉ˜ 아듀이 μžˆμ—ˆλ‹€.

그의 μ™Έλͺ¨λŠ” λΉ›λ‚˜λŠ” λ‹¬μ²˜λŸΌ μž˜μƒκ²Όκ³ , 그의 학식과 κΈ°μ˜ˆλŠ” 남보닀 뛰어났닀.

ν•œνŽΈ, 이 λ§ˆμ„μ—λŠ” 좘ν–₯μ΄λΌλŠ” μ ˆμ„Έ 가인이 μ‚΄κ³  μžˆμ—ˆλ‹€.

좘 ν–₯의 아름닀움은 꽃과 κ°™μ•„ λ§ˆμ„ μ‚¬λžŒλ“€ λ‘œλΆ€ν„° λ§Žμ€ μ‚¬λž‘μ„ λ°›μ•˜λ‹€.

μ–΄λŠ λ΄„λ‚ , 도령은 μΉœκ΅¬λ“€κ³Ό λ†€λŸ¬ λ‚˜κ°”λ‹€κ°€ 좘 ν–₯을 만 λ‚˜ 첫 λˆˆμ— λ°˜ν•˜κ³  λ§μ•˜λ‹€.

두 μ‚¬λžŒμ€ μ„œλ‘œ μ‚¬λž‘ν•˜κ²Œ λ˜μ—ˆκ³ , 이내 λΉ„λ°€μŠ€λŸ¬μš΄ μ‚¬λž‘μ˜ λ§Ήμ„Έλ₯Ό λ‚˜λˆ„μ—ˆλ‹€.

ν•˜μ§€λ§Œ 쒋은 날듀은 μ˜€λž˜κ°€μ§€ μ•Šμ•˜λ‹€.

λ„λ Ήμ˜ 아버지가 λ‹€λ₯Έ 곳으둜 전근을 κ°€κ²Œ λ˜μ–΄ 도령도 λ– λ‚˜ μ•Όλ§Œ ν–ˆλ‹€.

μ΄λ³„μ˜ μ•„ν”” μ†μ—μ„œλ„, 두 μ‚¬λžŒμ€ 재회λ₯Ό κΈ°μ•½ν•˜λ©° μ„œλ‘œλ₯Ό λ―Ώκ³  κΈ°λ‹€λ¦¬κΈ°λ‘œ ν–ˆλ‹€.

κ·ΈλŸ¬λ‚˜ μƒˆλ‘œ λΆ€μž„ν•œ κ΄€μ•„μ˜ μ‚¬λ˜κ°€ 좘 ν–₯의 아름닀움에 μš•μ‹¬μ„ λ‚΄ μ–΄ κ·Έλ…€μ—κ²Œ κ°•μš”λ₯Ό μ‹œμž‘ν–ˆλ‹€.

좘 ν–₯ 은 도령에 λŒ€ν•œ μžμ‹ μ˜ μ‚¬λž‘μ„ 지킀기 μœ„ν•΄, μ‚¬λ˜μ˜ μš”κ΅¬λ₯Ό λ‹¨ν˜Ένžˆ κ±°μ ˆν–ˆλ‹€.

이에 λΆ„λ…Έν•œ μ‚¬λ˜λŠ” 좘 ν–₯을 감μ˜₯에 가두고 ν˜Ήλ…ν•œ ν˜•λ²Œμ„ λ‚΄λ Έλ‹€.

μ΄μ•ΌκΈ°λŠ” 이 도령이 κ³ μœ„ 관직에 였λ₯Έ ν›„, 좘 ν–₯을 ꡬ해 λ‚΄λŠ” κ²ƒμœΌλ‘œ λλ‚œλ‹€.

두 μ‚¬λžŒμ€ 였랜 μ‹œλ ¨ 끝에 λ‹€μ‹œ λ§Œλ‚˜κ²Œ 되고, κ·Έλ“€μ˜ μ‚¬λž‘μ€ 온 세상에 μ „ν•΄ 지며 ν›„μ„Έμ—κΉŒμ§€ 이어진닀.

- 좘ν–₯μ „ (The Tale of Chunhyang)

Hugging Face tokenizer

Hugging Face has many tokenizers.

We can use a Hugging Face tokenizer, GPT2TokenizerFast, to count the text length in tokens.

  1. How the text is split: by the character passed in.
  2. How the chunk size is measured: by number of tokens calculated by the Hugging Face tokenizer.
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
# This is a long document we can split up.
with open("state_of_the_union.txt") as f:
    state_of_the_union = f.read()
from langchain_text_splitters import CharacterTextSplitter
API Reference: CharacterTextSplitter
text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(
    tokenizer, chunk_size=100, chunk_overlap=0
)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.  

Last year COVID-19 kept us apart. This year we are finally together again.

Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.

With a duty to one another to the American people to the Constitution.
