SentenceTransformersTokenTextSplitter#
- class langchain_text_splitters.sentence_transformers.SentenceTransformersTokenTextSplitter(chunk_overlap: int = 50, model_name: str = 'sentence-transformers/all-mpnet-base-v2', tokens_per_chunk: int | None = None, **kwargs: Any)[source]#
Split text into tokens using the tokenizer of a sentence-transformers model.
Create a new TextSplitter.
Methods
__init__([chunk_overlap, model_name, ...]): Create a new TextSplitter.
atransform_documents(documents, **kwargs): Asynchronously transform a list of documents.
count_tokens(*, text): Count the number of tokens in the given text.
create_documents(texts[, metadatas]): Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs): Text splitter that uses a HuggingFace tokenizer to count length.
from_tiktoken_encoder([encoding_name, ...]): Text splitter that uses the tiktoken encoder to count length.
split_documents(documents): Split documents.
split_text(text): Split the input text into smaller chunks on token boundaries.
transform_documents(documents, **kwargs): Transform a sequence of documents by splitting them.
- Parameters:
chunk_overlap (int) – Number of tokens shared between consecutive chunks.
model_name (str) – Name of the sentence-transformers model whose tokenizer is used for splitting.
tokens_per_chunk (Optional[int]) – Maximum number of tokens per chunk. When None, defaults to the model's maximum sequence length.
kwargs (Any) – Additional keyword arguments passed through to the TextSplitter base class.
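A minimal usage sketch (assumes the optional sentence-transformers package is installed; the model weights are downloaded on first use):

```python
from langchain_text_splitters import SentenceTransformersTokenTextSplitter

splitter = SentenceTransformersTokenTextSplitter(
    chunk_overlap=50,
    model_name="sentence-transformers/all-mpnet-base-v2",
    tokens_per_chunk=None,  # None falls back to the model's max sequence length
)

chunks = splitter.split_text("Your long document text goes here ...")
print(len(chunks))
```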
- __init__(chunk_overlap: int = 50, model_name: str = 'sentence-transformers/all-mpnet-base-v2', tokens_per_chunk: int | None = None, **kwargs: Any) → None [source]#
Create a new TextSplitter.
- Parameters:
chunk_overlap (int)
model_name (str)
tokens_per_chunk (int | None)
kwargs (Any)
- Return type:
None
- async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document] #
Asynchronously transform a list of documents.
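A minimal sketch using asyncio, with a single hypothetical input document:

```python
import asyncio

from langchain_core.documents import Document
from langchain_text_splitters import SentenceTransformersTokenTextSplitter

async def main() -> None:
    splitter = SentenceTransformersTokenTextSplitter()
    docs = [Document(page_content="Some long text ...", metadata={"source": "a.txt"})]
    # Splits each document; chunks inherit the source document's metadata.
    split_docs = await splitter.atransform_documents(docs)
    print(len(split_docs))

asyncio.run(main())
```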
- count_tokens(*, text: str) → int [source]#
Counts the number of tokens in the given text.
This method encodes the input text using a private _encode method and calculates the total number of tokens in the encoded result.
- Parameters:
text (str) – The input text for which the token count is calculated.
- Returns:
The number of tokens in the encoded text.
- Return type:
int
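Note that text is a keyword-only argument. A minimal sketch:

```python
from langchain_text_splitters import SentenceTransformersTokenTextSplitter

splitter = SentenceTransformersTokenTextSplitter()
# `text` is keyword-only and must be passed by name; the count reflects
# the tokenizer of the configured sentence-transformers model.
n_tokens = splitter.count_tokens(text="How many tokens does this sentence use?")
print(n_tokens)
```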
- create_documents(texts: List[str], metadatas: List[dict] | None = None) → List[Document] #
Create documents from a list of texts.
- Parameters:
texts (List[str])
metadatas (List[dict] | None)
- Return type:
List[Document]
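A minimal sketch; the optional metadatas list is aligned with texts, and each produced chunk carries the metadata of the text it came from:

```python
from langchain_text_splitters import SentenceTransformersTokenTextSplitter

splitter = SentenceTransformersTokenTextSplitter()

texts = ["First source text ...", "Second source text ..."]
metadatas = [{"source": "a.txt"}, {"source": "b.txt"}]

docs = splitter.create_documents(texts, metadatas=metadatas)
for doc in docs:
    print(doc.metadata, doc.page_content[:40])
```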
- classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter #
Text splitter that uses HuggingFace tokenizer to count length.
- Parameters:
tokenizer (Any)
kwargs (Any)
- Return type:
TextSplitter
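This classmethod is inherited from TextSplitter. A sketch, assuming the transformers package is installed (bert-base-uncased is used only as an illustrative tokenizer):

```python
from transformers import AutoTokenizer

from langchain_text_splitters import SentenceTransformersTokenTextSplitter

hf_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Chunk length is measured by encoding text with the given HuggingFace
# tokenizer rather than by character count.
splitter = SentenceTransformersTokenTextSplitter.from_huggingface_tokenizer(
    hf_tokenizer
)
```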
- classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: str | None = None, allowed_special: Literal['all'] | AbstractSet[str] = {}, disallowed_special: Literal['all'] | Collection[str] = 'all', **kwargs: Any) → TS #
Text splitter that uses tiktoken encoder to count length.
- Parameters:
encoding_name (str)
model_name (str | None)
allowed_special (Literal['all'] | ~typing.AbstractSet[str])
disallowed_special (Literal['all'] | ~typing.Collection[str])
kwargs (Any)
- Return type:
TS
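Also inherited from TextSplitter. A sketch, assuming the tiktoken package is installed:

```python
from langchain_text_splitters import SentenceTransformersTokenTextSplitter

# Chunk length is measured with the named tiktoken encoding.
splitter = SentenceTransformersTokenTextSplitter.from_tiktoken_encoder(
    encoding_name="gpt2",
)
```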
- split_text(text: str) → List[str] [source]#
Split the input text into smaller chunks on token boundaries.
This method encodes the input text with a private _encode method, strips the start and stop token IDs from the encoded result, and returns the processed segments as a list of strings.
- Parameters:
text (str) – The input text to be split.
- Returns:
A list of string components derived from the input text after encoding and processing.
- Return type:
List[str]
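A sketch showing the overlap behavior with deliberately small, hypothetical settings:

```python
from langchain_text_splitters import SentenceTransformersTokenTextSplitter

splitter = SentenceTransformersTokenTextSplitter(
    tokens_per_chunk=64,  # small window chosen for illustration
    chunk_overlap=8,
)

text = " ".join(f"This is sentence number {i}." for i in range(200))
chunks = splitter.split_text(text)
# Consecutive chunks share roughly `chunk_overlap` tokens.
print(len(chunks))
print(chunks[0][:60])
```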
Examples using SentenceTransformersTokenTextSplitter