Tokenizer#

class langchain_text_splitters.base.Tokenizer(chunk_overlap: int, tokens_per_chunk: int, decode: Callable[[List[int]], str], encode: Callable[[str], List[int]])[source]#

Tokenizer data class. Bundles the token encode/decode functions and the chunking parameters used when splitting text on token boundaries.

Attributes

  • chunk_overlap – Overlap in tokens between chunks.

  • tokens_per_chunk – Maximum number of tokens per chunk.

  • decode – Function to decode a list of token ids to a string.

  • encode – Function to encode a string to a list of token ids.

Methods

__init__(chunk_overlap, tokens_per_chunk, ...)

__init__(chunk_overlap: int, tokens_per_chunk: int, decode: Callable[[List[int]], str], encode: Callable[[str], List[int]]) β†’ None#
Parameters:
  • chunk_overlap (int) – Overlap in tokens between chunks.

  • tokens_per_chunk (int) – Maximum number of tokens per chunk.

  • decode (Callable[[List[int]], str]) – Function to decode a list of token ids to a string.

  • encode (Callable[[str], List[int]]) – Function to encode a string to a list of token ids.

Return type:

None
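To illustrate how the four fields cooperate, here is a minimal, self-contained sketch: a stand-in character-level codec plays the role of `encode`/`decode` (a real setup would typically plug in a tokenizer such as tiktoken's), and the loop mirrors the overlapping-window splitting that the module's `split_text_on_tokens` helper performs with a `Tokenizer`. The codec and the `split_on_tokens` function below are illustrative, not part of the library API.

```python
from typing import List

# Stand-in character-level codec: each "token id" is a character code point.
def encode(text: str) -> List[int]:
    return [ord(c) for c in text]

def decode(ids: List[int]) -> str:
    return "".join(chr(i) for i in ids)

# The two chunking parameters a Tokenizer carries.
tokens_per_chunk = 10
chunk_overlap = 3

def split_on_tokens(text: str) -> List[str]:
    """Sketch of the overlapping-window loop driven by these fields."""
    ids = encode(text)
    chunks: List[str] = []
    start = 0
    while start < len(ids):
        # Take up to tokens_per_chunk tokens, decode them back to text.
        window = ids[start : start + tokens_per_chunk]
        chunks.append(decode(window))
        if start + tokens_per_chunk >= len(ids):
            break
        # Step forward so consecutive chunks share chunk_overlap tokens.
        start += tokens_per_chunk - chunk_overlap
    return chunks

print(split_on_tokens("abcdefghijklmnop"))
# → ['abcdefghij', 'hijklmnop']  (adjacent chunks share the 3-token overlap "hij")
```

With a real subword tokenizer substituted for the character codec, the same loop yields chunks of at most `tokens_per_chunk` tokens with `chunk_overlap` tokens of shared context between neighbors.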