โœ‚๏ธText splitters

When you want to deal with long pieces of text, it is necessary to split up that text into chunks. As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What “semantically related” means could depend on the type of text. This notebook showcases several ways to do that. At a high level, text splitters work as follows:

  1. Split the text up into small, semantically meaningful chunks (often sentences).

  2. Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).

  3. Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).

That means there are two different axes along which you can customize your text splitter (a minimal sketch follows this list):

  1. How the text is split

  2. How the chunk size is measured
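
As a rough illustration of those two axes, here is a minimal sketch using a plain CharacterTextSplitter, assuming the langchain-text-splitters package is installed; the sample text is just a placeholder standing in for a long document:

```python
# A minimal sketch, assuming the langchain-text-splitters package is installed.
from langchain_text_splitters import CharacterTextSplitter

# Placeholder text; in practice this would be a long document.
text = "First paragraph of a long document.\n\nSecond paragraph.\n\nThird paragraph."

text_splitter = CharacterTextSplitter(
    separator="\n\n",     # axis 1: how the text is split (here, on blank lines)
    chunk_size=50,        # axis 2: target chunk size ...
    chunk_overlap=10,     # ... with some overlap to keep context between chunks
    length_function=len,  # chunk size is measured in characters
)

chunks = text_splitter.split_text(text)
for chunk in chunks:
    print(repr(chunk))
```

Small splits are combined into chunks up to chunk_size, and each new chunk starts with some overlap from the previous one, mirroring steps 2 and 3 above.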

Recursive Character

This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""]. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.

  1. How the text is split: by list of characters

  2. How the chunk size is measured: by number of characters
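
A minimal sketch of this splitter, again assuming the langchain-text-splitters package is installed; the separators argument is shown explicitly here only for clarity, since the list below matches the default, and the sample text is made up:

```python
# A minimal sketch, assuming the langchain-text-splitters package is installed.
from langchain_text_splitters import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(
    separators=["\n\n", "\n", " ", ""],  # tried in order: paragraphs, lines, words, characters
    chunk_size=100,                      # measured in characters by default
    chunk_overlap=20,
    length_function=len,
)

some_text = (
    "LangChain splits text recursively.\n\n"
    "Paragraphs are kept together when possible, then sentences, then words."
)

docs = text_splitter.create_documents([some_text])
for doc in docs:
    print(doc.page_content)
```

Because the splitter only falls back to the next separator when a chunk is still too large, whole paragraphs survive intact whenever they fit within chunk_size.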
