After partitioning, chunking, and summarizing, the embedding step creates arrays of numbers known as vectors that represent the text Unstructured has extracted. These vector embeddings are generated by an embedding model from an embedding provider and are stored alongside the text itself, typically in a vector store. When a user queries a retrieval-augmented generation (RAG) application, the application can run a similarity search against that vector store and return the items whose embeddings are closest to the user's query; a minimal sketch of such a search follows the example element below. Here is an example of a document element generated by Unstructured, along with its vector embeddings generated by the embedding model sentence-transformers/all-MiniLM-L6-v2 on Hugging Face:
{
    "type": "Title",
    "element_id": "fdbf5369-4485-453b-9701-1bb42c83b00b",
    "text": "THE CONSTITUTION of the United States",
    "metadata": {
        "filetype": "application/pdf",
        "languages": [
            "eng"
        ],
        "page_number": 1,
        "filename": "constitution.pdf",
        "data_source": {
            "record_locator": {
                "path": "/input/constitution.pdf"
            },
            "date_created": "1723069423.0536132",
            "date_modified": "1723069423.055078",
            "date_processed": "1725666244.571788",
            "permissions_data": [
                { 
                    "mode": 33188
                }
            ]
        }
    },
    "embeddings": [
        -0.06138836592435837,
        0.08634615689516068,
        -0.019471267238259315,
        "<full-results-omitted-for-brevity>",
        0.0895417109131813,
        0.05604064092040062,
        0.01376157347112894
    ]
}
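For illustration, here is a minimal sketch of that similarity search, assuming the workflow's elements have been saved to a local JSON file; the file path, query text, and variable names are illustrative, and the sketch reuses the same sentence-transformers/all-MiniLM-L6-v2 model to embed the query and ranks elements by cosine similarity:

import json
import math

from sentence_transformers import SentenceTransformer

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hypothetical path: a JSON file of Unstructured elements like the one shown above.
with open("constitution-elements.json") as f:
    elements = json.load(f)

# Embed the user's query with the same model that produced the element embeddings.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
query_embedding = model.encode("Who has the power to declare war?").tolist()

# Rank elements by how close their embeddings are to the query embedding.
ranked = sorted(
    (e for e in elements if e.get("embeddings")),
    key=lambda e: cosine_similarity(e["embeddings"], query_embedding),
    reverse=True,
)
print(ranked[0]["text"])  # the element whose embedding is closest to the query

In production, a vector database typically performs this ranking for you; the sketch only shows the comparison that happens under the hood.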

Generate embeddings

To generate embeddings, choose one of the available embedding providers and models in the Select Embedding Model section of an Embedder node in a workflow. When choosing an embedding model, note the number of dimensions listed next to each model. This number must match the number of dimensions in the embeddings field of your destination connector's table, collection, or index; a sketch of such a dimension check appears below.
You can change a workflow’s preconfigured provider only through Custom workflow settings.
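For example, here is a minimal sketch of a dimension check you could run over a workflow's JSON output before writing to a destination; the file path and the expected_dimensions value are illustrative and should match your own destination's configuration:

import json

# Hypothetical values: the dimension your destination table, collection, or
# index was created with, and the path to the workflow's JSON output.
expected_dimensions = 3072  # e.g. Text Embedding 3 Large
with open("output-elements.json") as f:
    elements = json.load(f)

# Every element's embeddings vector must have exactly the expected length,
# or writes through the destination connector will fail.
for element in elements:
    embeddings = element.get("embeddings")
    if embeddings and len(embeddings) != expected_dimensions:
        raise ValueError(
            f"Element {element['element_id']} has {len(embeddings)} dimensions; "
            f"the destination expects {expected_dimensions}."
        )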

Chunk sizing and embedding models

If your workflow has an Embedder node, its Chunker node settings must stay within the selected embedding model's token limits; exceeding these limits will cause workflow failures. Set the Chunker node's Max Characters to a value at or below Unstructured's recommended maximum chunk size for your selected embedding model, as listed in the last column of the following table. A sketch of how that recommended value is derived follows the table.
Embedding model                    Dimensions    Tokens    Chunker Max Characters*

Amazon Bedrock
  Cohere Embed English                   1024       512       1792
  Cohere Embed Multilingual              1024       512       1792
  Titan Embeddings G1 - Text             1536      8192      28672
  Titan Multimodal Embeddings G1         1024       256        896
  Titan Text Embeddings V2               1024      8192      28672

Azure OpenAI
  Text Embedding 3 Large                 3072      8192      28672
  Text Embedding 3 Small                 1536      8192      28672
  Text Embedding Ada 002                 1536      8192      28672

Together AI
  M2-Bert 80M 32K Retrieval               768      8192      28672

Voyage AI
  Voyage 3                               1024     32000     112000
  Voyage 3 Large                         1024     32000     112000
  Voyage 3 Lite                           512     32000     112000
  Voyage Code 2                          1536     16000      56000
  Voyage Code 3                          1024     32000     112000
  Voyage Finance 2                       1024     32000     112000
  Voyage Law 2                           1024     16000      56000
  Voyage Multimodal 3                    1024     32000     112000
* This is an approximate value, determined by multiplying the embedding model’s token limit by 3.5.
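The following minimal sketch shows how that rule of thumb translates into code and how you might check a planned Max Characters value against it; the token limits are transcribed from the table above, and the max_characters and chosen_model values are illustrative:

# Approximate recommended Chunker Max Characters, following the footnote above:
# the embedding model's token limit multiplied by 3.5.
def recommended_max_characters(token_limit: int) -> int:
    return int(token_limit * 3.5)

# A few token limits transcribed from the table above.
token_limits = {
    "Titan Text Embeddings V2": 8192,
    "Text Embedding 3 Large": 8192,
    "Voyage 3": 32000,
    "Voyage Law 2": 16000,
}

# Hypothetical Chunker setting to validate against the chosen model.
max_characters = 30000
chosen_model = "Titan Text Embeddings V2"

limit = recommended_max_characters(token_limits[chosen_model])
if max_characters > limit:
    print(f"Max Characters {max_characters} exceeds the recommended {limit} for {chosen_model}.")
else:
    print(f"Max Characters {max_characters} is within the recommended {limit}.")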