get_batched_embeddings {LLMR}    R Documentation
Generate Embeddings in Batches
Description
A wrapper function that processes a list of texts in batches to generate embeddings, avoiding rate limits. This function calls call_llm_robust for each batch and stitches the results together.
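Conceptually, each batch is embedded independently and the per-batch results are row-bound into a single matrix. The sketch below illustrates that pattern only; embed_batch() is a hypothetical stand-in for the per-batch request that the package actually issues through call_llm_robust(), and is not the package's source code.

# Minimal sketch of the batching idea (not the actual implementation).
embed_batch <- function(batch) {
  matrix(rnorm(length(batch) * 3), nrow = length(batch))  # fake 3-d embeddings
}

texts   <- c("a", "b", "c", "d", "e")
batches <- split(texts, ceiling(seq_along(texts) / 2))    # batch_size = 2

rows <- lapply(batches, function(batch) {
  out <- tryCatch(embed_batch(batch), error = function(e) NULL)
  # On failure, keep placeholder rows so output stays aligned with the inputs.
  if (is.null(out)) matrix(NA_real_, nrow = length(batch), ncol = 3) else out
})
embeddings <- do.call(rbind, rows)  # one row per input text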
Usage
get_batched_embeddings(texts, embed_config, batch_size = 5, verbose = TRUE)
Arguments
texts         Character vector of texts to embed.
embed_config  An llm_config object configured for an embedding model.
batch_size    Integer. Number of texts to process in each batch. Default is 5.
verbose       Logical. If TRUE, prints progress messages. Default is TRUE.
Value
A numeric matrix where each row is an embedding vector for the corresponding text. If embedding fails for certain texts, those rows will be filled with NA values. The matrix will always have the same number of rows as the input texts. Returns NULL if no embeddings were successfully generated.
Examples
## Not run:
# Basic usage
texts <- c("Hello world", "How are you?", "Machine learning is great")
embed_cfg <- llm_config(
  provider = "voyage",
  model = "voyage-3-large",
  api_key = Sys.getenv("VOYAGE_KEY")
)

embeddings <- get_batched_embeddings(
  texts = texts,
  embed_config = embed_cfg,
  batch_size = 2
)
## End(Not run)
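Because rows for texts that failed to embed are filled with NA (see Value), a hedged follow-up sketch for separating successful embeddings from failed inputs, assuming texts and embeddings from the example above:

## Not run:
# Guard against a NULL return, then split successes from failures.
if (!is.null(embeddings)) {
  ok              <- stats::complete.cases(embeddings)  # rows without NA
  good_embeddings <- embeddings[ok, , drop = FALSE]
  failed_texts    <- texts[!ok]
}
## End(Not run)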
[Package LLMR version 0.3.0 Index]