chat_future {hellmer}    R Documentation
Process a batch of prompts in parallel
Description
Processes a batch of chat prompts using parallel workers.
Splits prompts into chunks for processing while maintaining state.
For sequential processing, use chat_sequential().
Usage
chat_future(chat_model = NULL, ...)
Arguments
chat_model
    An ellmer chat model object or function (e.g., chat_openai())
...
    Additional arguments passed to the underlying chat model (e.g., system_prompt)
Value
A batch object (S7 class) containing:
- prompts: Original input prompts
- responses: Raw response data for completed prompts
- completed: Number of successfully processed prompts
- state_path: Path where batch state is saved
- type_spec: Type specification used for structured data
- texts: Function to extract text responses or structured data
- chats: Function to extract chat objects
- progress: Function to get processing status
- batch: Function to process a batch of prompts
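As a hedged sketch of working with these fields: the function fields are called with $ (as in the Examples below), and, assuming standard S7 semantics, the data fields are properties reachable with @.

# Hedged sketch: inspecting a finished batch object (the @ accessors
# assume standard S7 property access; the $ calls match the Examples)
batch@completed    # number of prompts processed so far
batch@state_path   # file where intermediate state is saved
batch$progress()   # processing status summary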
Batch Method
This function provides access to the batch() method for parallel processing of prompts. See ?batch.future_chat for full details of the method and its parameters.
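As a hedged illustration of the state handling mentioned above: assuming batch() accepts a state_path argument (suggested by the state_path field in the Value section), rerunning the same call after an interruption should resume from the saved state rather than reprocessing completed prompts.

# Hypothetical resume workflow; state_path as a batch() parameter is an
# assumption inferred from the documented state_path field
batch <- chat$batch(
  prompts,
  state_path = "chat_state.rds"
)
# If interrupted, rerun the same call to pick up where processing stopped
batch$progress()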
Examples
# Create a parallel chat processor with an object
chat <- chat_future(chat_openai(system_prompt = "Reply concisely"))
# Or a function
chat <- chat_future(chat_openai, system_prompt = "Reply concisely, one sentence")
# Process a batch of prompts in parallel
batch <- chat$batch(
  list(
    "What is R?",
    "Explain base R versus tidyverse",
    "Explain vectors, lists, and data frames"
  ),
  chunk_size = 3
)
# Process batch with echo enabled (when progress is disabled)
batch <- chat$batch(
  list(
    "What is R?",
    "Explain base R versus tidyverse"
  ),
  progress = FALSE,
  echo = TRUE
)
# Check the progress if interrupted
batch$progress()
# Return the responses
batch$texts()
# Return the chat objects
batch$chats()
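As a further hedged sketch, the type_spec field above suggests batch() can return structured data; assuming it accepts a type_spec built with ellmer's type_object(), extraction might look like this.

# Hedged sketch: structured data extraction (type_spec as a batch()
# parameter is an assumption based on the type_spec field above)
library(ellmer)
chat <- chat_future(chat_openai, system_prompt = "Classify sentiment")
sentiment <- type_object(
  sentiment = type_string("One of: positive, negative, neutral")
)
batch <- chat$batch(
  list("I love R!", "Debugging at midnight is painful"),
  type_spec = sentiment
)
batch$texts()  # returns structured data instead of raw text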