call_llm_par {LLMR} | R Documentation |
Parallel LLM Processing with Tibble-Based Experiments (Core Engine)
Description
Processes experiments from a tibble where each row contains a config and message pair.
This is the core parallel processing function. Metadata columns are preserved.
This function requires setting up the parallel environment using setup_llm_parallel.
Usage
call_llm_par(
experiments,
simplify = TRUE,
tries = 10,
wait_seconds = 2,
backoff_factor = 3,
verbose = FALSE,
memoize = FALSE,
max_workers = NULL,
progress = FALSE,
json_output = NULL
)
Arguments
experiments |
A tibble/data.frame with required list-columns 'config' (llm_config objects) and 'messages' (character vector OR message list). |
simplify |
Logical. If TRUE, the columns of 'experiments' are cbind-ed onto the output data frame. Default is TRUE. |
tries |
Integer. Number of retries for each call. Default is 10. |
wait_seconds |
Numeric. Initial wait time (seconds) before retry. Default is 2. |
backoff_factor |
Numeric. Multiplier applied to the wait time after each failure. Default is 3. |
verbose |
Logical. If TRUE, prints progress and debug information. |
memoize |
Logical. If TRUE, enables caching for identical requests. |
max_workers |
Integer. Maximum number of parallel workers. If NULL, auto-detects. |
progress |
Logical. If TRUE, shows progress bar. |
json_output |
Deprecated. Raw JSON string is always included as raw_response_json. This parameter is kept for backward compatibility but has no effect. |
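As a rough illustration of the retry schedule implied by these defaults (a sketch only, not code from the package): the wait before each successive retry grows geometrically, multiplying wait_seconds by backoff_factor after every failure.

```r
# Illustrative sketch only -- not part of the LLMR package.
# With wait_seconds = 2 and backoff_factor = 3, the waits (in seconds)
# before the first four retries are:
waits <- 2 * 3^(0:3)
print(waits)
# -> 2 6 18 54
```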
Value
A tibble containing all original columns from experiments (metadata, config, messages), plus new columns: response_text, raw_response_json (the raw JSON string from the API), success, error_message, duration (in seconds).
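A minimal sketch of how the success and error_message columns can be used to separate successful and failed calls. The data below are invented to mimic the output shape; they are not produced by a real API call, and the error text is hypothetical.

```r
# Toy data mimicking the output shape of call_llm_par()
results <- data.frame(
  model_id      = c("model_a", "model_b"),
  success       = c(TRUE, FALSE),
  response_text = c("4", NA),
  error_message = c(NA, "HTTP 429: rate limited"),  # hypothetical error
  duration      = c(1.2, 20.1)
)

# Failed experiments can be isolated via the success flag:
results[!results$success, c("model_id", "error_message")]
```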
Parallel Workflow
All parallel functions require the future backend to be configured.
The recommended workflow is:
1. Call setup_llm_parallel() once at the start of your script.
2. Run one or more parallel experiments (e.g., call_llm_broadcast()).
3. Call reset_llm_parallel() at the end to restore sequential processing.
See Also
For setting up the environment: setup_llm_parallel, reset_llm_parallel.
For simpler, pre-configured parallel tasks: call_llm_broadcast, call_llm_sweep, call_llm_compare.
For creating experiment designs: build_factorial_experiments.
Examples
## Not run:
# Simple example: Compare two models on one prompt
cfg1 <- llm_config("openai", "gpt-4.1-nano", Sys.getenv("OPENAI_API_KEY"))
cfg2 <- llm_config("groq", "llama-3.3-70b-versatile", Sys.getenv("GROQ_API_KEY"))
experiments <- tibble::tibble(
model_id = c("gpt-4.1-nano", "groq-llama-3.3"),
config = list(cfg1, cfg2),
messages = "Count the number of the letter e in this word: Freundschaftsbeziehungen"
)
setup_llm_parallel(workers = 2)
results <- call_llm_par(experiments, progress = TRUE)
reset_llm_parallel()
print(results[, c("model_id", "response_text")])
## End(Not run)