call_llm_sweep {LLMR}	R Documentation
Parallel API calls: Parameter Sweep - Vary One Parameter, Fixed Message
Description
Sweeps through different values of a single parameter while keeping the message constant.
Perfect for hyperparameter tuning, temperature experiments, etc.
This function requires setting up the parallel environment using setup_llm_parallel().
Usage
call_llm_sweep(base_config, param_name, param_values, messages, ...)
Arguments
base_config	Base llm_config object to modify.
param_name	Character. Name of the parameter to vary (e.g., "temperature", "max_tokens").
param_values	Vector. Values to test for the parameter.
messages	A character vector or a list of message objects (same for all calls).
...	Additional arguments passed to the underlying API call.
Value
A tibble with one row per parameter value and columns: swept_param_name, the varied parameter itself, provider, model, all other model parameters, response_text, raw_response_json, success, and error_message.
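Because each call can fail independently, the success and error_message columns make it easy to separate usable responses from failures. A minimal post-processing sketch (the tibble below is a mock with illustrative values, standing in for real API results):

```r
library(tibble)
library(dplyr)

# Mock of the shape returned by call_llm_sweep (illustrative values only)
results <- tibble(
  temperature   = c(0, 0.7, 1.5),
  response_text = c("345", "345", NA),
  success       = c(TRUE, TRUE, FALSE),
  error_message = c(NA, NA, "rate limit exceeded")
)

# Keep successful calls; inspect failures separately
ok     <- filter(results, success)
failed <- filter(results, !success)
```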
Parallel Workflow
All parallel functions require the future backend to be configured. The recommended workflow is:

1. Call setup_llm_parallel() once at the start of your script.
2. Run one or more parallel experiments (e.g., call_llm_broadcast()).
3. Call reset_llm_parallel() at the end to restore sequential processing.
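Assuming setup_llm_parallel() is a convenience wrapper around configuring the future backend (as the workflow above suggests), a manual equivalent using the future package directly might look like this sketch (not the package's exact internals):

```r
library(future)

# Configure a multisession backend with 4 workers
plan(multisession, workers = 4)

# ... run call_llm_sweep() or call_llm_broadcast() here ...

# Restore sequential processing when done
plan(sequential)
```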
See Also
setup_llm_parallel, reset_llm_parallel
Examples
## Not run:
# Temperature sweep
config <- llm_config(provider = "openai", model = "gpt-4.1-nano",
api_key = Sys.getenv("OPENAI_API_KEY"))
messages <- "What is 15 * 23?"
temperatures <- c(0, 0.3, 0.7, 1.0, 1.5)
setup_llm_parallel(workers = 4, verbose = TRUE)
results <- call_llm_sweep(config, "temperature", temperatures, messages)
results |> dplyr::select(temperature, response_text)
reset_llm_parallel(verbose = TRUE)
## End(Not run)