llm_fn {LLMR}    R Documentation
Applies an LLM prompt to every element of a vector
Description
Applies an LLM prompt to every element of a character vector, or to every row of a data frame.
Usage
llm_fn(x, prompt, .config, .system_prompt = NULL, ...)
Arguments

x
A character vector or a data.frame/tibble.

prompt
A glue template string. If x is a data frame, reference its columns inside braces (e.g. {review}); for a character vector, use {x}.

.config
An llm_config object.

.system_prompt
Optional system message (character scalar).

...
Passed unchanged to call_llm_broadcast.
Details

Runs each prompt through call_llm_broadcast(), which forwards the requests to call_llm_par(). Internally, each prompt is passed as a plain character vector (or a named character vector when .system_prompt is supplied). The core engine executes the requests in parallel according to the current future plan.

For instant multi-core use, call setup_llm_parallel(workers = 4) (or whatever number you prefer) once per session; revert with reset_llm_parallel().
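A minimal session sketch of the parallel setup described above (the worker count is illustrative; only setup_llm_parallel() and reset_llm_parallel() are taken from this package):

```r
library(LLMR)

# Switch the future plan to 4 parallel workers for this session.
setup_llm_parallel(workers = 4)

# llm_fn() / llm_mutate() calls made here now fan out via call_llm_par().

# Restore the sequential plan when finished.
reset_llm_parallel()
```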
Value

A character vector the same length as x. Failed calls yield NA.
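Because failed calls return NA rather than raising an error, a simple retry pass can be layered on top. A sketch, reusing the words vector and cfg config from the Examples:

```r
res <- llm_fn(
  words,
  prompt = "Classify sentiment of '{x}' as Positive, Negative, or Neutral.",
  .config = cfg
)

# Retry only the elements whose calls failed (returned NA).
failed <- is.na(res)
if (any(failed)) {
  res[failed] <- llm_fn(
    words[failed],
    prompt = "Classify sentiment of '{x}' as Positive, Negative, or Neutral.",
    .config = cfg
  )
}
```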
See Also

setup_llm_parallel, reset_llm_parallel, call_llm_par, and llm_mutate, which is a tidy-friendly wrapper around llm_fn().
Examples
## --- Vector input ------------------------------------------------------
## Not run:
cfg <- llm_config(
  provider = "openai",
  model = "gpt-4.1-nano",
  api_key = Sys.getenv("OPENAI_API_KEY"),
  temperature = 0
)

words <- c("excellent", "awful", "average")

llm_fn(
  words,
  prompt = "Classify sentiment of '{x}' as Positive, Negative, or Neutral.",
  .config = cfg,
  .system_prompt = "Respond with ONE word only."
)

## --- Data-frame input inside a tidyverse pipeline ----------------------
library(dplyr)

reviews <- tibble::tibble(
  id = 1:3,
  review = c("Great toaster!", "Burns bread.", "It's okay.")
)

reviews |>
  llm_mutate(
    sentiment,
    prompt = "Classify the sentiment of this review: {review}",
    .config = cfg,
    .system_prompt = "Respond with Positive, Negative, or Neutral."
  )
## End(Not run)