llm_chat_session {LLMR} | R Documentation
Stateful chat session constructor
Description
Stateful chat session constructor
Coerce a chat session to a data frame
Summary statistics for a chat session
Display the first part of a chat session
Display the last part of a chat session
Print a chat session object
Usage
chat_session(config, system = NULL, ...)
## S3 method for class 'llm_chat_session'
as.data.frame(x, ...)
## S3 method for class 'llm_chat_session'
summary(object, ...)
## S3 method for class 'llm_chat_session'
head(x, n = 6L, width = getOption("width") - 15, ...)
## S3 method for class 'llm_chat_session'
tail(x, n = 6L, width = getOption("width") - 15, ...)
## S3 method for class 'llm_chat_session'
print(x, width = getOption("width") - 15, ...)
Arguments
config |
An |
system |
Optional system prompt inserted once at the beginning. |
... |
Arguments passed to other methods. For |
x , object |
An |
n |
Number of turns to display. |
width |
Character width for truncating long messages. |
Details
The chat_session object provides a simple way to hold a conversation with a generative model. It wraps call_llm_robust to benefit from retry logic, caching, and error logging.
Value
For chat_session(), an object of class llm_chat_session.
For the other methods, the return value is described by their respective titles above.
How it works
A private environment stores the running list of list(role, content) messages. At each $send(), the full history is sent to the model. Provider-agnostic token counts are extracted from the JSON response (fields are detected by name, so new providers continue to work).
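Conceptually, the stored history is just a growing list of list(role, content) pairs. The following is a simplified base-R sketch of that shape (the helper append_msg is hypothetical, for illustration only; it is not part of LLMR's internals):

```r
# Hypothetical sketch of the message store's shape -- not LLMR's actual code.
history <- list()

append_msg <- function(history, role, content) {
  # Each turn is a two-element list; c() appends it to the running history.
  c(history, list(list(role = role, content = content)))
}

history <- append_msg(history, "system", "Be concise.")
history <- append_msg(history, "user", "Who invented the moon?")

length(history)     # 2
history[[2]]$role   # "user"
```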
Public methods

$send(text, ..., role = "user")
    Append a message (default role "user"), query the model, print the assistant's reply, and invisibly return it.

$history()
    Raw list of messages.

$history_df()
    Two-column data frame (role, content).

$tokens_sent() / $tokens_received()
    Running token totals.

$reset()
    Clear the history (the optional system message is retained).
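As a sketch of the accessor methods in use (assumes a valid OPENAI_API_KEY is set; not run):

```r
## Not run:
cfg  <- llm_config("openai", "gpt-4o-mini", Sys.getenv("OPENAI_API_KEY"))
chat <- chat_session(cfg, system = "Be concise.")
chat$send("Name one prime number.")

chat$history_df()        # two-column data frame: role, content
chat$tokens_sent()       # running total of tokens sent
chat$tokens_received()   # running total of tokens received

chat$reset()             # clears the turns, keeps the system message
chat$history()           # only the system message remains

## End(Not run)
```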
See Also
llm_config
to create the configuration object.
call_llm_robust
for single, stateless API calls.
llm_fn
for applying a prompt to many items in a vector or data frame.
Examples
## Not run:
cfg <- llm_config("openai", "gpt-4o-mini", Sys.getenv("OPENAI_API_KEY"))
chat <- chat_session(cfg, system = "Be concise.")
chat$send("Who invented the moon?")
chat$send("Explain why in one short sentence.")
# Using S3 methods
chat # print() shows a summary and first 10 turns
summary(chat) # Get session statistics
tail(chat, 2) # See the last 2 turns of the conversation
df <- as.data.frame(chat) # Convert the full history to a data frame
## End(Not run)