call_openai_chat {PacketLLM}    R Documentation
Call OpenAI API
Description
Sends the conversation history to the OpenAI API and returns the model's response.
For models listed in simplified_models_list, it does NOT send the temperature parameter.
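The selective temperature handling described above can be sketched as follows. This is an illustrative sketch only, not the package's actual implementation; the helper name build_body and the contents of simplified_models_list shown here are assumptions for demonstration.

```r
# Sketch (assumption): models in simplified_models_list reject the
# temperature parameter, so it is omitted from the request body.
simplified_models_list <- c("o3-mini")  # illustrative contents only

build_body <- function(messages, model, temperature = 0.5) {
  body <- list(model = model, messages = messages)
  # Only attach temperature for models that support it
  if (!(model %in% simplified_models_list)) {
    body$temperature <- temperature
  }
  body
}
```

Under these assumptions, a request body built for "o3-mini" carries no temperature field, while one built for "gpt-4o-mini" does.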
Usage
call_openai_chat(messages, model, temperature = 0.5)
Arguments
messages
List of messages (each message is a list with 'role' and 'content' fields).

model
The OpenAI model to use.

temperature
Temperature parameter (used only for models that support it).
Value
A character string containing the model's response text, or NULL if the response
structure is unexpected. If an API or HTTP error occurs, the function stops
execution with an error message.
Examples
## Not run:
# This example requires the OPENAI_API_KEY environment variable to be set
# and requires internet access to reach the OpenAI API.
# Before running, ensure the key is set, e.g., using:
# Sys.setenv(OPENAI_API_KEY = "your_actual_openai_api_key")
# Remember to replace "your_actual_openai_api_key" with your real key.
# 1. Define the conversation history
example_messages <- list(
  list(role = "system", content = "You are a helpful assistant providing concise answers."),
  list(role = "user", content = "What is the main purpose of the 'httr' package in R?")
)
# 2. Choose a model (ensure it's in available_openai_models)
selected_model <- "gpt-4o-mini" # Supports temperature
# 3. Call the API using tryCatch for safety
api_response <- tryCatch({
  call_openai_chat(
    messages = example_messages,
    model = selected_model,
    temperature = 0.7
  )
}, error = function(e) {
  # Handle potential errors (missing API key, network issues, API errors)
  paste("API call failed:", e$message)
})
# 4. Print the response (or error message)
print(api_response)
# Example with a simplified model (omits temperature)
selected_model_simple <- "o3-mini" # Does not support temperature
# Check if this model is actually available in your package installation
if (selected_model_simple %in% PacketLLM::available_openai_models) {
  api_response_simple <- tryCatch({
    call_openai_chat(
      messages = example_messages,
      model = selected_model_simple
      # The temperature argument is ignored internally by the function
    )
  }, error = function(e) {
    paste("API call failed:", e$message)
  })
  print(api_response_simple)
} else {
  # Use message() for console output that can be suppressed if needed
  message(paste("Skipping simplified model example:",
                selected_model_simple, "not in available_openai_models."))
}
## End(Not run)
[Package PacketLLM version 0.1.0]