fetch_azure_openai_batch {tidyllm} | R Documentation |
Fetch Results for an Azure OpenAI Batch
Description
This function retrieves the results of a completed Azure OpenAI batch and updates the provided list of LLMMessage objects with the responses. It aligns each response with the original request using the custom_ids generated in send_azure_openai_batch().
Usage
fetch_azure_openai_batch(
.llms,
.endpoint_url = Sys.getenv("AZURE_ENDPOINT_URL"),
.batch_id = NULL,
.dry_run = FALSE,
.max_tries = 3,
.timeout = 60
)
Arguments
.llms |
A list of LLMMessage objects that were sent via send_azure_openai_batch(). |
.endpoint_url |
Base URL for the API (default: Sys.getenv("AZURE_ENDPOINT_URL")). |
.batch_id |
Character; the unique identifier for the batch. By default this is NULL
and the function will attempt to use the batch ID stored in the attributes of .llms. |
.dry_run |
Logical; if TRUE, returns the constructed request object without sending it (default: FALSE). |
.max_tries |
Integer; maximum number of retries if the request fails (default: 3). |
.timeout |
Integer; request timeout in seconds (default: 60). |
Value
A list of updated LLMMessage objects, each with the assistant's response added if successful.
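Examples
A minimal sketch of a typical batch workflow, assuming valid Azure OpenAI credentials and endpoint configuration are in place and that the batch has finished processing before the fetch; any additional provider arguments (such as a deployment name) that send_azure_openai_batch() may require are omitted here. llm_message() and get_reply() are the usual tidyllm helpers for building prompts and extracting the assistant reply.
## Not run:
library(tidyllm)

# Build a list of prompts and submit them as a batch
msgs <- list(
  llm_message("What is the capital of France?"),
  llm_message("Summarise the plot of Hamlet in one sentence.")
)
batch <- send_azure_openai_batch(msgs)

# Later, once the batch has completed on the Azure side,
# retrieve the responses; the batch ID is read from the
# attributes of 'batch' unless .batch_id is supplied.
results <- fetch_azure_openai_batch(batch)

# Extract the assistant replies from the updated LLMMessage objects
lapply(results, get_reply)
## End(Not run)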