codestral {codestral}	R Documentation
Fill in the middle with Codestral
Description
This function completes a given prompt using the Codestral API. It supports separate models for fill-in-the-middle completion, chat with Codestral, and chat with Codestral Mamba. Most parameters default to the values of environment variables.
Usage
codestral(
  prompt,
  mistral_apikey = Sys.getenv(x = "R_MISTRAL_APIKEY"),
  codestral_apikey = Sys.getenv(x = "R_CODESTRAL_APIKEY"),
  fim_model = Sys.getenv(x = "R_CODESTRAL_FIM_MODEL"),
  chat_model = Sys.getenv(x = "R_CODESTRAL_CHAT_MODEL"),
  mamba_model = Sys.getenv(x = "R_MAMBA_CHAT_MODEL"),
  temperature = as.integer(Sys.getenv(x = "R_CODESTRAL_TEMPERATURE")),
  max_tokens_FIM = Sys.getenv(x = "R_CODESTRAL_MAX_TOKENS_FIM"),
  max_tokens_chat = Sys.getenv(x = "R_CODESTRAL_MAX_TOKENS_CHAT"),
  role_content = Sys.getenv(x = "R_CODESTRAL_ROLE_CONTENT"),
  suffix = ""
)
Arguments
prompt
    The prompt to complete.

mistral_apikey, codestral_apikey
    The API keys used to access Codestral Mamba and Codestral,
    respectively. Default to the values of the R_MISTRAL_APIKEY and
    R_CODESTRAL_APIKEY environment variables.

fim_model
    The model to use for fill-in-the-middle. Defaults to the value of
    the R_CODESTRAL_FIM_MODEL environment variable.

chat_model
    The model to use for chat with Codestral. Defaults to the value of
    the R_CODESTRAL_CHAT_MODEL environment variable.

mamba_model
    The model to use for chat with Codestral Mamba. Defaults to the
    value of the R_MAMBA_CHAT_MODEL environment variable.

temperature
    The sampling temperature to use. Defaults to the value of the
    R_CODESTRAL_TEMPERATURE environment variable.

max_tokens_FIM, max_tokens_chat
    Integers giving the maximum number of tokens to generate for
    fill-in-the-middle and chat, respectively. Default to the values
    of the R_CODESTRAL_MAX_TOKENS_FIM and R_CODESTRAL_MAX_TOKENS_CHAT
    environment variables.

role_content
    The content of the role message to use for chat. Defaults to the
    value of the R_CODESTRAL_ROLE_CONTENT environment variable.

suffix
    The suffix to use for fill-in-the-middle, i.e. the text that
    should follow the generated completion. Defaults to an empty
    string.
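Since the defaults above are read from environment variables, they can be set once per machine in the user's ~/.Renviron file. A minimal sketch (the key and model values shown are illustrative placeholders, not required settings):

```
# ~/.Renviron -- example values only; substitute your own keys and models
R_MISTRAL_APIKEY=<your Mistral API key>
R_CODESTRAL_APIKEY=<your Codestral API key>
R_CODESTRAL_FIM_MODEL=codestral-latest
R_CODESTRAL_CHAT_MODEL=codestral-latest
R_MAMBA_CHAT_MODEL=open-codestral-mamba
R_CODESTRAL_TEMPERATURE=0
R_CODESTRAL_MAX_TOKENS_FIM=64
R_CODESTRAL_MAX_TOKENS_CHAT=512
R_CODESTRAL_ROLE_CONTENT=You are a helpful R programming assistant.
```

Values set this way are picked up by Sys.getenv() when a new R session starts.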
Value
A character string containing the completed text.
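Examples

A minimal usage sketch for fill-in-the-middle, assuming the API-key and model environment variables above are already set (the prompt and suffix shown are illustrative; the call performs a network request to the Codestral API):

```
## Not run:
# Ask the model to fill in a function body between prompt and suffix
completion <- codestral(
  prompt = "fib <- function(n) {",
  suffix = "}"
)
cat(completion)

## End(Not run)
```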