run_m {binaryRL} | R Documentation |
Step 1: Building reinforcement learning model
Description
This function is designed to construct and customize reinforcement learning models.
Items for model construction:

- Data Input and Specification: You must provide the raw dataset for analysis. Crucially, you need to inform the run_m function of the corresponding column names within your dataset (e.g., Mason_2024_Exp1, Mason_2024_Exp2). Because this is a two-alternative choice task, it is critical that your dataset includes the rewards of both the option chosen by the participant and the unchosen option.

- Customizable RL Models: This function allows you to define and adjust the number of free parameters to create various reinforcement learning models.

  - Value Function:

    - Learning Rate: By adjusting the number of eta parameters, you can construct basic reinforcement learning models such as Temporal Difference (TD) and Risk-Sensitive Temporal Difference (RSTD). You can also directly adjust func_eta to define your own custom learning rate function.

    - Utility Function: You can directly adjust the form of func_gamma to incorporate the principles of Kahneman's Prospect Theory. Currently, the built-in func_gamma only takes the form of a power function, consistent with Stevens' Power Law.

  - Exploration–Exploitation Trade-off:

    - Initial Values: This involves setting the initial expected value of each option before it has been chosen. A higher initial value encourages exploration.

    - Epsilon: Adjusting the threshold, epsilon, and lambda parameters can lead to exploration strategies such as epsilon-first, epsilon-greedy, or epsilon-decreasing.

    - Upper-Confidence-Bound: Adjusting pi controls the degree of exploration by scaling the uncertainty bonus given to less-explored options.

    - Soft-Max: Adjusting the inverse temperature parameter tau controls the agent's sensitivity to value differences. A higher tau places greater emphasis on value differences, leading to more exploitation; a smaller tau indicates a greater tendency toward exploration.

- Objective Function Format for Optimization: Once your model is defined in run_m, it must be structured as an objective function that accepts params as input and returns a loss value (typically logL). This format ensures compatibility with the optimization algorithms used to estimate the optimal parameters. For an example of a standard objective function format, see TD, RSTD, Utility; a minimal sketch of this format is shown below.
For more information, please refer to the homepage of this package: https://github.com/yuki-961004/binaryRL
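Below is a minimal sketch of the objective-function format described above, written only against the arguments documented on this page. The wrapper name my_TD and the assumption that the returned object exposes its log-likelihood as res$ll are illustrative; consult the built-in TD, RSTD, and Utility functions for the canonical templates.

my_TD <- function(params) {
  # Fit one subject with a single learning rate (TD) and an inverse
  # temperature; both values are supplied by the optimizer via `params`.
  res <- binaryRL::run_m(
    mode = "fit",
    data = binaryRL::Mason_2024_Exp1,
    id = 18,
    eta = params[1],
    tau = params[2],
    n_params = 2,
    n_trials = 360
  )
  # Return the loss value expected by the optimizer. The element name
  # `ll` is an assumption made for this sketch.
  logL <- res$ll
  return(logL)
}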
Usage
run_m(
mode = c("simulate", "fit", "replay"),
data,
id,
n_params,
n_trials,
softmax = TRUE,
seed = 123,
initial_value = NA,
threshold = 1,
alpha = NA,
beta = NA,
gamma = 1,
eta,
epsilon = NA,
lambda = NA,
pi = 0.001,
tau = 1,
util_func = func_gamma,
rate_func = func_eta,
expl_func = func_epsilon,
bias_func = func_pi,
prob_func = func_tau,
sub = "Subject",
time_line = c("Block", "Trial"),
L_choice = "L_choice",
R_choice = "R_choice",
L_reward = "L_reward",
R_reward = "R_reward",
sub_choose = "Sub_Choose",
rob_choose = "Rob_Choose",
raw_cols = NULL,
var1 = NA,
var2 = NA,
digits_1 = 2,
digits_2 = 5
)
Arguments
mode |
[character] This parameter controls the function's operational mode. It has three possible values, "simulate", "fit", and "replay", each typically associated with a specific function in this package. In most cases, you won't need to modify this parameter directly, as suitable default values are set for different contexts. |
data |
[data.frame] This data should include the following mandatory columns: the subject ID column ("Subject" by default, see sub), the columns defining the sequence of the experiment ("Block" and "Trial" by default, see time_line), the two option columns ("L_choice", "R_choice"), their reward columns ("L_reward", "R_reward"), and the column of choices made by the subject ("Sub_Choose").
|
id |
[integer] The ID of the subject to be analyzed. The value should correspond to an entry in the "sub" column, which must contain the subject IDs.
|
n_params |
[integer] The number of free parameters in your model. |
n_trials |
[integer] The total number of trials in your experiment. |
softmax |
[logical] Whether to use the softmax function.
|
seed |
[integer] Random seed. This ensures that the results are reproducible and remain the same each time the function is run.
|
initial_value |
[numeric]
Subject's initial expected value for each stimulus's reward, i.e. the
expected value assigned to an option before it has been chosen. A higher
initial value encourages exploration. Defaults to NA.
|
threshold |
[integer]
Controls the initial exploration phase in the epsilon-first strategy.
This is the number of early trials where the subject makes purely random
choices, as they haven't yet learned the options' values. For example,
with the default threshold = 1 only the first trial is random, whereas
threshold = 20 would make the first 20 trials random.
|
alpha |
[vector] Extra parameters that may be used in functions. |
beta |
[vector] Extra parameters that may be used in functions. |
gamma |
[vector] This parameter represents the exponent in the utility function, specifically the exponent of the built-in power-form func_gamma (consistent with Stevens' Power Law), i.e. utility = reward ^ gamma.
|
eta |
[numeric]
Parameters used in the Learning Rate Function (rate_func). The structure of
eta determines the model: a single value yields a basic Temporal Difference
(TD) model, while two values (e.g., eta = c(0.321, 0.765)) yield a
Risk-Sensitive Temporal Difference (RSTD) model.
|
epsilon |
[numeric] A parameter used in the epsilon-greedy exploration strategy. It defines the probability of making a completely random choice, as opposed to choosing based on the relative values of the left and right options. For example, if 'epsilon = 0.1', the subject has a 10 choice and a 90 relevant when 'threshold' is at its default value (1) and 'lambda' is not set.
|
lambda |
[vector] A numeric value that controls the decay rate of exploration probability in the epsilon-decreasing strategy. A higher 'lambda' value means the probability of random choice will decrease more rapidly as the number of trials increases.
|
pi |
[vector]
Parameter used in the Upper-Confidence-Bound (UCB) action selection
formula. 'bias_func' controls the degree of exploration by scaling the
uncertainty bonus given to less-explored options. A larger value of
'pi' gives less-explored options a larger bonus, making them more likely
to be chosen.
|
tau |
[vector] Parameters used in the Soft-Max Function. 'prob_func' represents the sensitivity of the subject to the value difference when making decisions. It determines the probability of selecting the left option versus the right option based on their values. A larger value of tau indicates greater sensitivity to the value difference between the options: even a small difference in value will make the subject more likely to choose the higher-value option.
|
util_func |
[function] Utility Function; see func_gamma. |
rate_func |
[function] Learning Rate Function; see func_eta. |
expl_func |
[function] Exploration Strategy Function; see func_epsilon. |
bias_func |
[function] Upper-Confidence-Bound Function; see func_pi. |
prob_func |
[function] Soft-Max Function; see func_tau. |
sub |
[character] Column name of the subject ID.
|
time_line |
[vector] A vector specifying the names of the columns that define the sequence of the experiment. This argument defines how the experiment is structured, for example whether it is organized into blocks with breaks in between and multiple trials within each block.
|
L_choice |
[character] Column name of left choice.
|
R_choice |
[character] Column name of right choice.
|
L_reward |
[character] Column name of the reward of the left choice.
|
R_reward |
[character] Column name of the reward of the right choice.
|
sub_choose |
[character] Column name of choices made by the subject.
|
rob_choose |
[character] Column name of choices made by the model, which you could ignore.
|
raw_cols |
[vector] Defaults to 'NULL'. If left as 'NULL', it will directly capture all column names from the raw data. |
var1 |
[character] Column name of extra variable 1. If your model uses more than just reward and expected value, and you need other information, such as whether the choice frame is Gain or Loss, then you can input the 'Frame' column as var1 into the model.
|
var2 |
[character] Column name of extra variable 2. If one additional variable, var1, does not meet your needs, you can add another additional variable, var2, into your model.
|
digits_1 |
[integer] The number of decimal places to retain for columns related to the value function.
|
digits_2 |
[integer] The number of decimal places to retain for columns related to the selection function.
|
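The sketch below illustrates how the exploration-related arguments map onto the strategies described above. It only rearranges arguments documented on this page; the specific parameter values (and the choice of n_params) are arbitrary illustrations, not estimates.

data <- binaryRL::Mason_2024_Exp1

# epsilon-first: purely random choices on the first 20 trials
m_first <- binaryRL::run_m(
  mode = "fit", data = data, id = 18,
  eta = 0.3, threshold = 20,
  n_params = 2, n_trials = 360
)

# epsilon-greedy: a constant 10% chance of a random choice
m_greedy <- binaryRL::run_m(
  mode = "fit", data = data, id = 18,
  eta = 0.3, epsilon = 0.1,
  n_params = 2, n_trials = 360
)

# epsilon-decreasing: the random-choice probability decays with lambda
m_decreasing <- binaryRL::run_m(
  mode = "fit", data = data, id = 18,
  eta = 0.3, lambda = 0.05,
  n_params = 2, n_trials = 360
)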
Value
A list of class binaryRL
containing the results of the model fitting.
Examples
data <- binaryRL::Mason_2024_Exp1
binaryRL.res <- binaryRL::run_m(
mode = "fit",
data = data,
id = 18,
eta = c(0.321, 0.765),
n_params = 2,
n_trials = 360
)
summary(binaryRL.res)
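The call above uses two learning rates (an RSTD-style model). A single eta gives a basic TD model; the value below is illustrative, not an estimate.

# TD variant of the example above: one learning rate instead of two
binaryRL.td <- binaryRL::run_m(
  mode = "fit",
  data = data,
  id = 18,
  eta = 0.5,
  n_params = 1,
  n_trials = 360
)
summary(binaryRL.td)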