opl_dt_c {OPL}    R Documentation
Optimal Policy Learning with Decision Tree
Description
Implements ex-ante treatment assignment using, as the policy class, a fixed-depth (2-layer) decision tree based on specific splitting variables and threshold values.
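As an aid to reading the arguments below, the policy class can be pictured as a depth-2 threshold tree: the first splitting variable is compared with c1, and each resulting branch is split again, on the second variable at c2 or on the third at c3. The sketch that follows is only an illustration of that shape, not the package's internal code; the variable names z1, z2, z3, the rescaling of the splitting variables to [0, 1], and the mapping of tree leaves to treatment are assumptions.

## Illustrative sketch only: a depth-2 threshold tree over three splitting
## variables z1, z2, z3, assumed rescaled to [0, 1] so that the thresholds
## c1, c2, c3 lie between 0 and 1. The leaf-to-treatment mapping shown here
## is an assumption made for illustration.
assign_depth2 <- function(z1, z2, z3, c1, c2, c3) {
  ifelse(z1 <= c1,
         as.numeric(z2 <= c2),   # left branch: second split on z2 at c2
         as.numeric(z3 <= c3))   # right branch: second split on z3 at c3
}

## Toy data showing the rule in action
set.seed(1)
toy <- data.frame(z1 = runif(8), z2 = runif(8), z3 = runif(8))
toy$assigned <- assign_depth2(toy$z1, toy$z2, toy$z3, c1 = 0.5, c2 = 0.3, c3 = 0.7)
toy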
Usage
opl_dt_c(make_cate_result, z, w, c1 = NA, c2 = NA, c3 = NA, verbose = TRUE)
Arguments
make_cate_result
A data frame resulting from the make_cate function.
z
A character vector containing the names of the variables used for treatment assignment.
w
A string representing the treatment indicator variable name.
c1
Threshold value c1 for the first splitting variable. This number must be between 0 and 1.
c2
Threshold value c2 for the second splitting variable. This number must be between 0 and 1.
c3
Threshold value c3 for the third splitting variable. This number must be between 0 and 1.
verbose
Set to TRUE to print the output on the console.
Value
A list containing:
- W_opt_constr: The maximum average constrained welfare.
- W_opt_unconstr: The average unconstrained welfare.
- units_to_be_treated: A data frame of the units to be treated based on the optimal policy.
In addition, a plot showing the optimal policy assignment is produced.
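A hedged usage sketch is given below; it is an illustration, not an excerpt from the package. It assumes that my_cate is a data frame already produced by the make_cate step, and that "z1", "z2", "z3" and "w" are column names in that data frame; the threshold values are arbitrary. Accessing the returned elements by name assumes the list is named as documented above.

## Not run:
library(OPL)

res <- opl_dt_c(
  make_cate_result = my_cate,        # data frame assumed to come from make_cate()
  z  = c("z1", "z2", "z3"),          # hypothetical names of the splitting variables
  w  = "w",                          # hypothetical name of the treatment indicator
  c1 = 0.5, c2 = 0.3, c3 = 0.7,      # illustrative thresholds, each between 0 and 1
  verbose = TRUE
)

## Elements of the returned list, named as documented above
res$W_opt_constr                # maximum average constrained welfare
res$W_opt_unconstr              # average unconstrained welfare
head(res$units_to_be_treated)   # units to be treated under the optimal policy
## End(Not run)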
References
Athey, S., and Wager, S. 2021. Policy Learning with Observational Data. Econometrica, 89, 1, 133–161.
Cerulli, G. 2021. Improving econometric prediction by machine learning. Applied Economics Letters, 28, 16, 1419–1425.
Cerulli, G. 2022. Optimal treatment assignment of a threshold-based policy: empirical protocol and related issues. Applied Economics Letters, DOI: 10.1080/13504851.2022.2032577.
James, G., Witten, D., Hastie, T., and Tibshirani, R. 2013. An Introduction to Statistical Learning: with Applications in R. New York: Springer.
Kitagawa, T., and Tetenov, A. 2018. Who Should Be Treated? Empirical Welfare Maximization Methods for Treatment Choice. Econometrica, 86, 2, 591–616.