fine_tuning {MLwrap}    R Documentation
Fine-Tune ML Model
Description
The fine_tuning() function performs automated hyperparameter optimization for ML workflows encapsulated within an AnalysisObject. It supports two tuning strategies, Bayesian Optimization and Grid Search Cross-Validation, and lets the user specify the evaluation metric used for model selection. The function first validates the arguments and updates the workflow and metric settings within the AnalysisObject. If hyperparameter tuning is enabled, it executes the selected tuning procedure, identifies the best hyperparameter configuration according to the specified metrics, and updates the workflow accordingly. For neural network models, it also manages the creation and integration of new model instances and provides additional visualization of training dynamics. Finally, it fits the optimized model to the training data and updates the AnalysisObject, ensuring a reproducible and efficient model selection process (Bartz et al., 2023).
Usage
fine_tuning(analysis_object, tuner, metrics = NULL, verbose = FALSE)
Arguments
analysis_object
An analysis_object created by the build_model() function.
tuner
Name of the hyperparameter tuner. A string naming the tuner: "Bayesian Optimization" or "Grid Search CV".
metrics
Metric(s) used for model selection. A string or character vector of metric names (see the Metrics section). Defaults to "rmse" for regression tasks and "roc_auc" for classification tasks.
verbose
Whether to show the tuning process. Boolean, TRUE or FALSE (the default).
Value
An updated analysis_object containing the fitted model with optimized hyperparameters, the tuning results, and all relevant workflow modifications. This object includes the final trained model, the best hyperparameter configuration, tuning diagnostics, and, if applicable, plots of the tuning process. It can be used for further model evaluation, prediction, or downstream analysis within the package workflow.
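The tuning output stored in the returned object can be inspected with the package's table helpers, as in this minimal sketch (assuming a tuned object such as the one produced in the Examples below):

# Inspect a previously tuned analysis_object
best_hyp <- table_best_hyperparameters(tuned_object)
results <- table_evaluation_results(tuned_object)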
Tuners
Bayesian Optimization
Initial data points: 20
Maximum number of iterations: 25
Convergence after 5 iterations without improvement
Train / Validation / Test split: 0.6 / 0.2 / 0.2
Grid Search CV
Number of Folds: 5
Maximum levels per hyperparameter: 10
Train / Test split: 0.75 / 0.25
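As a minimal sketch of selecting a tuner, the call below requests Bayesian Optimization with progress output; it assumes an analysis_object prepared as in the Examples section:

# Tune with Bayesian Optimization instead of Grid Search CV
wrap_object <- fine_tuning(
  analysis_object = wrap_object,
  tuner = "Bayesian Optimization",
  metrics = "rmse",
  verbose = TRUE
)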
Metrics
Regression Metrics
rmse
mae
mpe
mape
ccc
smape
rpiq
rsq
Classification Metrics
accuracy
bal_accuracy
recall
sensitivity
specificity
kap
f_meas
mcc
j_index
detection_prevalence
roc_auc
pr_auc
gain_capture
brier_class
roc_aunp
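Metric names are passed as strings via the metrics argument. A minimal sketch for a classification task (class_object is a hypothetical analysis_object built for classification):

# Select the model by balanced accuracy rather than the default roc_auc
class_object <- fine_tuning(
  analysis_object = class_object,
  tuner = "Grid Search CV",
  metrics = c("bal_accuracy")
)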
References
Bartz, E., Bartz-Beielstein, T., Zaefferer, M., & Mersmann, O. (2023). Hyperparameter Tuning for Machine and Deep Learning with R: A Practical Guide. Springer, Singapore. https://doi.org/10.1007/978-981-19-5170-1
Examples
# Example 1: Fine tuning function applied to a regression task
library(MLwrap)
data(sim_data) # sim_data is a simulated dataset with psychological variables
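# Preprocess the data: specify the outcome, predictors, and task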
wrap_object <- preprocessing(
  df = sim_data,
  formula = psych_well ~ depression + emot_intel + resilience + life_sat,
  task = "regression"
)
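# Select the model type and its hyperparameter settings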
wrap_object <- build_model(
  analysis_object = wrap_object,
  model_name = "Random Forest",
  hyperparameters = list(
    mtry = 3,
    trees = 50
  )
)
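# Tune with 5-fold grid search cross-validation, selecting by RMSE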
wrap_object <- fine_tuning(
  analysis_object = wrap_object,
  tuner = "Grid Search CV",
  metrics = c("rmse")
)
# Extracting Evaluation Results
table_best_hyp <- table_best_hyperparameters(wrap_object)
table_results <- table_evaluation_results(wrap_object)
# Plotting Results
wrap_object |>
  plot_tuning_results() |>
  plot_residuals_distribution() |>
  plot_scatter_residuals()