compare_foot {footBayes}    R Documentation

Compare Football Models using Various Metrics

Description

Compares multiple football models or directly provided probability matrices based on specified metrics (accuracy, Brier score, ranked probability score, Pseudo R^2, average coverage probability), using a test dataset. It can additionally compute confusion matrices. The function returns an object of class compareFoot.

Usage

compare_foot(
  source,
  test_data,
  metric = c("accuracy", "brier", "ACP", "pseudoR2", "RPS"),
  conf_matrix = FALSE
)

Arguments

source

A named list containing either:

  • Fitted model objects (of class stanFoot or stanfit), each representing a football model.

  • Matrices where each matrix contains the estimated probabilities for "Home Win", "Draw", and "Away Win" in its columns (see the sketch after this list).
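
For illustration, the probability-matrix form of source could be constructed as follows (a minimal sketch; matrix values and list names are purely illustrative, and the rows are assumed to correspond, in order, to the matches in test_data):

# Hypothetical predicted probabilities for two matches
# (columns: Home Win, Draw, Away Win; each row sums to 1)
probs_A <- matrix(c(0.55, 0.25, 0.20,
                    0.30, 0.35, 0.35),
                  nrow = 2, byrow = TRUE)
probs_B <- matrix(c(0.60, 0.22, 0.18,
                    0.28, 0.30, 0.42),
                  nrow = 2, byrow = TRUE)

source_list <- list(model_A = probs_A, model_B = probs_B)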

test_data

A data frame containing the test dataset, with columns (see the sketch after this list):

  • home_team: Home team's name (character string).

  • away_team: Away team's name (character string).

  • home_goals: Goals scored by the home team (integer >= 0).

  • away_goals: Goals scored by the away team (integer >= 0).
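
A minimal test_data frame with this structure could be built as follows (team names and scores are illustrative):

test_matches <- data.frame(
  home_team  = c("Team A", "Team B"),
  away_team  = c("Team C", "Team D"),
  home_goals = c(2L, 0L),
  away_goals = c(1L, 0L)
)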

metric

A character vector specifying the metrics to use for comparison. Options are:

  • "accuracy": Computes the accuracy of each model.

  • "brier": Computes the Brier score of each model.

  • "RPS": Computes the ranked probability score (RPS) for each model.

  • "ACP": Computes the average coverage probability (ACP) for each model.

  • "pseudoR2": Computes the Pseudo R^2, defined as the geometric mean of the probabilities assigned to the actual results.

Default is c("accuracy", "brier", "ACP", "pseudoR2", "RPS"), so that all available metrics are computed. The sketch below illustrates these definitions for a single match.
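
To make these definitions concrete, the following sketch computes the metrics by hand for a single match using standard three-outcome formulas; it is illustrative only and not necessarily the package's exact implementation:

p   <- c(home = 0.55, draw = 0.25, away = 0.20)  # predicted probabilities
obs <- c(1, 0, 0)                                # observed outcome: home win

accuracy <- as.numeric(which.max(p) == which(obs == 1))        # 1 if the most probable outcome occurred
brier    <- sum((p - obs)^2)                                    # Brier score for this match
rps      <- sum((cumsum(p) - cumsum(obs))^2) / (length(p) - 1)  # ranked probability score
p_obs    <- p[obs == 1]                                         # probability assigned to the observed result
# Across a whole test set, Pseudo R^2 is the geometric mean of p_obs,
# i.e. exp(mean(log(p_obs))); ACP is assumed here to be mean(p_obs).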

conf_matrix

A logical value indicating whether to generate a confusion matrix comparing predicted outcomes against actual outcomes for each model or probability matrix. Default is FALSE.

Details

The function extracts predictions from each fitted model, or directly uses the provided probability matrices, and computes the chosen metrics on the test dataset. Confusion matrices can also be computed. The sketch below outlines the probability-matrix workflow.
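
As a hedged end-to-end sketch of the probability-matrix workflow (all object names, teams, and probabilities below are illustrative):

probs <- matrix(c(0.55, 0.25, 0.20,    # columns: Home Win, Draw, Away Win
                  0.30, 0.35, 0.35),
                nrow = 2, byrow = TRUE)

test_matches <- data.frame(
  home_team  = c("Team A", "Team B"),
  away_team  = c("Team C", "Team D"),
  home_goals = c(2L, 0L),
  away_goals = c(1L, 0L)
)

compare_foot(
  source      = list(toy_model = probs),
  test_data   = test_matches,
  metric      = c("accuracy", "brier"),
  conf_matrix = FALSE
)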

Value

An object of class compare_foot_output, which is a list containing the values of the computed metrics for each model or probability matrix and, when conf_matrix = TRUE, the corresponding confusion matrices.

Author(s)

Roberto Macrì Demartino roberto.macridemartino@phd.unipd.it

Examples

## Not run: 
library(dplyr)

data("italy")
italy_2000 <- italy %>%
  dplyr::select(Season, home, visitor, hgoal, vgoal) %>%
  dplyr::filter(Season == "2000")

colnames(italy_2000) <- c("periods", "home_team", "away_team", "home_goals", "away_goals")

# Example with fitted models
fit_1 <- stan_foot(data = italy_2000,
                   model = "double_pois", predict = 18)  # Double Poisson model
fit_2 <- stan_foot(data = italy_2000,
                   model = "biv_pois", predict = 18)     # Bivariate Poisson model

italy_2000_test <- italy_2000[289:306, ]


compare_results_models <- compare_foot(
  source = list(double_poisson = fit_1,
                bivariate_poisson = fit_2),
  test_data = italy_2000_test,
  metric = c("accuracy", "brier", "ACP", "pseudoR2", "RPS"),
  conf_matrix = TRUE
)

print(compare_results_models)

## End(Not run)
