AIC.gkwreg {gkwreg}    R Documentation
Akaike's Information Criterion for GKw Regression Models
Description
Calculates the Akaike Information Criterion (AIC) for one or more fitted
Generalized Kumaraswamy (GKw) regression model objects (class "gkwreg").
AIC is commonly used for model selection, penalizing model complexity.
Usage
## S3 method for class 'gkwreg'
AIC(object, ..., k = 2)
Arguments
object
An object of class "gkwreg", typically the result of a call to gkwreg.
...
Optionally, one or more additional fitted model objects of class "gkwreg".
k
Numeric, the penalty per parameter. The default k = 2 corresponds to the classical AIC.
Details
The AIC is calculated based on the maximized log-likelihood (L) and the number of estimated parameters (p) in the model:
AIC = -2 \log(L) + k \times p
This function retrieves the log-likelihood and the number of parameters (df) using the logLik.gkwreg method for the fitted gkwreg object(s).
Models with lower AIC values are generally preferred, as they indicate a better
balance between goodness of fit and model parsimony.
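As a quick check, the same value can be reproduced by hand from the logLik components (a minimal sketch; 'fit' stands for any fitted "gkwreg" object):
ll <- logLik(fit)                         # log-likelihood with a "df" attribute
manual_aic <- -2 * as.numeric(ll) + 2 * attr(ll, "df")
all.equal(manual_aic, AIC(fit))           # should match the method's result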
When comparing multiple models passed via ..., the function relies on the default AIC method to build a comparison table, which in turn calls logLik for each supplied object.
For small sample sizes relative to the number of parameters, the second-order AIC (AICc) might be more appropriate:
AICc = AIC + \frac{2p(p+1)}{n-p-1}
where n is the number of observations. AICc is not computed directly by this function, but it can be calculated manually from the returned AIC, p (from attr(logLik(object), "df")), and n (from attr(logLik(object), "nobs")).
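A minimal sketch of that manual AICc calculation (again assuming a fitted "gkwreg" object 'fit'):
ll   <- logLik(fit)
p    <- attr(ll, "df")                    # number of estimated parameters
n    <- attr(ll, "nobs")                  # number of observations
aicc <- AIC(fit) + (2 * p * (p + 1)) / (n - p - 1)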
Value
If just one object is provided, returns a single numeric AIC value.
If multiple objects are provided via ..., returns a data.frame with one row per model and columns for the degrees of freedom (df) and the AIC values, sorted by AIC.
Author(s)
Lopes, J. E.
References
Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6), 716-723.
Burnham, K. P., & Anderson, D. R. (2002). Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach (2nd ed.). Springer-Verlag.
See Also
gkwreg, logLik.gkwreg, BIC.gkwreg, AIC
Examples
# Simulate data with response 'y' (0 < y < 1) and predictors 'x1', 'x2', 'x3';
# assumes rkw() from the gkwreg package is available.
set.seed(123)
n <- 100
x1 <- runif(n)
x2 <- rnorm(n)
x3 <- factor(rbinom(n, 1, 0.4))
alpha <- exp(0.5 + 0.2 * x1)
beta <- exp(1.0 - 0.1 * x2 + 0.3 * (x3 == "1"))
y <- rkw(n, alpha = alpha, beta = beta) # Kumaraswamy response; assumes rkw() is available
y <- pmax(pmin(y, 1 - 1e-7), 1e-7)
df <- data.frame(y = y, x1 = x1, x2 = x2, x3 = x3)
# Fit two competing models
kw_reg1 <- gkwreg(y ~ x1 | x2, data = df, family = "kw")
kw_reg2 <- gkwreg(y ~ x1 | x2 + x3, data = df, family = "kw") # More complex beta model
kw_reg3 <- gkwreg(y ~ 1 | x2 + x3, data = df, family = "kw") # Simpler alpha model
# Calculate AIC for a single model
aic1 <- AIC(kw_reg1)
print(aic1)
# Compare models using AIC (lower is better)
model_comparison_aic <- c(AIC(kw_reg1), AIC(kw_reg2), AIC(kw_reg3))
print(model_comparison_aic)
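# Passing several fitted models at once returns the df/AIC comparison
# data.frame described in the Value section
AIC(kw_reg1, kw_reg2, kw_reg3)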
# Calculate AIC with a different penalty (e.g., k=4)
aic1_k4 <- AIC(kw_reg1, k = 4)
print(aic1_k4)