PIE {PIE}R Documentation

PIE: A Partially Interpretable Model with Black-box Refinement

Description

The PIE package implements the novel Partially Interpretable Model (PIE) framework introduced by Wang et al. <arXiv:2105.02410>. The framework jointly trains an interpretable model and a black-box model to achieve both high predictive performance and partial model transparency.

Functions

- PIE(): Main function for training a PIE model on a dataset.
- predict.PIE(): Main function for generating predictions from a fitted PIE model.
- data_process(): Processes data into the format required by the PIE model.
- sparsity_count(): Counts the number of features selected by the group lasso.
- RPE(): Evaluates the RPE of a PIE model.
- MAE(): Evaluates the MAE of a PIE model.

For more details, see the documentation for individual functions.
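Examples

A minimal end-to-end sketch of the workflow implied by the functions above. The argument names (raw_data, the objects passed to the evaluation functions) are illustrative assumptions, not the package's documented signatures; consult ?PIE, ?predict.PIE, and the individual help pages for the actual interfaces.

```r
library(PIE)

## Hypothetical workflow: preprocess, fit, predict, evaluate.
## Exact arguments are assumptions for illustration only.
dat  <- data_process(raw_data)   # format data for use by the PIE model
fit  <- PIE(dat)                 # jointly train interpretable + black-box parts
pred <- predict(fit, dat)        # dispatches to predict.PIE()

RPE(fit)                         # prediction-error metric reported by the package
MAE(fit)                         # mean absolute error
sparsity_count(fit)              # number of features retained by the group lasso
```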

Author(s)

Maintainer: Jingyi Yang jy4057@stern.nyu.edu

[Package PIE version 1.0.0 Index]