I attended the Machine Learning Conference in NYC last month and was lucky enough to catch a presentation by Dan Mallinger. The talk focused on communicating models and methods to others regardless of their technical background. That communication starts with understanding the models and methods we are using ourselves, so that we can present them to others.
One way of gaining a greater understanding of our model is through sensitivity analysis. Through this analysis we can visualize different feature interactions and the relationship between each feature and the response. I stumbled onto Marcus Beck’s blog post on sensitivity analysis for neural networks. His post provides great detail on how to perform the analysis and includes a function for neural network sensitivity analysis. Using his analysis as a template, I created an example using a gradient boosted tree model. The biopsy data set provided in the MASS package was used to train the model via the caret package.
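A minimal sketch of that setup (assuming MASS and caret are installed; the cross-validation settings and seed here are illustrative choices of mine, not necessarily the exact ones used in the post):

```r
library(MASS)  # provides the biopsy data set

# Drop the ID column and rows with missing values before training
biopsy_clean <- na.omit(biopsy)[, -1]

# Hypothetical caret call; method = "gbm" fits a gradient boosted tree model
if (requireNamespace("caret", quietly = TRUE)) {
  set.seed(42)
  fit <- caret::train(class ~ ., data = biopsy_clean, method = "gbm",
                      verbose = FALSE,
                      trControl = caret::trainControl(method = "cv", number = 5))
}
```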
After training our model, the next step is to generate the quantile values for each feature.
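With base R this is a single `apply` over the predictor columns (the `V1`–`V9` names come from the biopsy data set itself; the probability cut points are an assumption on my part):

```r
library(MASS)
biopsy_clean <- na.omit(biopsy)[, -1]
features <- biopsy_clean[, 1:9]  # V1..V9, predictors only

# 0th, 25th, 50th, 75th, and 100th percentile of every feature,
# giving a 5 x 9 matrix of quantile values
quantiles <- apply(features, 2, quantile, probs = c(0, 0.25, 0.5, 0.75, 1))
```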
Once the quantiles have been generated, sensitivity analysis can begin. For each feature of the biopsy data set, the range of possible values is generated. A response matrix is created to store the quantile, the value of the feature being tested, and the response of the model.
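For a single feature, the grid of test values and the empty response structure might look like this (the variable names and the 50-point grid are my own choices, not the post's):

```r
library(MASS)
biopsy_clean <- na.omit(biopsy)[, -1]

feature <- "V1"
# sweep the feature's observed range with an evenly spaced grid
values <- seq(min(biopsy_clean[[feature]]),
              max(biopsy_clean[[feature]]), length.out = 50)

# one row per (quantile, value) pair; `pred` is filled in after prediction
probs <- c(0, 0.25, 0.5, 0.75, 1)
response <- expand.grid(quantile = probs, value = values)
response$pred <- NA_real_
```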
The next step is to create a test set for the model: it contains each value of a single feature, paired with the quantile values of all other features.
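Building that test set for one feature at one quantile could be sketched as follows (base R only; the `make_test_set` helper is my own, not a function from the post):

```r
library(MASS)
biopsy_clean <- na.omit(biopsy)[, -1]
features <- biopsy_clean[, 1:9]

make_test_set <- function(feature, prob, n = 50) {
  # hold every feature at its `prob` quantile...
  fixed <- vapply(features, quantile, numeric(1), probs = prob)
  test <- as.data.frame(t(fixed))[rep(1, n), ]
  # ...then let the feature of interest sweep its observed range
  test[[feature]] <- seq(min(features[[feature]]),
                         max(features[[feature]]), length.out = n)
  rownames(test) <- NULL
  test
}

test_set <- make_test_set("V1", 0.5)
```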
Now I can run the model on each value of the feature while holding all other features constant at their quantile values.
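Looping over the quantiles and collecting predictions might look like this; to keep the sketch self-contained I substitute a simple linear model for the trained caret object, so the `predict()` call would point at the gbm fit in practice:

```r
library(MASS)
biopsy_clean <- na.omit(biopsy)[, -1]
features <- biopsy_clean[, 1:9]

# stand-in for the trained caret model (NOT the post's gbm fit)
train_df <- cbind(features, y = as.numeric(biopsy_clean$class == "malignant"))
stand_in <- lm(y ~ ., data = train_df)

probs <- c(0, 0.25, 0.5, 0.75, 1)
results <- lapply(probs, function(p) {
  # all features pinned at their p-th quantile...
  fixed <- vapply(features, quantile, numeric(1), probs = p)
  test <- as.data.frame(t(fixed))[rep(1, 50), ]
  # ...except V1, which sweeps its observed range
  test$V1 <- seq(min(features$V1), max(features$V1), length.out = 50)
  data.frame(quantile = p, V1 = test$V1,
             pred = predict(stand_in, newdata = test))
})
```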
After the responses from the model have been calculated, I reshape and combine the results.
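Stacking the per-quantile results into one long data frame ready for plotting can be done in base R (Marcus Beck's original uses reshape-style melting; this is an assumed equivalent on synthetic stand-in data):

```r
# per-quantile result frames, as produced in the previous step (synthetic here)
results <- lapply(c(0, 0.25, 0.5, 0.75, 1), function(p) {
  data.frame(quantile = p, value = 1:10, pred = runif(10))
})

# stack into one long data frame; quantile becomes a factor so it can
# drive colour/grouping in the plot
long <- do.call(rbind, results)
long$quantile <- factor(long$quantile)
```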
The last task is to plot the analysis results in a faceted plot. To help style the plots I used the fte_theme function written by Max Woolf with a few slight modifications.
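A faceted ggplot2 sketch of that final plot (assuming ggplot2 is installed; `theme_minimal()` stands in for Max Woolf's `fte_theme()` so the example is self-contained, and the data frame is synthetic):

```r
# synthetic long-format results: two features, five quantiles
long <- data.frame(
  feature  = rep(paste0("V", 1:2), each = 50),
  quantile = factor(rep(c(0, 0.25, 0.5, 0.75, 1), times = 20)),
  value    = rep(1:10, 10),
  pred     = runif(100)
)

if (requireNamespace("ggplot2", quietly = TRUE)) {
  library(ggplot2)
  p <- ggplot(long, aes(x = value, y = pred, colour = quantile)) +
    geom_line() +
    facet_wrap(~ feature, scales = "free_x") +  # one panel per feature
    theme_minimal()                             # stand-in for fte_theme()
}
```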