Interpretable Machine Learning (iML) package
Interpretable ML (iML) is a set of data-type objects, visualizations, and interfaces that can be used by any method designed to explain the predictions of machine learning models (or, more generally, the output of any function). It currently contains the interface and I/O code from the Shap project, and it may eventually do the same for the Lime project.
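To illustrate the kind of additive explanation that iML's data types are designed to represent, here is a minimal, self-contained sketch that computes exact Shapley values for a toy model by enumerating all feature subsets. This does not use iML's own API (which is not documented here); it only demonstrates the underlying idea of attributing a prediction to individual features.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for one instance, by brute-force subset enumeration.

    `model` maps a feature vector to a scalar prediction; features outside the
    current subset are replaced by their baseline values.
    """
    n = len(x)

    def value(subset):
        # Features in `subset` take their instance value; the rest the baseline.
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return model(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for s in combinations(others, k):
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (value(set(s) | {i}) - value(set(s)))
        phis.append(phi)
    return phis

# Toy linear model: each feature's coefficient is recovered exactly.
model = lambda z: 2 * z[0] + 3 * z[1]
print(shapley_values(model, [1, 1], [0, 0]))  # → [2.0, 3.0]
```

By the efficiency property, the attributions sum to `model(x) - model(baseline)` (here 5.0); libraries like Shap compute the same quantity with far cheaper approximations.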
System | Target | Derivation | Build status
---|---|---|---
x86_64-linux | | /gnu/store/qj6a9har21fa46yxgiki5vi6mv2r25hr-python-iml-0.6.2.drv |
mips64el-linux | | /gnu/store/rnxlb2snp7a0r30cxd1ghmw0mgk9v64j-python-iml-0.6.2.drv |
i686-linux | | /gnu/store/w1a1zpms9gpwv36jd159lm2876q70a7y-python-iml-0.6.2.drv |
armhf-linux | | /gnu/store/4wz7d6qagh5gbnmcxa2p6m1fcq94ll8s-python-iml-0.6.2.drv |
aarch64-linux | | /gnu/store/z9q4nkn7hncb797aydlpfdxpwpqp1bd7-python-iml-0.6.2.drv |
Linter | Message | Location
---|---|---
inputs-should-be-native (identify inputs that should be native inputs) | 'python-nose' should probably be a native input |