Model_Interpretability

In this project, I have implemented LIME (Local Interpretable Model-agnostic Explanations) from scratch in PyTorch. LIME is part of the emerging field of explainable AI. It is a model-agnostic approach: it does not depend on the model type, which makes it broadly applicable for model interpretation. LIME also works locally: rather than explaining the model globally, it explains the prediction for each individual data point.

The procedure is:

1. Convert the target data point into an interpretable representation (a vector of interpretable components).
2. Randomly sample a number of vectors in that interpretable space and map them back into the original feature space.
3. Use the black-box model to predict outputs for these sampled points.
4. Fit a linear model on the sampled interpretable vectors and the corresponding predictions, weighting each sample by its proximity to the original point.
5. Read the explanation off the coefficients of that linear model.
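The steps above can be sketched in a few lines of PyTorch. This is a minimal, hypothetical sketch (the function name and the choices below are illustrative, not the repo's actual API): it treats each feature as its own interpretable component, uses binary masks as the interpretable vectors, replaces masked-out features with a zero baseline, and weights samples with an exponential kernel on the distance to the all-ones mask.

```python
import torch

def lime_explain(model, x, num_samples=500, kernel_width=0.75):
    """Minimal LIME sketch (illustrative, not the repo's exact implementation).

    model: callable mapping a batch of inputs (N, d) to scalar outputs (N,).
    x:     1-D tensor of shape (d,), the data point to explain.
    Returns a (d,) tensor of per-feature importance weights.
    """
    d = x.shape[0]
    # 1. Sample binary masks in the interpretable space (1 = keep feature).
    z = torch.randint(0, 2, (num_samples, d), dtype=torch.float32)
    # 2. Map masks back to the original space: masked-out features are
    #    replaced by zero (a simple baseline choice).
    x_perturbed = z * x
    # 3. Query the black-box model on the perturbed points.
    with torch.no_grad():
        y = model(x_perturbed)
    # 4. Weight samples by proximity to the original point (all-ones mask).
    dist = torch.norm(z - torch.ones(d), dim=1)
    w = torch.exp(-(dist ** 2) / kernel_width ** 2)
    # 5. Fit a weighted linear model y ~ z @ beta + b via weighted least
    #    squares: scale both sides by sqrt(w) and solve with lstsq.
    sw = w.sqrt().unsqueeze(1)
    z_aug = torch.cat([z, torch.ones(num_samples, 1)], dim=1)  # intercept
    beta = torch.linalg.lstsq(sw * z_aug, sw * y.reshape(-1, 1)).solution
    return beta[:d].flatten()

# Demo: a transparent linear "black box"; the local linear fit should
# recover its weights exactly, since the model is already linear.
torch.manual_seed(0)
true_w = torch.tensor([3.0, -2.0, 0.0])
black_box = lambda batch: batch @ true_w
explanation = lime_explain(black_box, torch.tensor([1.0, 1.0, 1.0]))
```

For a genuinely non-linear model the coefficients would instead describe the best locally weighted linear approximation around the chosen point, which is exactly what the explanation is meant to capture.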
