Interpretable Machine Learning


Machine learning has great potential for improving production processes and research. But computers usually do not explain their predictions, which is a barrier to the adoption of machine learning. This article is about making machine learning models and their decisions interpretable.


After examining the concepts of interpretability, you will learn about simple, interpretable models such as decision trees, decision rules, and linear regression.


The focus then shifts to model-agnostic methods for interpreting black-box machine learning models, such as feature importance and accumulated local effects, and to explaining individual predictions with Shapley values and LIME.


All interpretation methods are explained in depth and critically discussed. How do they work under the hood? What are their strengths and weaknesses? How can their outputs be interpreted? This will allow you to choose and apply the most appropriate interpretation method for your machine learning project.


The focus is on machine learning models for tabular data (also called relational or structured data) and less on computer vision and natural language processing tasks.


This article is recommended for machine learning practitioners, data scientists, statisticians, and anyone else interested in making machine learning models interpretable.


Importance of Interpretability

A question people often ask is: why are we not content with just the model's predictions, and why do we want to know why a particular decision was made? Much of the answer has to do with the impact a model has on the real world.


A model designed to recommend films will have a much smaller impact than one created to predict the outcome of a drug treatment.


Permutation Importance

Which features does the model consider important? Which features have a greater impact on the model's predictions than others? This concept is called feature importance, and permutation importance is a widely used technique for calculating it.


It helps us see when our model produces non-intuitive results, and it helps show others that our model works as we would hope.


Permutation importance works with many scikit-learn estimators. The idea is simple: randomly shuffle the values of one column in the validation data while leaving all the other columns intact.
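
The following is a minimal sketch of that idea, assuming a fitted scikit-learn classifier and a held-out validation set; the dataset and model here are illustrative, not prescribed by the article.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative data and model: any fitted estimator with a validation set works.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

baseline = model.score(X_val, y_val)  # accuracy on the untouched validation data

rng = np.random.default_rng(0)
for col in X_val.columns:
    X_shuffled = X_val.copy()
    # Shuffle only this column; all other columns stay intact.
    X_shuffled[col] = rng.permutation(X_shuffled[col].values)
    drop = baseline - model.score(X_shuffled, y_val)
    print(f"{col}: accuracy drop = {drop:.4f}")
```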


A feature is considered "important" if shuffling its values causes the model's accuracy to decrease a lot, that is, the error increases. Conversely, a feature is considered "not important" if shuffling its values barely affects the model's accuracy.
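
scikit-learn also provides this procedure as a built-in helper, sklearn.inspection.permutation_importance, which repeats the shuffling several times and averages the score drop; the snippet below continues the illustrative example above.

```python
from sklearn.inspection import permutation_importance

# Larger mean drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
for name, mean_drop in sorted(zip(X_val.columns, result.importances_mean),
                              key=lambda item: -item[1]):
    print(f"{name}: mean accuracy drop = {mean_drop:.4f}")
```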
