ROC-AUC Curve For Comprehensive Analysis Of Machine Learning Models

In machine learning, when we build a model for a classification task, we rarely build only a single model. We never rely on a single model because we have many different algorithms that behave differently on different datasets. We always want to build the model that best fits the respective data set, so we try building different models and finally choose the best performing one. For this comparison we cannot always rely on a metric like the accuracy score, because on an imbalanced data set a model can score well simply by always predicting the majority class. It becomes important to check whether the positive class is actually predicted as positive and the negative class as negative by the model.
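To see why accuracy alone misleads here, consider this small invented example (the labels and the "model" are made up purely for illustration): a classifier that always predicts the majority class on a 90/10 imbalanced label set scores 90% accuracy yet has no discriminative power at all.

import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Made-up imbalanced labels: 90 negatives, 10 positives
y_true = np.array([0] * 90 + [1] * 10)

# A "model" that always predicts the majority (negative) class
y_pred = np.zeros(100)

print(accuracy_score(y_true, y_pred))   # 0.9 -- looks good
print(roc_auc_score(y_true, y_pred))    # 0.5 -- no better than chance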

For this, we make use of the Receiver Operating Characteristic (ROC) curve and the Area Under the Curve (AUC), which is plotted between the true positive and false positive rates. In this article, we will learn more about the ROC-AUC curve and how we use it to compare different machine learning models in order to select the best performing one. For this experiment, we will use the Pima Indians Diabetes data set, which can be downloaded from Kaggle.

What will we learn from this article?



  1. What is the ROC-AUC curve? How does it work?
  2. How to compare the performance of different models using the ROC-AUC curve?

1. What is the ROC-AUC curve? How does it work?

It is a visualization graph used to evaluate the performance of different machine learning models. The graph plots the true positive rate against the false positive rate, where the true positive rate is the fraction of actual positives correctly identified and the false positive rate is the fraction of actual negatives incorrectly flagged as positive. The area under the curve (AUC) is the summary of this curve and tells us how good a model is when we talk about its ability to generalize. If one model captures more AUC than the others, it is considered the best model among them; in other words, the higher the AUC, the better the model will be at classifying actual positives and actual negatives.
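As a quick hand-made illustration of these two rates (the labels and predictions below are invented for demonstration): TPR = TP / (TP + FN) and FPR = FP / (FP + TN), computed from the confusion matrix.

from sklearn.metrics import confusion_matrix

# Made-up binary labels and thresholded predictions
y_true = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 0, 1, 0, 1, 1, 0, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
tpr = tp / (tp + fn)  # fraction of actual positives caught
fpr = fp / (fp + tn)  # fraction of actual negatives falsely flagged
print(tpr, fpr)       # 0.75 0.25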

If the value of AUC = 1, the model is perfect at classifying the positive class as positive and the negative class as negative. If the value of AUC = 0, the model is classifying them exactly backwards: it predicts positives as negatives and negatives as positives. If the value is 0.5, the model cannot distinguish between the positive and negative classes at all. If it is between 0.5 and 1, the model has a better-than-random chance of separating positive class values from negative class values.
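A toy sketch of these three regimes, using made-up labels and scores:

from sklearn.metrics import roc_auc_score

y_true = [0, 0, 0, 1, 1, 1]

print(roc_auc_score(y_true, [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]))  # 1.0: perfect separation
print(roc_auc_score(y_true, [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]))  # 0.5: no discrimination
print(roc_auc_score(y_true, [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]))  # 0.0: perfectly inverted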

2. How to compare the performance of different models using the ROC-AUC curve?

Let us now understand practically how we can plot this graph and compare the performance of different models. We will first build four different classification models using different machine learning algorithms and then plot the ROC-AUC graph to find the best performing one. We will now quickly import the required libraries and the Pima diabetes data set. Refer to the code below for the same.

import numpy as np
import pandas as pd
import pylab as pl  # used later for plotting the ROC curves
from sklearn import svm
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv('pima.csv')
print(df)

Now we will separate the independent and dependent features into X and y respectively, followed by splitting the data set into training and testing sets. Use the code below for the same.

X = df.values[:,0:8]

Y = df.values[:,8]

X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.50, random_state=1)
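As a side note (an optional tweak, not part of the original walkthrough): because the diabetes labels are imbalanced, passing the stratify argument keeps the class ratio identical in both halves of the split.

# Optional: preserve the class ratio in train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, Y, test_size=0.50, random_state=1, stratify=Y)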

We have divided the data into training and testing sets; now we will build four different models for classifying the class, i.e. whether a patient is diabetic or not. Use the code below to build the respective models.

clf1 = LogisticRegression()

clf2 = svm.SVC(kernel="linear", probability=True)  # probability=True enables predict_proba

clf3 = RandomForestClassifier()

clf4 = DecisionTreeClassifier()

Since we have defined the four different classifiers, we will now fit the training data to each of them and predict probabilities for the testing data. Use the code below for the same.

probas1_ = clf1.fit(X_train, y_train).predict_proba(X_test)

probas2_ = clf2.fit(X_train, y_train).predict_proba(X_test)

probas3_ = clf3.fit(X_train, y_train).predict_proba(X_test)

probas4_ = clf4.fit(X_train, y_train).predict_proba(X_test)

Now we will compute the ROC curve and the AUC score for each of these classifiers. Use the code below for the same.

fpr1, tpr1, thresholds1 = roc_curve(y_test, probas1_[:, 1])
roc_auc_model1 = auc(fpr1, tpr1)
fpr2, tpr2, thresholds2 = roc_curve(y_test, probas2_[:, 1])
roc_auc_model2 = auc(fpr2, tpr2)
fpr3, tpr3, thresholds3 = roc_curve(y_test, probas3_[:, 1])
roc_auc_model3 = auc(fpr3, tpr3)
fpr4, tpr4, thresholds4 = roc_curve(y_test, probas4_[:, 1])
roc_auc_model4 = auc(fpr4, tpr4)
print("AUC for Logistic Regression Model :", roc_auc_model1)
print("AUC for SVM Model :", roc_auc_model2)
print("AUC for Random Forest Model :", roc_auc_model3)
print("AUC for Decision Tree Model :", roc_auc_model4)

Since we have got the AUC scores, we will now plot the ROC curves to visualize the performance of all four models. Use the code below to do the same.

pl.clf()
pl.plot(fpr1, tpr1, label="Logistic Model (area = %0.2f)" % roc_auc_model1)
pl.plot(fpr2, tpr2, label="SVC Model (area = %0.2f)" % roc_auc_model2)
pl.plot(fpr3, tpr3, label="Random Forest Model (area = %0.2f)" % roc_auc_model3)
pl.plot(fpr4, tpr4, label="Decision Tree Model (area = %0.2f)" % roc_auc_model4)
pl.plot([0, 1], [0, 1], 'k--')  # diagonal = random-chance baseline
pl.xlim([0.0, 1.0])
pl.ylim([0.0, 1.0])
pl.xlabel('False Positive Rate')
pl.ylabel('True Positive Rate')
pl.title('Receiver operating characteristic example')
pl.legend(loc="lower right")
pl.show()

We can see from the graph above that the SVC model captures the highest AUC and can be considered the best performing model among the four. In this way we can compute and compare different predictive models. We did this for binary classification, but we can do the same for multi-class classification models. Suppose we have three classes X, Y, and Z. If we are plotting the curve for class X, it is done as a classification of class X against all the other classes, i.e. Y and Z, and similarly for the other classes, as sketched below.
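A minimal sketch of that one-vs-rest idea, on a hypothetical three-class problem generated with make_classification (the data and classifier here are stand-ins, not part of the original experiment):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize

# Hypothetical three-class data standing in for classes X, Y and Z
X_mc, y_mc = make_classification(n_samples=300, n_classes=3,
                                 n_informative=4, random_state=1)
Xtr, Xte, ytr, yte = train_test_split(X_mc, y_mc, random_state=1)

probas = LogisticRegression(max_iter=1000).fit(Xtr, ytr).predict_proba(Xte)
yte_bin = label_binarize(yte, classes=[0, 1, 2])

# One ROC curve per class: this class vs. all the others
for i in range(3):
    fpr, tpr, _ = roc_curve(yte_bin[:, i], probas[:, i])
    print(f"AUC for class {i} vs rest: {auc(fpr, tpr):.3f}")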

Conclusion 

In this article, we discussed how we can compare different classification models using the ROC-AUC curve. We first explored what a ROC-AUC curve is and why it is better than an accuracy score for comparing different models. Finally, we built four different classification models on the Pima diabetes data set and plotted the ROC-AUC curve to pick the best performing model.

Do you want to know how we can deploy this model now? Check out the article titled “Complete Tutorial On Tkinter to Deploy ML Models”.

