I am having a problem with an ROC curve diagnostic. Can you help?

auc
roc

#1

I have this ROC curve plot which doesn't look correct to me. The curve starts from (1, 0), but I'm still getting an AUC of 0.90, so I think I'm making some mistake. Please suggest.

Thanks


#2

Hi Amar,

An ROC curve has Sensitivity on the Y axis and (1 - Specificity) on the X axis. I think your curve has been drawn wrongly, because your X axis is labeled Specificity instead of 1 - Specificity.
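
To make the two axes concrete, here is a minimal pure-Python sketch (with made-up labels and scores) of how sensitivity and 1 - specificity are computed at a single threshold:

```python
# Sensitivity (TPR) and 1 - Specificity (FPR) at one threshold,
# using hypothetical labels/scores purely for illustration.
y_true  = [1, 1, 1, 0, 0, 1, 0, 0]                     # actual classes
y_score = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.2]   # predicted probabilities

def tpr_fpr(y_true, y_score, threshold):
    tp = sum(1 for y, s in zip(y_true, y_score) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(y_true, y_score) if y == 1 and s < threshold)
    fp = sum(1 for y, s in zip(y_true, y_score) if y == 0 and s >= threshold)
    tn = sum(1 for y, s in zip(y_true, y_score) if y == 0 and s < threshold)
    sensitivity = tp / (tp + fn)             # Y axis of the ROC curve
    one_minus_spec = fp / (fp + tn)          # X axis = 1 - specificity
    return sensitivity, one_minus_spec

print(tpr_fpr(y_true, y_score, 0.5))  # (1.0, 0.5) on this toy data
```

At a threshold of 0.5 on this toy data, the plotted point would be sensitivity 1.0 on the Y axis against 0.5 on the X axis; sweeping the threshold from high to low traces out the full curve.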

Hope this helps.

Regards,
Aayush Agrawal


#3

Aayush,
But the problem is that I'm using the pROC package to generate this. Can you please suggest where I might have gone wrong?


#4

Amar,

I think the curve has been made correctly; it's just how it is represented through the labels that is causing the trouble. That's how the pROC package works. Normally, when we make an ROC curve, we put 1 - Specificity on the X axis with the origin at zero. pROC has plotted the same thing, but instead of labeling the axis 1 - Specificity with the origin at zero, it labels it Specificity with the origin at one (same thing!).

Conclusion: the curve is right and you have not made any mistake. You can tweak the labels if you want it to be theoretically correct. Hope this helps.

Regards,
Aayush Agrawal


#5

Aayush,
Thank you for the help. When I tried this with another package, PROC, the result I got was much more intuitive. And you were totally right about the labels. Thank you so much for your help.

Regards
Amardeep


#6

Amar,
Here is a link which will give you code to compute AUC-ROC values: https://www.kaggle.com/c/inria-bci-challenge/details/evaluation
The code is in both Python and R. You can cross-validate your numbers with these functions.
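
If you want something even simpler to sanity-check against, here is a pure-Python sketch that builds the ROC points by sweeping thresholds and integrates them with the trapezoid rule (it assumes no tied scores; the functions at the link above handle the general case):

```python
# Pure-Python AUC: sweep each score as a threshold, collect (FPR, TPR)
# points, then integrate with the trapezoid rule. Illustrative only.
def roc_auc(y_true, y_score):
    pairs = sorted(zip(y_score, y_true), reverse=True)  # descending scores
    pos = sum(y_true)
    neg = len(y_true) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]                 # (FPR, TPR), start at the origin
    for _, y in pairs:
        if y == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    auc = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        auc += (x1 - x0) * (y0 + y1) / 2  # trapezoid between adjacent points
    return auc
```

For example, perfectly separated scores give `roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2]) == 1.0`, and perfectly inverted scores give an AUC of 0.0.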

Tavish


#7

@amar a word of suggestion: when it comes to R, the "caret" package provides a one-stop solution for all the steps required to build a model, model evaluation included. This helps avoid referring to different packages during the process… you can check previous blogs on Analytics Vidhya for more info on caret.


#8

Hi Tavish/Aayush,

I have a query on ROC/AUC. I generated an ROC/AUC curve from a set of predictions and got some value 'X' as the AUC. I then multiplied the predictions by 2, generated the ROC/AUC curve for these new predictions, and got the same value 'X' as the AUC. I have been trying to understand why the ROC/AUC hasn't changed.
I thought that since the numbers are inflated by 2, there would be a different TPR and FPR at each threshold, and hence a different AUC. Can one of you explain where the gap in my understanding is? Thanks in advance.


#9

@vajravi,

The ROC curve and AUC are independent of scaling, and will remain the same even if you rank the predictions and map them back onto a scale. This is because the curve is not plotted against the probability threshold; it is plotted as sensitivity vs. specificity, and those values remain unchanged when you multiply, divide, add, or subtract a constant. You can interpret it in terms of ranking: as long as the ranking of the values does not change, the ROC curve remains the same.
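
A small pure-Python sketch of this ranking view (with made-up data): AUC equals the fraction of positive/negative pairs ranked correctly, so any transform that preserves the ordering, such as multiplying every score by 2, produces exactly the same number:

```python
# AUC as the probability that a random positive outranks a random
# negative (Mann-Whitney view). It depends only on the ORDER of the
# scores, so order-preserving transforms leave it unchanged.
def rank_auc(y_true, y_score):
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)   # ties count as half
    return wins / (len(pos) * len(neg))

y_true  = [1, 0, 1, 0, 1, 0]
y_score = [0.8, 0.3, 0.4, 0.5, 0.9, 0.1]

print(rank_auc(y_true, y_score))                   # original scores
print(rank_auc(y_true, [2 * s for s in y_score]))  # doubled: same AUC
```

Doubling (or adding a constant to) the scores never swaps the order of any positive/negative pair, so every term in the sum is identical and the AUC cannot change.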


#10

@Amar check out this post for more details on the AUC-ROC curve:

Hope this helps. :slight_smile: