I am running into a problem with an ROC curve diagnostic. Can you help?



I have this ROC curve plot which doesn't look correct to me. The curve starts from (1, 0), yet I'm still getting an AUC of 0.90, so I think I'm making some mistake. Please suggest.



Hi Amar,

An ROC curve has Sensitivity on the Y axis and (1 - Specificity) on the X axis. I think something has gone wrong with your curve, because your X axis is labeled Specificity instead of 1 - Specificity.
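To make the axes concrete, here is a minimal pure-Python sketch (with made-up labels and scores, not Amar's data) of how one point on the ROC curve is computed at a single threshold:

```python
# Sensitivity (TPR) and 1 - specificity (FPR) at one threshold,
# using small made-up scores and labels for illustration.
labels = [1, 1, 0, 1, 0, 0]                 # 1 = positive class
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]     # model scores

def sens_spec(labels, scores, threshold):
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    sensitivity = tp / (tp + fn)   # Y axis of the ROC curve
    specificity = tn / (tn + fp)   # conventional X axis is 1 - specificity
    return sensitivity, specificity

sens, spec = sens_spec(labels, scores, 0.5)
print("TPR:", sens, "FPR:", 1 - spec)  # one (FPR, TPR) point on the curve
```

Sweeping the threshold from high to low traces out the full curve, one (FPR, TPR) point at a time.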

Hope this helps.

Aayush Agrawal


Aayush,
But the problem is that I'm using the pROC package to generate this. Can you please suggest where I might have gone wrong?



I think the curve itself is made correctly; only the way it is labeled is causing this confusion, and that is simply how the pROC package works. Normally when we draw an ROC curve, we put 1 - specificity on the X axis with the origin at zero. pROC plots the same thing, but labels the axis Specificity and starts it at one, running down to zero (same thing!).

Conclusion: the curve is right and you have not made any mistake. You can tweak the labels if you want the plot to follow the textbook convention; if I remember correctly, pROC's plot() has a legacy.axes = TRUE argument that draws 1 - specificity for exactly this reason. Hope this helps.
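A quick way to convince yourself, as a pure-Python sketch with made-up specificity values: reading a Specificity axis from 1 down to 0 visits exactly the same points as a conventional 1 - Specificity axis read from 0 up to 1.

```python
# Made-up specificity values, listed the way pROC labels its X axis (1 -> 0).
specificities = [1.0, 0.75, 0.5, 0.25, 0.0]

# The conventional X axis is just the complement of each value.
fpr = [1 - s for s in specificities]
print(fpr)  # runs 0 -> 1: same curve, relabeled axis
```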

Aayush Agrawal


Aayush,
Thank you for the help. When I tried this with another package, PROC, the result I got was much more intuitive. And you were totally right about the labels. Thank you so much for your help.



Here is a link with code to compute the AUC-ROC values: https://www.kaggle.com/c/inria-bci-challenge/details/evaluation
The code is provided in both Python and R, so you can cross-validate your numbers with these functions.
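If you just want a sanity check without running the Kaggle scripts, AUC can also be computed directly from its rank (Mann-Whitney) formulation. Here is a small pure-Python sketch (made-up labels and scores) you could cross-check package output against:

```python
def auc_score(labels, scores):
    """AUC = probability that a randomly chosen positive scores higher than a
    randomly chosen negative, counting ties as 1/2 (Mann-Whitney formulation)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
print(auc_score(labels, scores))  # 8 of 9 positive/negative pairs are ordered correctly
```

This brute-force pairwise version is O(n_pos * n_neg), so it is only for cross-checking small examples, not production use.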



@amar a word of suggestion: when it comes to R, the “caret” package provides a one-stop solution for all the steps required to build a model, model evaluation included. This helps you avoid switching between different packages during the process…you can check previous blogs on Analytics Vidhya for more info on caret.


Hi Tavish/Aayush,

I have a query on ROC/AUC. I generated an ROC curve from a particular set of predictions and got an AUC of ‘X’. Then I multiplied the predictions by 2, generated the ROC curve for these new predictions, and got the same AUC ‘X’. I have been trying to understand why the ROC/AUC hasn't changed.
I thought that since the numbers are inflated by 2, there would be a different TPR and FPR at each threshold, and hence a different AUC. Can one of you explain the gap in my understanding? Thanks in advance.



The ROC curve and AUC are independent of scaling; they would even stay the same if you ranked the predictions and mapped the ranks back onto another scale. This is because the curve is not plotted against the probability threshold itself: it is traced out by the sensitivity and specificity values over all thresholds, and that set of pairs remains unchanged when you multiply, divide, add, or subtract a constant. You can think of it in terms of ranking: as long as the ranking of your predictions does not change, the ROC curve (and hence the AUC) will not change.
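This is easy to verify numerically. Below is a pure-Python sketch (made-up predictions, and a simple rank-based AUC rather than any particular package's implementation) showing that scaling or shifting the predictions leaves the AUC unchanged:

```python
def auc_score(labels, scores):
    """Rank-based AUC: fraction of positive/negative pairs ordered correctly,
    with ties counted as 1/2."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 0, 1, 0, 1, 0]
preds  = [0.81, 0.40, 0.45, 0.52, 0.66, 0.30]   # made-up predictions

a1 = auc_score(labels, preds)                    # original predictions
a2 = auc_score(labels, [p * 2 for p in preds])   # multiplied by 2
a3 = auc_score(labels, [p + 10 for p in preds])  # shifted by a constant
print(a1, a2, a3)  # all three identical
```

Any strictly order-preserving transform (multiply by a positive constant, add a constant, take a log of positive scores, etc.) gives the same ranking, hence the same curve.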


@Amar check out this post for more details on the AUC-ROC curve:

Hope this helps.:slight_smile:


Please help me to get the ROC value. I have this code:

library(caret)

wine_df <- read.csv("C:/Documents and Settings/wine_df")

# Split the Wine dataset into train and test sets
intrain <- createDataPartition(y = wine_df$V1, p = 0.7, list = FALSE)
training <- wine_df[intrain, ]
testing <- wine_df[-intrain, ]

trctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 3)
knn_fit <- train(V1 ~ ., data = training, method = "knn",
                 trControl = trctrl,
                 preProcess = c("center", "scale"),
                 tuneLength = 10)

test_pred <- predict(knn_fit, newdata = testing)

@lopamudra1 use this code :point_down:

library(pROC)

# roc() needs a numeric score, so predict class probabilities instead of labels
# (assumes a binary outcome; [, 2] takes the probability of the second class level)
test_prob <- predict(knn_fit, newdata = testing, type = "prob")[, 2]
auc(roc(testing$V1, test_prob)) # AUC score


Will you please help me to understand what the category and prediction arguments are in terms of my code?


@lopamudra1 again, refer to my code above…I have made some changes.