# I am getting some problem in ROC Curve diagnostic. Can you help?

**Amar**#1

I have a ROC curve plot that doesn't look correct to me. The curve starts from (1, 0), yet I'm still getting an AUC of 0.90, so I think I'm making a mistake somewhere. Please suggest.

Thanks

**aayushmnit**#2

Hi Amar,

A ROC curve has Sensitivity on the Y axis and (1 − Specificity) on the X axis. I think the curve looks wrong because your X axis is labeled Specificity instead of 1 − Specificity.

Hope this helps.

Regards,

Aayush Agrawal

**Amar**#3

Aayush ,

But the problem is that I'm using the pROC package to generate this. Can you please suggest where I might have gone wrong?

**aayushmnit**#4

Amar,

I think the curve itself is correct; it's how the axes are labeled that is causing the confusion. That's just how the pROC package works. Normally we draw a ROC curve with 1 − specificity on the X axis, so the origin starts at zero. pROC plots the same curve, but instead of labeling the axis 1 − specificity with the origin at zero, it labels it specificity with the origin at one (the same thing, just read in reverse).

Conclusion: the curve is right and you have not made any mistake. You can tweak the labels if you want the plot to follow the conventional presentation. Hope this helps.
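As a small sketch of this (the labels and scores below are made up for illustration), pROC's plot function accepts a `legacy.axes` argument that relabels the X axis as 1 − specificity running from 0 to 1, which gives the conventional presentation:

```
library(pROC)

# Hypothetical binary labels and predicted scores
labels <- c(0, 0, 1, 1, 0, 1, 1, 0, 1, 1)
preds  <- c(0.1, 0.3, 0.8, 0.6, 0.4, 0.9, 0.7, 0.2, 0.5, 0.95)

roc_obj <- roc(labels, preds)

# Default: X axis labeled Specificity, running from 1 down to 0
plot(roc_obj)

# legacy.axes = TRUE: X axis labeled 1 - Specificity, running from 0 to 1
plot(roc_obj, legacy.axes = TRUE)
```

Both calls draw the same curve; only the axis annotation changes.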

Regards,

Aayush Agrawal

**Amar**#5

Aayush ,

Thank you for the help. When I tried this with another package, PROC, the result I got was much more intuitive.

Regards

Amardeep

**Tavish**#6

Amar,

Here is a link with code to compute the AUC-ROC values: https://www.kaggle.com/c/inria-bci-challenge/details/evaluation

The code is available in both Python and R. You can cross-validate your numbers against these functions.

Tavish

@amar A word of suggestion: when it comes to R, the "caret" package provides a one-stop solution for all the steps required to build a model, model evaluation included. This helps you avoid switching between different packages during the process. You can check previous blogs on Analytics Vidhya for more info on caret.
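As a sketch of the caret route (the simulated data and the choice of a logistic model here are illustrative, not from the thread), setting `classProbs = TRUE` and `summaryFunction = twoClassSummary` in `trainControl` makes caret compute and report the cross-validated ROC AUC itself:

```
library(caret)

# Illustrative two-class data from caret's built-in simulator
# (outcome column is "Class"; levels must be valid R names for classProbs)
set.seed(1)
df <- twoClassSim(200)

ctrl <- trainControl(method = "cv", number = 5,
                     classProbs = TRUE,                 # keep class probabilities
                     summaryFunction = twoClassSummary) # report ROC, Sens, Spec

fit <- train(Class ~ ., data = df, method = "glm",
             metric = "ROC",   # select the model by AUC
             trControl = ctrl)

fit$results$ROC  # cross-validated AUC
```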

**vajravi**#8

Hi Tavish/Aayush,

I have a query on ROC/AUC. I generated a ROC curve for a set of predictions and got AUC = X. Then I multiplied the predictions by 2, generated the ROC curve for these new predictions, and got the same value X as the AUC. I have been trying to understand why the ROC/AUC hasn't changed.

I thought that since the numbers are inflated by 2, there would be a different TPR and FPR at each threshold, hence a different AUC. Can one of you explain the gap in my understanding? Thanks in advance.

**aayushmnit**#9

The ROC curve (and hence the AUC) is invariant to scaling; it would stay the same even if you ranked the predictions and mapped them back to another scale. This is because the curve is not plotted against fixed probability thresholds: it is traced out of the sensitivity and specificity values obtained at every possible threshold, and those values depend only on the ordering of the predictions. Multiplying, dividing, adding, or subtracting a constant changes each prediction's value but not its rank, so every (sensitivity, specificity) pair, and therefore the curve and the AUC, remains unchanged.
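A minimal sketch of this invariance (the labels and scores are made up):

```
library(pROC)

labels <- c(0, 1, 0, 1, 1, 0, 1, 0, 1, 1)
preds  <- c(0.2, 0.7, 0.1, 0.9, 0.6, 0.4, 0.8, 0.3, 0.55, 0.95)

auc_original <- auc(roc(labels, preds))
auc_scaled   <- auc(roc(labels, preds * 2))    # same ranking of predictions
auc_shifted  <- auc(roc(labels, preds + 100))  # same ranking of predictions

# All three AUCs are identical because the ordering is unchanged
c(auc_original, auc_scaled, auc_shifted)
```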

**lopamudra1**#11

Please help me get the ROC value. Here is my code:

```
library(caret)

# Load the Wine dataset
wine_df <- read.csv("C:/Documents and Settings/wine_df")
wine_df$V1 <- as.factor(wine_df$V1)  # outcome must be a factor for classification

# Split the Wine dataset into train and test sets
set.seed(3033)
intrain <- createDataPartition(y = wine_df$V1, p = 0.7, list = FALSE)
training <- wine_df[intrain, ]
testing <- wine_df[-intrain, ]

# 10-fold cross-validation, repeated 3 times
trctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 3)
set.seed(3333)
knn_fit <- train(V1 ~ ., data = training, method = "knn",
                 trControl = trctrl,
                 preProcess = c("center", "scale"),
                 tuneLength = 10)

test_pred <- predict(knn_fit, newdata = testing)
confusionMatrix(test_pred, testing$V1)  # was testing$class, a column that does not exist
```
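To actually get a ROC/AUC value from a caret model like the one above, you need class probabilities rather than hard class predictions. A sketch, assuming the fitted model and data frames from the previous block and a binary outcome V1 (note the UCI Wine dataset's V1 actually has three classes, in which case `pROC::multiclass.roc` applies instead):

```
library(pROC)

# Refit with probabilities enabled; caret requires classProbs = TRUE
# (factor levels must be valid R variable names for this to work)
trctrl_prob <- trainControl(method = "repeatedcv", number = 10, repeats = 3,
                            classProbs = TRUE)
knn_fit <- train(V1 ~ ., data = training, method = "knn",
                 trControl = trctrl_prob,
                 preProcess = c("center", "scale"),
                 tuneLength = 10)

# Predicted class probabilities on the test set
test_prob <- predict(knn_fit, newdata = testing, type = "prob")

# Binary outcome: ROC from the probability of the second class level
roc_obj <- roc(testing$V1, test_prob[, 2])
auc(roc_obj)
plot(roc_obj)

# Three or more classes: multiclass AUC instead
# multiclass.roc(testing$V1, test_prob)
```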