Improving Specificity or Sensitivity in linear discriminant analysis

r
data_science
machinelearning


Is there a way to improve the specificity/sensitivity of a linear discriminant analysis, as we can in a logistic regression model, by changing the classification threshold?

I am doing this in R:
mydata = read.csv("weather.csv")

## Link to download the file: https://www.biz.uiowa.edu/faculty/jledolter/DataMining/dataexercises.html

#### Missing value imputation using kNN

install.packages("VIM", dependencies = TRUE)
library(VIM)

# Which variables have missing values?
colnames(mydata)[colSums(is.na(mydata)) > 0]
mydata_imputed = kNN(mydata, variable = colnames(mydata)[colSums(is.na(mydata)) > 0], k = 5)
colSums(is.na(mydata_imputed))

# Keep only the original columns; drop the TRUE/FALSE indicator columns that kNN() appends
mydata_imputed = mydata_imputed[, 1:24]

library(caret)
set.seed(1234)
Index=createDataPartition(mydata_imputed$RainTomorrow,p=0.75,list = FALSE)
Train=mydata_imputed[Index,]
Test=mydata_imputed[-Index,]

library(MASS)
LDAModel1=lda(RainTomorrow~., data = Train[,-c(1:2)])
LDAModel1

Pred=predict(LDAModel1, Test)

CM=confusionMatrix(Pred$class,Test$RainTomorrow)
fourfoldplot(CM$table)
Acc_LDA=CM$overall[[1]]
Acc_LDA
Sensitivity_LDA=CM$byClass[[1]]
Sensitivity_LDA
Specificity_LDA=CM$byClass[[2]]
Specificity_LDA
### Specificity needs to be improved
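One possible approach (a sketch, not the only way): `predict()` on an `lda` object returns posterior class probabilities in `Pred$posterior`, so you can re-classify with a custom cutoff instead of the default rule of picking the most probable class. The code below assumes `RainTomorrow` has levels `"No"`/`"Yes"` and that a threshold of 0.3 is just an illustrative choice; you would tune it for your own trade-off.

```r
# Posterior probability of rain tomorrow ("Yes")
post_yes = Pred$posterior[, "Yes"]

# Re-classify with a custom threshold; lowering it below 0.5 predicts "Yes"
# more often, which shifts the sensitivity/specificity trade-off
threshold = 0.3
PredClass = factor(ifelse(post_yes > threshold, "Yes", "No"),
                   levels = levels(Test$RainTomorrow))

CM2 = confusionMatrix(PredClass, Test$RainTomorrow)
CM2$byClass[c("Sensitivity", "Specificity")]

# Optionally, choose a threshold from the ROC curve (pROC package):
# library(pROC)
# roc_obj = roc(Test$RainTomorrow, post_yes)
# coords(roc_obj, "best", ret = "threshold")
```

Note that `confusionMatrix()` treats the first factor level as the positive class by default, so check which class your sensitivity and specificity refer to (or set the `positive =` argument).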