How to reduce the support vector in SVM?

# How to reduce the support vector in SVM

**palbha**#2

Please see the link below; it might help (not sure, though):

https://www.mathworks.com/help/stats/classificationsvm.compact.html

**palbha**#4

Hi Shrikant, as far as I know sklearn does not have a direct way to do this. The only suggestions I can offer: try scaling the data and using a linear kernel, and if the number of columns/features is high, maybe try PCA and then SVM.
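A minimal sketch of the suggestion above (scaling, PCA, then a linear-kernel SVM), assuming a built-in dataset for illustration; the number of PCA components is an arbitrary choice:

```python
# Sketch: scale the data, reduce dimensionality with PCA, fit a
# linear-kernel SVM, then inspect how many support vectors were kept.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# StandardScaler + PCA shrink and decorrelate the feature space
# before the SVM sees it; n_components=10 is an assumption.
model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="linear"))
model.fit(X, y)

svc = model.named_steps["svc"]
print("support vectors per class:", svc.n_support_)
print("total support vectors:", svc.support_vectors_.shape[0])
```

This does not set the support-vector count directly; it only lets you observe how preprocessing changes it.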

**shrikant_1**#5

Hi palbha,

This question was asked in an interview, and at the time I was not able to explain how to reduce the number of support vectors. The suggestions you have given are good to practice with, and then we can analyse how they work. I will try them later and give you feedback on whatever results I get.

I have one document on how to reduce support vectors. Please have a look; what is your observation? SCA.pdf (430.1 KB)

Thanks a lot

**palbha**#6

Thanks Shrikant, I will also try to see if I can find anything more intuitive and will keep you posted.

**equbal49**#7

You can look at the `n_support_` attribute of sklearn's `SVC`. Note it is a fitted, read-only attribute: it reports the number of support vectors per class after training, rather than letting you set the count directly.
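For reference, `n_support_` can only be read after fitting, but the regularization parameter `C` influences the count indirectly: a smaller `C` tolerates more margin violations, which typically means more support vectors. A sketch on synthetic data (the dataset and `C` values are assumptions):

```python
# Compare support-vector counts for a loosely vs. tightly regularized SVC.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

loose = SVC(kernel="rbf", C=0.1).fit(X, y)    # wide margin: typically more SVs
tight = SVC(kernel="rbf", C=100.0).fit(X, y)  # narrow margin: typically fewer SVs

# n_support_ is read-only; we can only observe the effect of C on it.
print("C=0.1  ->", loose.n_support_.sum(), "support vectors")
print("C=100  ->", tight.n_support_.sum(), "support vectors")
```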

**ali.muttaleb**#9

Hello @shrikant_1. As far as I know, sklearn does not have something like what you describe, even though I have used SVM before to classify questions. My suggestion is to try scaling the data and using a linear or Gaussian (RBF) kernel. In my experience SVM is not easy to control or interpret; it is a fairly complex technique, and I ran into problems with it myself. You could also try RapidMiner and compare the results you get there. All the best.

**srinivas1008**#10

SVM has hyperparameters that can be tuned to find the best-fitting support vectors; there is no way to reduce or increase the support vectors directly. For example, in Python:

```python
svm_result = svm.SVC(kernel='rbf', C=best_C, gamma=best_gamma)
```

The kernel can be `'linear'` or `'rbf'`, C can be 1, 10, 100, etc., and gamma can be 0.1, 0.01, 0.001, etc. Whichever combination gives the best output in terms of accuracy, precision, F1-score, etc. can be used as the model for that dataset. Beyond the `svm.SVC()` parameters shown above, the others take default values if not specified; for the above, the values are:

```python
SVC(C=1, cache_size=200, class_weight=None, coef0=0.0,
    decision_function_shape='ovr', degree=3, gamma=0.001, kernel='rbf',
    max_iter=-1, probability=False, random_state=None, shrinking=True,
    tol=0.001, verbose=False)
```

You can find reading material on this on the net.
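The tuning loop described above can be sketched with `GridSearchCV`, which tries each C/gamma combination under cross-validation and keeps the best-scoring model; the parameter grid and dataset here are illustrative assumptions:

```python
# Grid-search C and gamma for an RBF SVM, then inspect how many
# support vectors the best estimator retains.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=42)

grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [1, 10, 100], "gamma": [0.1, 0.01, 0.001]},
    scoring="accuracy",  # could also be precision, f1, etc.
    cv=5,
)
grid.fit(X, y)

best = grid.best_estimator_
print("best params:", grid.best_params_)
print("support vectors kept:", best.n_support_.sum())
```

The support-vector count is still a side effect of the chosen hyperparameters, not something the search optimizes for directly.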