How is SVD different from other matrix factorization techniques like Non-negative Matrix Factorization?

svd
matrix_factorization

#1

I was going through various matrix factorization techniques such as SVD and non-negative matrix factorization (NMF). I understood that these techniques are used to reduce the dimensions of large datasets. NMF only accepts non-negative values as input, while SVD can take both positive and negative values.

While deciding which technique to use for matrix factorization, is that the only criterion we look at, i.e. if we only have non-negative values we use NMF, and when we have a combination of positive and negative values we use SVD? Or are there other criteria as well?

What is the difference between these two techniques and when do we use SVD?


#2

SVD and NMF are both matrix decomposition techniques, but they are very different and are generally used for different purposes.

SVD gives the eigenvectors of the input matrix (strictly speaking, the singular vectors, which are the eigenvectors of AᵀA and AAᵀ). The technique is generally used where these eigenvectors are of interest to us. PCA is one classic example.
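
Here is a minimal sketch of that connection, assuming NumPy; the matrix `A` is just a toy example. The right singular vectors from `np.linalg.svd` are the eigenvectors of AᵀA, and the squared singular values are the corresponding eigenvalues:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

# Thin SVD: A = U @ diag(S) @ Vt, with S sorted in descending order
U, S, Vt = np.linalg.svd(A, full_matrices=False)

# The rows of Vt are the eigenvectors of A.T @ A, and the squared
# singular values are the corresponding eigenvalues.
eigvals, eigvecs = np.linalg.eigh(A.T @ A)
print(S**2)           # squared singular values
print(eigvals[::-1])  # eigenvalues of A.T @ A, largest first — they match
```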

NMF is another decomposition technique where we try to split a matrix R into the product of two matrices U and V. The objective is to find U and V such that their matrix product approximates R. Here R is typically very sparse, which is true in a lot of recommender systems. In such cases NMF works better, as handling of missing values is built into the recommender-style variants of the algorithm.
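
A minimal NMF sketch using scikit-learn, assuming a tiny hypothetical ratings-style matrix `R` and an arbitrary rank of 2. Note that scikit-learn's `NMF` treats zeros as observed values, not missing ones; the recommender-style variants that skip unobserved entries are usually custom implementations:

```python
import numpy as np
from sklearn.decomposition import NMF

R = np.array([[5.0, 3.0, 0.0, 1.0],
              [4.0, 0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 5.0],
              [0.0, 1.0, 5.0, 4.0]])

model = NMF(n_components=2, init='nndsvda', max_iter=500, random_state=0)
U = model.fit_transform(R)   # n_rows x k, non-negative
V = model.components_        # k x n_cols, non-negative
print(np.round(U @ V, 2))    # approximate reconstruction of R
```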

SVD, in contrast, doesn’t assume anything about missing values, so you need to do some missing-value imputation before applying it. This might bring in unnecessary noise. But if your ratings matrix is not too sparse, SVD might produce better results. SVD results are also more deterministic than those of NMF, which can play a role in which method you choose.
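
A minimal sketch of that imputation step, assuming NaN marks a missing rating and using column means as a naive fill strategy (both are illustrative assumptions, not a recommendation):

```python
import numpy as np

R = np.array([[5.0, 3.0, np.nan, 1.0],
              [4.0, np.nan, 2.0, 1.0],
              [1.0, 1.0, np.nan, 5.0]])

# Replace each missing entry with its column mean before factorizing.
col_means = np.nanmean(R, axis=0)
R_filled = np.where(np.isnan(R), col_means, R)

U, S, Vt = np.linalg.svd(R_filled, full_matrices=False)
print(S)  # singular values of the imputed matrix
```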


#3

Hi @dileep.31,

Thanks for your reply. It is very helpful.

I still have two doubts:

What are these eigenvectors, and can you please give me an example or two of how they can be helpful?

Can you please elaborate on this part?
That would be really helpful for me.


#4

Eigenvectors are essentially properties of a matrix. When a matrix is multiplied by one of its eigenvectors, only the magnitude of the eigenvector changes (it is scaled by the corresponding eigenvalue), but not its direction. This is the simplest definition of an eigenvector.
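
A quick numeric check of that definition with NumPy; the symmetric matrix `A` is just an example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)
v = eigvecs[:, 0]   # first eigenvector
lam = eigvals[0]    # its eigenvalue

print(A @ v)        # same direction as v...
print(lam * v)      # ...only the magnitude (scale) changes
```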

SVD is popularly used in PCA (Principal Component Analysis), where the eigenvectors are orthogonal to each other (a special case of SVD, as the input covariance matrix is symmetric). In linear regression, we assume there is no multicollinearity among the features. If they do suffer from multicollinearity, the regression assumptions fail and the results become unstable. To fix that, we can transform the features using these eigenvectors into orthonormal features (i.e., each feature is orthogonal to every other feature, so they become ‘independent’). Once we transform the features using PCA, we can apply linear regression on the transformed features and not worry about multicollinearity; a sketch follows below.
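
A minimal sketch of that PCA-then-regression idea, assuming scikit-learn; the synthetic data (with `x2` nearly duplicating `x1`) is an illustrative assumption:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.01, size=200)  # almost perfectly collinear with x1
X = np.column_stack([x1, x2])
y = 3 * x1 + rng.normal(scale=0.1, size=200)

# PCA components are orthogonal, so the transformed features are uncorrelated.
X_pca = PCA(n_components=2).fit_transform(X)
model = LinearRegression().fit(X_pca, y)
print(model.coef_)  # stable coefficients on the decorrelated features
```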

Another popular use of SVD is in LSI (Latent Semantic Indexing), a technique used in text document classification. LSI helps in understanding the relationships between different text documents.
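
A minimal LSI sketch, assuming scikit-learn: TF-IDF followed by a truncated SVD. The four toy documents and the choice of 2 components are illustrative assumptions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "the cat sat on the mat",
    "a cat chased a mouse",
    "stock markets fell sharply",
    "investors sold stocks in falling markets",
]

tfidf = TfidfVectorizer().fit_transform(docs)       # documents x terms
lsi = TruncatedSVD(n_components=2, random_state=0)  # keep 2 latent "topics"
doc_topics = lsi.fit_transform(tfidf)
print(doc_topics)  # animal docs and finance docs separate along the components
```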

LSI, PCA, eigenvectors, and multicollinearity are huge topics. I have simplified things a lot here to give a brief description; you can Google them for more details.

SVD is a more ‘insightful’ factorization technique. NMF gives only the U and V matrices, but SVD gives a Sigma matrix along with these two. Sigma contains the singular values, which tell us how much information each component holds. Such information is not available in NMF.
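
A small sketch of that insight, assuming NumPy: each singular value’s share of the total “energy” tells you how much of the matrix that component explains. The random matrix `A` is just an example:

```python
import numpy as np

A = np.random.default_rng(0).normal(size=(6, 4))
S = np.linalg.svd(A, compute_uv=False)  # singular values only

energy = S**2 / np.sum(S**2)
print(np.round(energy, 3))             # fraction of "information" per component
print(np.round(np.cumsum(energy), 3))  # cumulative: how many components suffice
```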

SVD computes eigenvectors, which are deterministic for a given matrix, so they don’t change much based on which package you use or what initial conditions you have. NMF, on the other hand, tries to find the best-fit decomposed matrices: they are fitted on some training data and then evaluated on test data. Depending on the technique used to obtain these smaller matrices, you may end up with different results. You should also consider regularization in NMF, while you don’t need to worry about it in SVD.
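
A quick demonstration of the determinism point, assuming scikit-learn: the same NMF problem run from two random initializations can land on different factor matrices, while the SVD of a fixed matrix is unique up to signs. The matrix `R` is an arbitrary example:

```python
import numpy as np
from sklearn.decomposition import NMF

R = np.abs(np.random.default_rng(0).normal(size=(10, 6)))

for seed in (1, 2):
    model = NMF(n_components=3, init='random', random_state=seed, max_iter=1000)
    U = model.fit_transform(R)
    # Similar reconstruction error, but potentially different factors
    print(seed, np.round(model.reconstruction_err_, 4), np.round(U[0], 3))
```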


#5

Hi @dileep.31,

Thanks a lot. It is a great explanation of SVD as well as NMF.