Can anyone help in Understanding the Bias-Variance Tradeoff for a machine learning model?



Recently, I used GBM, RF and SVM for a retail banking problem, but I am unable to connect the bias-variance tradeoff to these models. Can anyone help me understand the bias-variance tradeoff for a machine learning model?


This is messy, but I will try.
Bias = Expected Value − True Value
As the sample size increases, we hope that our estimate's expected value gets closer and closer to the true value. If our modeling technique guarantees that behavior, we say that our estimates are unbiased. This is a very desirable property in most cases, so linear model-building algorithms are designed to produce unbiased estimates.
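A quick simulation makes this concrete. The sketch below (my own illustration, not from the thread) estimates a known true mean from repeated samples: the average of the sample-mean estimates sits on the true value regardless of sample size, which is exactly what "unbiased" means.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 5.0

# Average the sample-mean estimate over many repeated samples.
# For every sample size, the average of the estimates is close to
# the true value: the sample mean is an unbiased estimator.
for n in (10, 100, 1000):
    estimates = [rng.normal(true_value, 2.0, size=n).mean() for _ in range(2000)]
    print(f"n={n:4d}  average estimate: {np.mean(estimates):.3f}")
```

Note that what shrinks with sample size is the *spread* of the estimates (their variance), not the bias, which is already zero.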

However, the total expected error decomposes: Expected squared error = Bias² + Variance (plus irreducible noise)

So it is possible in some cases to reduce total error further by using biased models; estimates from most nonlinear models show this behavior. Many machine learning methods take advantage of it by asking: can I reduce the total error further by introducing a small bias into my model?
Your mileage may vary.
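One well-known instance of this tradeoff (my example, not the poster's) is ridge regression: it deliberately biases the coefficient estimates toward zero, and on small samples that bias buys a large reduction in variance, lowering the total test error below unbiased OLS. A minimal sketch, assuming a synthetic linear problem:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)

# True linear relationship with noise; tiny training samples make
# the unbiased OLS fit high-variance, which a little ridge bias tames.
def make_data(n):
    X = rng.normal(size=(n, 20))
    y = X @ (np.arange(20) * 0.1) + rng.normal(scale=5.0, size=n)
    return X, y

X_test, y_test = make_data(2000)

ols_err, ridge_err = [], []
for _ in range(200):
    X, y = make_data(30)  # only 30 rows for 20 features: unstable OLS
    ols_err.append(mean_squared_error(
        y_test, LinearRegression().fit(X, y).predict(X_test)))
    ridge_err.append(mean_squared_error(
        y_test, Ridge(alpha=10.0).fit(X, y).predict(X_test)))

print(f"OLS   mean test MSE: {np.mean(ols_err):.2f}")
print(f"Ridge mean test MSE: {np.mean(ridge_err):.2f}")
```

The biased (ridge) model wins on average here precisely because the variance it removes outweighs the bias it adds; with plenty of data the gap closes and OLS catches up.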

Mukul Mehta
Cleveland, OH
Helping New product development become highly effective


Hey @jindalISB,

The bias-variance tradeoff is very nicely explained in this article. Read point 9.

Hope this helps!
Sanad :slight_smile:


Bias vs. variance. I will attempt to explain with an example. Forgive me if it adds to the confusion or is even incorrect; I am just trying to help.

Assume this training set, built on the independent variables, with actual dependent-variable values: 1, 2, 3, 9, 55, 101

Training set predictions:
Model 1 (high bias) - predicted values: 1, 1.2, 3.14, 7, 45, 66
Model 2 (high variance) - predicted values: 1, 2, 3, 8.6, 54.5, 100

Test set (actual values: 1, 3, 3, 15, 44, 69, 121)

Model 1 (high bias) - predicted values: 0.9, 1.3, 3.1, 13, 59, 110
Model 2 (high variance) - predicted values: 1, 1.7, 2, 19, 33, 50, 97

Model 1 carried approximately the same error between actual and predicted values in both the training and test sets. In other words, the model generalizes: it has low variance but higher bias.

Model 2, though low on bias (its training-set predictions were very close to the actuals), introduced very high variance into the predicted scores. The overfitting on the training set due to high variance meant that prediction accuracy on the test set took a huge hit.
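Using the training-set numbers above (the test-set lists have mismatched lengths, so I'll stick to training error for this check), a quick mean-absolute-error calculation shows how much more tightly Model 2 hugs the training data:

```python
import numpy as np

actual_train = np.array([1, 2, 3, 9, 55, 101])
model1_train = np.array([1, 1.2, 3.14, 7, 45, 66])   # high bias
model2_train = np.array([1, 2, 3, 8.6, 54.5, 100])   # high variance

mae = lambda a, p: np.abs(a - p).mean()
# Model 1 misses the training data by a wide margin even though
# it was fit on it; Model 2's training error is near zero.
print("Model 1 training MAE:", mae(actual_train, model1_train))
print("Model 2 training MAE:", mae(actual_train, model2_train))
```

A near-zero training error, as for Model 2, is exactly the warning sign of overfitting that the test set then exposes.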

Impact of bias: model prediction accuracy stays roughly the same from training to test, but it may be low to start with.
Impact of variance: model prediction accuracy takes a huge hit, because the training-set accuracy is misleading and the test data exposes it.
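The classic way to see both effects at once (again my own sketch, not from the thread) is to fit polynomials of increasing degree to noisy data: a low degree underfits (high bias, similar train and test error, both poor), while a very high degree overfits (tiny training error, test error that blows up).

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy samples of a smooth curve; fit polynomials of growing degree.
x_train = np.linspace(0, 1, 15)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.3, size=15)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)  # noise-free truth for test error

for degree in (1, 3, 12):
    coefs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```

Degree 1 plays the role of Model 1 (high bias), degree 12 plays Model 2 (high variance), and an intermediate degree is the tradeoff point.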

The aim is to reduce both variance and bias, so that we arrive at a compromise.

Let's assume Model 3 is the tradeoff model.

The predicted values can look something like this: [image from the original post not reproduced here]


Here the overall prediction is very close to the actual values, and that is why you want to consider the bias-variance tradeoff.

There is also another school of thought: can I deliberately introduce bias, by reclassifying certain categories where I have better functional knowledge, and thereby achieve a lower-variance model for future prediction, knowing full well that the bias exists?

I tried to use a simple example. People can build on it or make any corrections. Feel free to do so.


@vivekps @mmm257 @mohdsanadzakirizvi

Thanks, guys, for the help. I was able to understand this trade-off.