Ever since I began migrating to data science, I have heard about the well-known Bias versus Variance tradeoff.
But I learned just enough about it to move on with my studies and never looked back too much. I always knew that a highly biased model underfits the data, while a high-variance model overfits it, and that neither is good when training an ML model.
I also knew that we should look for a balance between the two states, so we end up with a good fit: a model that generalizes the pattern well to new data.
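To make those two extremes concrete, here is a minimal sketch of my own (not code from this post), assuming scikit-learn is available and using arbitrarily chosen polynomial degrees: a degree-1 polynomial underfits a noisy sine curve (high bias), a degree-15 polynomial chases the noise (high variance), and the gap between training and test error exposes the difference.

```python
# Illustrative sketch only: compare an underfit, an overfit, and a balanced fit
# on synthetic data. Degrees 1, 15, and 4 are arbitrary choices for illustration.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(42)
X = np.sort(rng.uniform(0, 1, 30)).reshape(-1, 1)          # 30 noisy training points
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 30)

X_test = np.linspace(0, 1, 100).reshape(-1, 1)              # held-out grid
y_test = np.sin(2 * np.pi * X_test).ravel()                 # noise-free target

for degree, label in [(1, "high bias (underfit)"),
                      (15, "high variance (overfit)"),
                      (4, "balanced fit")]:
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X, y)
    train_err = mean_squared_error(y, model.predict(X))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    print(f"degree={degree:>2} ({label}): train MSE={train_err:.3f}, test MSE={test_err:.3f}")
```

In a run like this, the degree-1 model tends to show high error on both sets, the degree-15 model shows near-zero training error but much worse test error, and the middle degree sits in between, which is the balance the tradeoff is about.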
But I would say I never went further than that. I never searched for or built highly biased or highly variant models just to see what they actually do to the data and what their predictions look like.
That is, until today, of course, because that is exactly what we are doing in this post. Let's continue with some definitions.