When I first began experimenting with neural networks, I remember the frustration vividly.
Sometimes my models would train beautifully, and other times they’d grind to a halt, taking forever to learn or completely blowing up, like baking a cake at the wrong temperature.
Too slow, and nothing happens. Too fast, and it’s a burnt mess. I spent countless hours tinkering with settings, wondering why the same configurations worked one day but failed the next.
If you’ve ever felt this way, you’re not alone. Training neural networks can feel unpredictable, especially for beginners. The culprit often lies in finding the right “speed” at which your model learns.
That speed is governed by something called an optimizer, a tool that adjusts the learning process.
While popular optimizers like Adam have been game-changers, they also come with baggage: they’re complex, memory-hungry, and not always easy to tune.
But what if there were a simpler way? Enter SGD-SaI (Stochastic Gradient Descent with Scaling at Initialization).