Deep learning models often require precise control over which parameters are updated during training. In Keras, the `non_trainable_weights` property helps manage such parameters efficiently. Whether you are fine-tuning a pre-trained model or implementing custom layers, understanding how to use `non_trainable_weights` correctly can improve both performance and flexibility.
Keras models and layers have two kinds of weight attributes:
- Trainable Weights: These are updated during backpropagation.
- Non-Trainable Weights: These remain constant during training, which is useful for storing fixed parameters such as the moving statistics in batch normalization (see the sketch below).
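As a quick illustration, `BatchNormalization` exposes both kinds: its scale and offset are trained, while its moving statistics are not. This is a minimal sketch; the exact variable names printed may differ across Keras versions:

```python
from tensorflow import keras

# BatchNormalization trains gamma/beta but updates its moving
# mean/variance outside of backpropagation.
bn = keras.layers.BatchNormalization()
bn.build((None, 4))  # create the layer's weights

print([w.name for w in bn.trainable_weights])      # gamma, beta
print([w.name for w in bn.non_trainable_weights])  # moving_mean, moving_variance
```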
The `non_trainable_weights` property provides access to these parameters, ensuring they are used without being modified by gradient updates. Typical use cases include:
- Pre-trained Model Fine-Tuning: Freezing layers to retain learned features.
- Custom Layers: Defining stateful layers with fixed parameters.
- Efficiency: Reducing the number of trainable parameters speeds up training.
To access non-trainable weights in a model, use:
```python
from tensorflow import keras

# Load a pre-trained model
base_model = keras.applications.MobileNetV2(weights='imagenet', include_top=False)

# Freeze all layers
for layer in base_model.layers:
    layer.trainable = False

print("Non-trainable weights:", base_model.non_trainable_weights)
```
Here, all layers are frozen, making their weights part of `non_trainable_weights`.
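As a sanity check, you can count parameters on the frozen `base_model` from above (a small sketch; it assumes TensorFlow's `TensorShape.num_elements()` for counting). With every layer frozen, the trainable count should be zero:

```python
num_trainable = sum(w.shape.num_elements() for w in base_model.trainable_weights)
num_frozen = sum(w.shape.num_elements() for w in base_model.non_trainable_weights)

# With every layer frozen, all parameters land in the non-trainable bucket
print(f"Trainable params: {num_trainable}, non-trainable params: {num_frozen}")
```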
You can also define a custom layer with fixed parameters:
```python
import tensorflow as tf
from tensorflow import keras

class CustomLayer(keras.layers.Layer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # A weight the optimizer will never update
        self.fixed_weight = self.add_weight(
            shape=(1,), initializer="ones", trainable=False
        )

    def call(self, inputs):
        return inputs * self.fixed_weight

layer = CustomLayer()
print("Non-trainable weights:", layer.non_trainable_weights)
```
Here, `fixed_weight` remains unchanged throughout training.
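One way to see why (a minimal sketch reusing the `layer` instance from above): because the weight is non-trainable, `trainable_weights` is empty, so an optimizer would have nothing to update:

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0]])
with tf.GradientTape() as tape:
    y = tf.reduce_sum(layer(x))

# No trainable weights means no gradients for an optimizer to apply
print("Trainable weights:", layer.trainable_weights)  # []
print("Output:", y.numpy())                           # 3.0 (inputs scaled by 1.0)
```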
Although they are not updated automatically, non-trainable weights can be modified manually:
```python
layer.fixed_weight.assign([2.0])
print("Updated non-trainable weight:", layer.fixed_weight.numpy())
```
- Use for Frozen Layers: When fine-tuning pre-trained models, set `trainable = False` on the layers you want to keep fixed.
- Manually Update When Needed: If updates are required, assign values explicitly with `assign()`.
- Track Parameter Count: Use `model.summary()` to check trainable vs. non-trainable parameters, as sketched below.
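For example, stacking a small classification head on the frozen `base_model` from earlier (a sketch; the `Dense(10)` head is an arbitrary choice for illustration):

```python
model = keras.Sequential([
    base_model,                             # frozen backbone
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10),
])

# summary() reports "Trainable params" and "Non-trainable params" separately
model.summary()
```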
Understanding and leveraging `non_trainable_weights` in Keras is essential for optimizing deep learning workflows. Whether you are customizing layers or fine-tuning models, managing trainable and non-trainable weights can significantly improve model efficiency.
Interested in mastering AI and deep learning? Check out my Udemy courses: Karthik K on Udemy
Share your thoughts in the comments! Have you used `non_trainable_weights` before? How did it affect your model training?