    Understanding non_trainable_weights in Keras: A Complete Guide with Examples | by Karthik Karunakaran, Ph.D. | Mar, 2025

By Team_AIBS News | March 23, 2025 | 2 Mins Read


Deep learning models often require precise control over which parameters are updated during training. In Keras, the non_trainable_weights property helps manage such parameters efficiently. Whether you are fine-tuning a pre-trained model or implementing custom layers, understanding how to use non_trainable_weights correctly can improve performance and flexibility.

Keras models and layers have two types of weight attributes:

• Trainable Weights: These are updated during backpropagation.
• Non-Trainable Weights: These remain constant during training, useful for storing fixed parameters such as the running statistics in batch normalization.

The non_trainable_weights property provides access to these parameters, ensuring they are used without being modified by gradient updates.
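For instance, a BatchNormalization layer keeps its learned scale and offset as trainable weights, while its running statistics sit in non_trainable_weights. A minimal sketch (the feature size of 8 is arbitrary):

from tensorflow import keras

# Build a BatchNormalization layer for 8 features.
bn = keras.layers.BatchNormalization()
bn.build((None, 8))

# gamma and beta are trainable; moving_mean and moving_variance are not.
print([w.name for w in bn.trainable_weights])
print([w.name for w in bn.non_trainable_weights])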

Common use cases include:

• Pre-trained Model Fine-Tuning: Freezing layers to retain learned features (a short sketch follows this list).
• Custom Layers: Defining stateful layers with fixed parameters.
• Efficiency: Reducing the number of trainable parameters speeds up training.
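As a concrete illustration of the fine-tuning case, here is a minimal sketch that freezes a pre-trained base and trains only a new head; the input shape and 10-class head are assumptions for illustration, not from the original article:

from tensorflow import keras

# Hypothetical fine-tuning setup: frozen pre-trained base, small trainable head.
base = keras.applications.MobileNetV2(weights='imagenet', include_top=False, pooling='avg')
base.trainable = False  # all base weights move to non_trainable_weights

inputs = keras.Input(shape=(224, 224, 3))
x = base(inputs, training=False)  # keep batch-norm statistics fixed while fine-tuning
outputs = keras.layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs, outputs)

print("Trainable weights:", len(model.trainable_weights))        # only the Dense head
print("Non-trainable weights:", len(model.non_trainable_weights))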

To access non-trainable weights in a model, use:

from tensorflow import keras

# Load a pre-trained model
base_model = keras.applications.MobileNetV2(weights='imagenet', include_top=False)

# Freeze all layers so their weights become non-trainable
for layer in base_model.layers:
    layer.trainable = False

print("Non-trainable weights:", base_model.non_trainable_weights)

Here, all layers are frozen, making their weights part of non_trainable_weights.
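You can verify this directly on the same base_model (a quick follow-up check):

# With every layer frozen, the model reports no trainable weights.
print(len(base_model.trainable_weights))      # 0
print(len(base_model.non_trainable_weights))  # all of MobileNetV2's weights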

You can define a custom layer with fixed parameters:

import tensorflow as tf
from tensorflow import keras

class CustomLayer(keras.layers.Layer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # A constant weight that gradient descent never updates
        self.fixed_weight = self.add_weight(shape=(1,), initializer="ones", trainable=False)

    def call(self, inputs):
        return inputs * self.fixed_weight

layer = CustomLayer()
print("Non-trainable weights:", layer.non_trainable_weights)

Here, fixed_weight remains unchanged during training.
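To see the effect, you can call the layer on a small tensor (a quick usage check):

import tensorflow as tf

x = tf.constant([[1.0, 2.0, 3.0]])
print(layer(x).numpy())  # [[1. 2. 3.]] because fixed_weight is initialized to ones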

    Although they don’t seem to be up to date mechanically, non-trainable weights will be manually modified:

layer.fixed_weight.assign([2.0])
print("Updated non-trainable weight:", layer.fixed_weight.numpy())

Best practices:
1. Use for Frozen Layers: When fine-tuning pre-trained models, set trainable = False on the layers you want to freeze.
2. Manually Update When Needed: If updates are required, assign values explicitly.
3. Monitor Parameter Count: Use model.summary() to check trainable vs. non-trainable parameters (see the sketch after this list).
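For the last point, model.summary() reports trainable and non-trainable parameters separately. A minimal sketch (layer sizes are arbitrary):

from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(4),
    keras.layers.BatchNormalization(),
])
model.summary()  # prints "Trainable params" and "Non-trainable params" counts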

Understanding and leveraging non_trainable_weights in Keras is essential for optimizing deep learning workflows. Whether you are customizing layers or fine-tuning models, managing trainable and non-trainable weights can significantly improve model efficiency.

Interested in mastering AI and deep learning? Check out my Udemy courses: Karthik K on Udemy

Share your thoughts in the comments! Have you used non_trainable_weights before? How did it impact your model training?



    Source link
