
    What PyTorch Really Means by a Leaf Tensor and Its Grad



    This isn't yet another explanation of the chain rule. It's a tour through the weird side of autograd, where gradients serve physics, not just weights.

    I originally wrote this tutorial for myself during the first year of my PhD, while navigating the intricacies of gradient calculations in PyTorch. Most of it is clearly designed with standard backpropagation in mind, and that's fine, since that's what most people need.

    But a Physics-Informed Neural Network (PINN) is a moody beast that needs a different kind of gradient logic. I spent some time feeding it, and I figured it might be worth sharing the findings with the community, especially with fellow PINN practitioners; maybe it'll save somebody a few headaches. But if you have never heard of PINNs, don't worry! This post is still for you, especially if you're into things like gradients of gradients and all that fun stuff.

    Basic terms

    Tensor in the computer world simply means a multidimensional array, i.e. a bunch of numbers indexed by one or more integers. To be precise, there also exist zero-dimensional tensors, which are just single numbers. Some people say that tensors are a generalization of matrices to more than two dimensions.

    If you have studied general relativity before, you may have heard that mathematical tensors have things like covariant and contravariant indices. But forget about that: in PyTorch, tensors are just multidimensional arrays. No finesse here.

    Leaf tensor is a tensor that is a leaf (in the graph-theoretic sense) of a computation graph. We'll look at these below, so this definition will make a bit more sense.

    The requires_grad property of a tensor tells PyTorch whether it should remember how this tensor is used in further computations. For now, think of tensors with requires_grad=True as variables, and tensors with requires_grad=False as constants.
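
    For example (a small sketch of my own, not from the original snippets), the flag defaults to False and can be flipped in place on a leaf tensor with requires_grad_():

    import torch
    
    x = torch.tensor([2.])  # requires_grad defaults to False: a "constant"
    assert x.requires_grad is False
    
    x.requires_grad_()      # flip the flag in place (allowed only on leaf tensors)
    assert x.requires_grad is True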

    Leaf tensors

    Let's start by creating a few tensors and checking their properties requires_grad and is_leaf.

    import torch
    
    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    
    c = torch.tensor([5.])
    d = c * c
    
    assert a.requires_grad is True and a.is_leaf is True
    assert b.requires_grad is True and b.is_leaf is False
    assert c.requires_grad is False and c.is_leaf is True
    assert d.requires_grad is False and d.is_leaf is True  # sic!
    del a, b, c, d

    a is a leaf as expected, and b is not, because it is the result of a multiplication. a is set to require grad, so naturally b inherits this property.

    c is obviously a leaf, but why is d a leaf? The reason d.is_leaf is True stems from a specific convention: all tensors with requires_grad set to False are considered leaf tensors, as per PyTorch's documentation:

    All Tensors that have requires_grad which is False will be leaf Tensors by convention.

    While mathematically d is not a leaf (since it results from another operation, c * c), gradient computation will never extend beyond it. In other words, there won't be any derivative with respect to c. This allows d to be treated as a leaf.

    In a nutshell, in PyTorch, leaf tensors are either:

    • Directly inputted (i.e. not calculated from other tensors) and have requires_grad=True. Example: neural network weights that are randomly initialized.
    • Not requiring gradients at all, regardless of whether they are directly inputted or computed. In the eyes of autograd, these are just constants (see the sketch after this list). Examples:
      • any neural network input data,
      • an input image after mean removal or other operations that involve only non-gradient-requiring tensors.
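
    To make both categories concrete, here is a minimal sketch (my own illustration; the layer shape and tensor names are arbitrary):

    import torch.nn as nn
    
    layer = nn.Linear(3, 1)  # randomly initialized weights
    assert layer.weight.is_leaf is True and layer.weight.requires_grad is True
    
    x = torch.randn(8, 3)        # input data: a constant in the eyes of autograd
    x_centered = x - x.mean()    # computed, but only from non-grad-requiring tensors
    assert x.is_leaf is True and x.requires_grad is False
    assert x_centered.is_leaf is True and x_centered.requires_grad is False  # leaf by convention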

    A small remark for those who want to know more. The requires_grad property is inherited, as illustrated here:

    a = torch.tensor([5.], requires_grad=True)
    b = torch.tensor([5.], requires_grad=True)
    c = torch.tensor([5.], requires_grad=False)
    
    d = torch.sin(a * b * c)
    
    assert d.requires_grad == any((x.requires_grad for x in (a, b, c)))

    Code remark: all code snippets should be self-contained, apart from imports that I include only when they appear for the first time. I drop them in order to minimize boilerplate code. I trust that the reader will be able to manage these easily.

    Grad retention

    A separate issue is gradient retention. All nodes in the computation graph, meaning all tensors used, have gradients computed if they require grad. However, only leaf tensors retain these gradients. This makes sense, because gradients are typically used to update tensors, and only leaf tensors are subject to updates during training. Non-leaf tensors, like b in the first example, are not directly updated; they change as a result of changes in a, so their gradients can be discarded. However, there are scenarios, especially in Physics-Informed Neural Networks (PINNs), where you may want to retain the gradients of these intermediate tensors. In such cases, you need to explicitly mark non-leaf tensors to retain their gradients. Let's see:

    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    b.backward()
    
    assert a.grad is not None
    assert b.grad is None  # generates a warning

    You have probably just seen a warning:

    UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being
    accessed. Its .grad attribute won't be populated during autograd.backward().
    If you indeed want the .grad field to be populated for a non-leaf Tensor, use
    .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by
    mistake, make sure you access the leaf Tensor instead.
    See github.com/pytorch/pytorch/pull/30531 for more informations.
    (Triggered internally at aten/src/ATen/core/TensorBody.h:491.)

    So let's fix it by forcing b to retain its gradient:

    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    b.retain_grad()  # <- the distinction
    b.backward()
    
    assert a.grad is not None
    assert b.grad is not None

    Mysteries of grad

    Now let's take a look at the famous grad itself. What is it? Is it a tensor? If so, is it a leaf tensor? Does it require or retain grad?

    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    b.retain_grad()
    b.backward()
    
    assert isinstance(a.grad, torch.Tensor)
    assert a.grad.requires_grad is False and a.grad.retains_grad is False and a.grad.is_leaf is True
    assert b.grad.requires_grad is False and b.grad.retains_grad is False and b.grad.is_leaf is True

    Interestingly:

    – grad itself is a tensor,
    – grad is a leaf tensor,
    – grad doesn’t require grad.

    Does it retain grad? This question doesn't make sense, because it doesn't require grad in the first place. We'll come back to the question of the grad being a leaf tensor in a moment, but for now we'll test a few things.
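
    A quick way to convince yourself that the grad really lives outside the computation graph (a small check of my own): it carries no grad_fn, meaning autograd kept no record of how it was computed.

    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    b.backward()
    
    assert a.grad.grad_fn is None  # no recorded history: just a plain leaf tensor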

    Multiple backwards and retain_graph

    What will happen when we calculate the same grad twice?

    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    b.retain_grad()
    b.backward()
    try:
        b.backward()
    except RuntimeError:
        """
        RuntimeError: Trying to backward through the graph a second time (or
        directly access saved tensors after they have already been freed). Saved
        intermediate values of the graph are freed when you call .backward() or
        autograd.grad(). Specify retain_graph=True if you need to backward through
        the graph a second time or if you need to access saved tensors after
        calling backward.
        """

    The error message explains it all. This will work:

    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    b.retain_grad()
    
    b.backward(retain_graph=True)
    print(a.grad)  # prints tensor([6.])
    
    b.backward(retain_graph=True)
    print(a.grad)  # prints tensor([12.])
    
    b.backward(retain_graph=False)
    print(a.grad)  # prints tensor([18.])
    
    # b.backward(retain_graph=False)  # <- here we would get an error, because in
    # the previous call we did not retain the graph.

    Side (but important) note: you can also observe how the gradient accumulates in a: with every iteration it is added.
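
    If this accumulation is unwanted, reset the gradient between calls. A minimal sketch (my addition, mirroring the example above):

    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    
    b.backward(retain_graph=True)
    print(a.grad)  # prints tensor([6.])
    
    a.grad = None  # reset instead of accumulating (optimizers do this via zero_grad())
    b.backward()
    print(a.grad)  # prints tensor([6.]) again, not tensor([12.])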

    Powerful create_graph argument

    How to make grad require grad?

    a = torch.tensor([5.], requires_grad=True)
    b = a * a
    b.retain_grad()
    b.backward(create_graph=True)
    
    # Here an interesting thing happens: now a.grad will require grad!
    assert a.grad.requires_grad is True
    assert a.grad.is_leaf is False
    
    # On the other hand, the grad of b does not require grad, as before.
    assert b.grad.requires_grad is False
    assert b.grad.is_leaf is True

    The above is very useful: a.grad, which mathematically is \( \frac{\partial b}{\partial a} \), is not a constant (leaf) anymore, but a regular member of the computation graph that can be used further. We'll use that fact in Part 2.

    Why does b.grad not require grad? Because the derivative of b with respect to b is simply 1.
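
    You can verify this directly (my own check): the retained b.grad is exactly one, a constant with no further history to track.

    a = torch.tensor([5.], requires_grad=True)
    b = a * a
    b.retain_grad()
    b.backward()
    
    assert b.grad.item() == 1.0  # db/db is identically 1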

    If the backward feels counterintuitive to you now, don't worry. We'll soon switch to another method called, nomen omen, grad, which allows us to precisely pick parts of the derivatives. Before that, two side notes:

    Side note 1: If you set create_graph to True, it also sets retain_graph to True (if not explicitly set). In the PyTorch code it looks exactly like
    this:

        if retain_graph is None:
            retain_graph = create_graph

    Side note 2: You probably saw a warning like this:

        UserWarning: Using backward() with create_graph=True will create a reference
        cycle between the parameter and its gradient which can cause a memory leak.
        We recommend using autograd.grad when creating the graph to avoid this. If
        you have to use this function, make sure to reset the .grad fields of your
        parameters to None after use to break the cycle and avoid the leak.
        (Triggered internally at C:\cb\pytorch_1000000000000\work\torch\csrc\autograd\engine.cpp:1156.)
          Variable._execution_engine.run_backward(  # Calls into the C++ engine to
        run the backward pass

    And we'll follow that advice and use autograd.grad now.

    Taking derivatives with the autograd.grad function

    Now let's move from the somewhat high-level .backward() method to the lower-level grad function, which explicitly calculates the derivative of one tensor with respect to another.

    from torch.autograd import grad
    
    a = torch.tensor([3.], requires_grad=True)
    b = a * a * a
    db_da = grad(b, a, create_graph=True)[0]
    assert db_da.requires_grad is True

    Similarly to backward, the derivative of b with respect to a can be treated as a function and differentiated further. In other words, the create_graph flag can be understood as: when calculating gradients, keep the history of how they were calculated, so that we can treat them as non-leaf tensors that require grad, and use them further.

    In particular, we can calculate the second-order derivative:

    d2b_da2 = grad(db_da, a, create_graph=True)[0]
    # Side note: the grad function returns a tuple, and its first element is what we need.
    assert d2b_da2.item() == 18
    assert d2b_da2.requires_grad is True

    As said before: this is actually the key property that allows us to do PINNs with PyTorch.
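
    To close the loop with PINNs, here is a hedged sketch of my own (the tiny network and the toy equation u'' + u = 0 are placeholders, not anything from the original post; nn and grad were imported above): we differentiate the network output with respect to its input to build a physics residual, then backpropagate the residual loss into the weights.

    model = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
    
    x = torch.linspace(0., 1., 50, requires_grad=True).reshape(-1, 1)
    u = model(x)
    
    # du/dx; grad_outputs is needed because u is not a scalar (more on that in Part 2)
    du_dx = grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    # d2u/dx2 stays differentiable only because create_graph=True above
    d2u_dx2 = grad(du_dx, x, grad_outputs=torch.ones_like(du_dx), create_graph=True)[0]
    
    residual = d2u_dx2 + u           # enforcing the toy ODE u'' + u = 0
    loss = (residual ** 2).mean()
    loss.backward()                  # physics gradients now flow into the weights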

    Wrapping up

    Most tutorials about PyTorch gradients focus on backpropagation in classical supervised learning. This one explored a different perspective: one shaped by the needs of PINNs and other gradient-hungry beasts.

    We learnt what leaves are in the PyTorch jungle, why gradients are retained by default only for leaf nodes, and how to retain them when needed for other tensors. We saw how create_graph turns gradients into differentiable citizens of the autograd world.

    But there are still many things to uncover: in particular, why gradients of non-scalar functions require extra care, how to compute second-order derivatives without using up your entire RAM, and why slicing your input tensor is a bad idea when you need an elementwise gradient.

    So let's meet in Part 2, where we'll take a closer look at grad 👋


