    Transformers (and Attention) are Just Fancy Addition Machines

By Team_AIBS News | July 24, 2025


Mechanistic interpretability is a comparatively new sub-field in AI, focused on understanding how neural networks operate by reverse-engineering their internal mechanisms and representations, aiming to translate them into human-understandable algorithms and concepts. This is in contrast to, and goes further than, traditional explainability methods like SHAP and LIME.

SHAP stands for SHapley Additive exPlanations. It computes the contribution of each feature to the model's prediction, both locally and globally, that is, for a single example as well as across the whole dataset. This allows SHAP to be used to determine overall feature importance for the use case. LIME, meanwhile, works on a single example-prediction pair: it perturbs the example's input and uses the perturbations and their outputs to fit a simpler surrogate of the black-box model. Both of these therefore work at the feature level and give us an explanation and a heuristic for how each input to the model affects its prediction or output.
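For concreteness, here is a minimal, hypothetical sketch of how SHAP and LIME are typically used on a tabular model; the dataset and model below are placeholders I chose for illustration, not anything from this article:

```python
# Hypothetical example: feature-level explanations with SHAP and LIME
# (the sklearn model and dataset are placeholders for illustration)
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP: per-feature contributions, locally (one row) and aggregated globally
explainer = shap.Explainer(model.predict, X)   # model-agnostic explainer
shap_values = explainer(X.iloc[:50])           # local attributions for 50 examples
shap.plots.bar(shap_values)                    # global view: mean |SHAP| per feature

# LIME: perturb one example and fit a simple local surrogate of the black box
lime_explainer = LimeTabularExplainer(
    X.values, feature_names=list(X.columns), mode="classification"
)
explanation = lime_explainer.explain_instance(
    X.values[0], model.predict_proba, num_features=5
)
print(explanation.as_list())                   # top features for this one prediction
```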

Mechanistic interpretation, on the other hand, works at a more granular level: it can provide a pathway of how a given feature is learnt by different neurons in different layers of the neural network, and how that learning evolves across the layers. This makes it well suited to tracing paths inside the network for a specific feature and also seeing how that feature affects the outcome.

SHAP and LIME, then, answer the question "which feature contributes the most to the outcome?", whereas mechanistic interpretation answers the question "which neurons activate for which feature, and how does that feature evolve and affect the output of the network?"

Since explainability is generally a problem with deeper networks, this sub-field mostly works with deep models like transformers. There are a few places where mechanistic interpretability looks at transformers differently from the conventional view, one of which is multi-head attention. As we'll see, the difference lies in reframing the multiplication and concatenation operations defined in the "Attention Is All You Need" paper as addition operations, which opens up a whole range of new possibilities.

But first, a recap of the Transformer architecture.

Transformer Architecture

Image by Author: Transformer Architecture

    These are the sizes we work with:

• batch_size B = 1
• sequence length S = 20
• vocab_size V = 50,000
• hidden_dims D = 512
• heads H = 8

This means that the number of dimensions in the Q, K, V vectors is 512/8 (L) = 64. (In case you don't remember, an analogy for understanding query, key and value: the idea is that for a token at a given position (K), based on its context (Q), we want to get an alignment (reweighing) to the positions it is related to (V).)
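As a quick sanity check, the per-head size falls straight out of these numbers (the variable names here are mine, just for this walkthrough):

```python
# Sizes used throughout this walkthrough
B = 1        # batch size
S = 20       # sequence length
V = 50_000   # vocabulary size
D = 512      # hidden dimension
H = 8        # number of attention heads

L = D // H   # per-head dimension of the Q, K, V vectors
print(L)     # 64
```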

These are the steps up to the attention computation in a transformer. (The tensor shapes are assumed as an example for better understanding. Numbers in italics represent the dimension along which the matrices are multiplied.)

| Step | Operation | Input 1 Dims (Shape) | Input 2 Dims (Shape) | Output Dims (Shape) |
|---|---|---|---|---|
| 1 | N/A (one-hot input) | B x S x V (1 x 20 x 50,000) | N/A | B x S x V (1 x 20 x 50,000) |
| 2 | Get embeddings | B x S x V (1 x 20 x 50,000) | V x D (50,000 x 512) | B x S x D (1 x 20 x 512) |
| 3 | Add positional embeddings | B x S x D (1 x 20 x 512) | N/A | B x S x D (1 x 20 x 512) |
| 4 | Copy embeddings to Q, K, V | B x S x D (1 x 20 x 512) | N/A | B x S x D (1 x 20 x 512) |
| 5 | Linear transform for each head, H=8 | B x S x D (1 x 20 x 512) | D x L (512 x 64) | B x H x S x L (1 x 1 x 20 x 64) |
| 6 | Scaled dot product (Q@K') in each head | B x H x S x L (1 x 1 x 20 x 64) | L x S x H x B (64 x 20 x 1 x 1) | B x H x S x S (1 x 1 x 20 x 20) |
| 7 | Scaled dot product (attention calculation) QK'@V in each head | B x H x S x S (1 x 1 x 20 x 20) | B x H x S x L (1 x 1 x 20 x 64) | B x H x S x L (1 x 1 x 20 x 64) |
| 8 | Concat across all heads, H=8 | B x H x S x L (1 x 1 x 20 x 64) | N/A | B x S x D (1 x 20 x 512) |
| 9 | Linear projection | B x S x D (1 x 20 x 512) | D x D (512 x 512) | B x S x D (1 x 20 x 512) |

Tabular view of shape transformations up to the attention computation in the Transformer

The table explained in detail:

1. We start with one input sentence of sequence length 20 that is one-hot encoded to represent which words of the vocabulary are present in the sequence. Shape (B x S x V): (1 x 20 x 50,000)
2. We multiply this input with the learnable embedding matrix Wₑ of shape (V x D) to get the embeddings. Shape (B x S x D): (1 x 20 x 512)
3. Next, a learnable positional encoding matrix of the same shape is added to the embeddings
4. The resulting embeddings are then copied to the matrices Q, K and V. Q, K and V are each split and reshaped along the D dimension. Shape (B x S x D): (1 x 20 x 512)
5. The matrices Q, K and V are each fed to a linear transformation layer that multiplies them with learnable weight matrices of shape (D x L), namely Wq, Wₖ and Wᵥ respectively (one copy for each of the H=8 heads). Shape (B x H x S x L): (1 x 1 x 20 x 64), where H=1 because this is the resulting shape for each head.
6. Next, we compute attention with scaled dot-product attention, where Q and K (transposed) are multiplied first in each head. Shape (B x H x S x L) x (L x S x H x B) → (B x H x S x S): (1 x 1 x 20 x 20).
7. There is a scaling and masking step next that I have skipped, as it is not essential to understanding the different way of looking at MHA. So, next we multiply QK' with V for each head. Shape (B x H x S x S) x (B x H x S x L) → (B x H x S x L): (1 x 1 x 20 x 64)
8. Concat: here, we concatenate the attention results from all the heads along the L dimension to get back a shape of (B x S x D) → (1 x 20 x 512)
9. This output is once more linearly projected using yet another learnable weight matrix Wₒ of shape (D x D). The final shape we end with is (B x S x D): (1 x 20 x 512)
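To make the shape bookkeeping above concrete, here is a minimal PyTorch sketch of steps 1 through 9 with random weights (mine, not the author's code; no masking or dropout, and I use a single (D x D) projection per Q/K/V that is reshaped into heads, which is equivalent to one (D x L) matrix per head):

```python
import torch

B, S, V, D, H = 1, 20, 50_000, 512, 8
L = D // H                                      # 64 dims per head

tokens = torch.randint(0, V, (B, S))            # step 1: token ids (one-hot is implicit)
W_e = torch.randn(V, D)                         # learnable embedding matrix (random here)
x = W_e[tokens]                                 # step 2: (B, S, D)
x = x + torch.randn(S, D)                       # step 3: add (stand-in) positional encodings

# steps 4-5: project the copies of x and reshape into heads -> (B, H, S, L)
W_q, W_k, W_v = (torch.randn(D, D) for _ in range(3))
Q = (x @ W_q).view(B, S, H, L).transpose(1, 2)
K = (x @ W_k).view(B, S, H, L).transpose(1, 2)
Vh = (x @ W_v).view(B, S, H, L).transpose(1, 2)

# step 6: scaled dot product per head -> (B, H, S, S)
attn = ((Q @ K.transpose(-2, -1)) / L**0.5).softmax(dim=-1)

# step 7: weight the values -> (B, H, S, L)
per_head = attn @ Vh

# step 8: concatenate heads -> (B, S, D)
concat = per_head.transpose(1, 2).reshape(B, S, D)

# step 9: output projection with W_O -> (B, S, D)
W_o = torch.randn(D, D)
y = concat @ W_o
print(y.shape)                                  # torch.Size([1, 20, 512])
```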

Reimagining Multi-Head Attention

Image by Author: Reimagining multi-head attention

Now, let's see how the field of mechanistic interpretation looks at this, and we will also see why it is mathematically equivalent. On the right in the image above, you see the module that reimagines multi-head attention.

Instead of concatenating the attention outputs, we continue with the multiplication "inside" each head, where the shape of Wₒ is now (L x D), and multiply it with QK'V of shape (B x H x S x L) to get a result of shape (B x S x H x D): (1 x 20 x 1 x 512). Then we sum over the H dimension to again end up with shape (B x S x D): (1 x 20 x 512).

From the table above, the last two steps are what change:

| Step | Operation | Input 1 Dims (Shape) | Input 2 Dims (Shape) | Output Dims (Shape) |
|---|---|---|---|---|
| 8 | Matrix multiplication in each head, H=8 | B x H x S x L (1 x 1 x 20 x 64) | L x D (64 x 512) | B x S x H x D (1 x 20 x 1 x 512) |
| 9 | Sum over heads (H dimension) | B x S x H x D (1 x 20 x 1 x 512) | N/A | B x S x D (1 x 20 x 512) |
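Continuing my sketch from above, the reimagined steps 8 and 9 can be written as a per-head projection with an (L x D) slice of Wₒ, followed by a sum over the head dimension (again, this is my illustration of the idea, not the author's code):

```python
# Reimagined steps 8-9: project inside each head, then sum over heads
W_o_heads = W_o.view(H, L, D)         # slice the (D x D) matrix into H blocks of shape (L x D)

# (B, H, S, L) x (H, L, D) -> (B, H, S, D): one projected output per head
head_outputs = torch.einsum('bhsl,hld->bhsd', per_head, W_o_heads)

y_additive = head_outputs.sum(dim=1)  # sum over the H dimension -> (B, S, D)
print(y_additive.shape)               # torch.Size([1, 20, 512])
```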

Side note: this "summing over" is reminiscent of how summing over different channels happens in CNNs. In CNNs, each filter operates on the input, and then we sum the outputs across channels. Same here: each head can be seen as a channel, and the model learns a weight matrix to map each head's contribution into the final output space.

But why is project + sum mathematically equivalent to concat + project? In short, because the projection weights in the mechanistic perspective are just sliced versions of the weights in the conventional view (sliced across the D dimension and split to match each head).
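With the tensors from the two sketches above, a quick numerical check confirms the two routes agree (up to floating-point accumulation order):

```python
# concat + project (traditional) vs per-head project + sum (mechanistic view)
print(torch.allclose(y, y_additive, rtol=1e-4, atol=1e-4))   # True
```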

Image by Author: Why the re-imagining works

Let's focus on the H and D dimensions before the multiplication with Wₒ. From the image above, each head now has a vector of size 64 that is being multiplied with the weight matrix of shape (64 x 512). Let's denote the result by R and a head by h.

To get R₁,₁, we have this equation:

R₁,₁ = h₁,₁ × Wₒ₁,₁ + h₁,₂ × Wₒ₂,₁ + … + h₁,₆₄ × Wₒ₆₄,₁

Now let's say we had concatenated the heads to get an attention output of shape (1 x 512) and a weight matrix of shape (512 x 512); then the equation would have been:

R₁,₁ = h₁,₁ × Wₒ₁,₁ + h₁,₂ × Wₒ₂,₁ + … + h₁,₅₁₂ × Wₒ₅₁₂,₁

So, the part h₁,₆₅ × Wₒ₆₅,₁ + … + h₁,₅₁₂ × Wₒ₅₁₂,₁ would have been added. But this added part is exactly what is present in each of the other heads, in modulo-64 fashion. Said another way, if there is no concatenation, Wₒ₆₅,₁ is the value behind Wₒ₁,₁ in the second head, Wₒ₁₂₉,₁ is the value behind Wₒ₁,₁ in the third head, and so on, if we imagine the values for each head sitting behind one another. Hence, even without concatenation, the "summing over the heads" operation results in the same values being added.
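Written compactly (my notation: W_O^(i) denotes the i-th block of 64 rows of Wₒ), the argument above is just block matrix multiplication:

$$\mathrm{concat}(h_1, \dots, h_H)\, W_O \;=\; \sum_{i=1}^{H} h_i\, W_O^{(i)}, \qquad W_O = \begin{bmatrix} W_O^{(1)} \\ \vdots \\ W_O^{(H)} \end{bmatrix}$$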

In conclusion, this insight lays the foundation for looking at transformers as purely additive models, in that all of the operations in a transformer take the initial embedding and add to it. This view opens up new possibilities, like tracing features as they are learnt via additions through the layers (called circuit tracing), which is what mechanistic interpretability is about, as I will show in my next articles.


We have shown that this view is mathematically equivalent to the very different view that multi-head attention, by splitting Q, K, V, parallelizes and optimizes the computation of attention. Read more about this in this blog here, and the actual paper that introduces these points is here.


