    The Math Behind In-Context Learning | by Shitanshu Bhushan | Dec, 2024



In 2022, Anthropic released a paper in which they showed evidence that induction heads might constitute the mechanism for ICL. What are induction heads? As stated by Anthropic: "Induction heads are implemented by a circuit consisting of a pair of attention heads in different layers that work together to copy or complete patterns." Simply put, given a sequence like [..., A, B, ..., A], an induction head completes it with B, on the reasoning that if A was followed by B earlier in the context, it is likely that A is followed by B again. When you have a sequence like "…A, B…A", the first attention head copies previous-token information into each position, and the second attention head uses this information to find where A appeared before and predict what came after it (B).
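As a toy illustration of that completion rule (a minimal sketch of the lookup the two heads jointly implement, not actual attention code):

def induction_complete(tokens):
    # Toy induction-head rule: find the most recent earlier occurrence
    # of the final token and predict the token that followed it.
    last = tokens[-1]
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == last:
            return tokens[i + 1]   # "... A, B ... A" -> predict B
    return None                    # no earlier occurrence, no prediction

print(induction_complete(["x", "A", "B", "y", "A"]))   # prints "B"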

Recently, a lot of research has shown that transformers could be doing ICL via gradient descent (Garg et al. 2022, Oswald et al. 2023, etc.) by exhibiting the relation between linear attention and gradient descent. Let's revisit least squares and gradient descent:

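The original equation image is not reproduced here; in the standard notation the text refers to, the least-squares objective, its gradient, and the gradient-descent update are

L(w) = \frac{1}{2} \sum_{i=1}^{n} (w^\top x_i - y_i)^2, \qquad
\nabla L(w) = \sum_{i=1}^{n} (w^\top x_i - y_i)\, x_i, \qquad
w \leftarrow w - \eta\, \nabla L(w),

where \eta is the step size.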

Now let's see how this links with linear attention.

Here we treat linear attention as simply softmax attention minus the softmax operation. The basic linear attention formula:

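The equation image is likewise missing; dropping the softmax from standard attention leaves, for a query q attending over keys k_i and values v_i,

\text{Attn}(q) = \sum_{i} (q^\top k_i)\, v_i,

i.e. Q K^\top V without the softmax normalization.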

Let's start with a single-layer construction that captures the essence of in-context learning. Suppose we have n training examples (x₁, y₁), …, (xₙ, yₙ), and we want to predict y_{n+1} for a new input x_{n+1}.

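The constructed equation image is also unavailable; a common setup in the Oswald et al. line of work (the exact parameterization in the original figure may differ) embeds each example as a token e_i = (x_i, y_i) and the query as e_{n+1} = (x_{n+1}, 0), so that the linear-attention output at the query position takes the form

\hat{y}_{n+1} = \sum_{i=1}^{n} (x_{n+1}^\top W x_i)\, y_i,

where W collects the query and key projection matrices.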

This looks very similar to what we got with gradient descent, except that in linear attention we have an extra term W. What linear attention is implementing is something known as preconditioned gradient descent (PGD), where instead of the standard gradient step we modify the gradient with a preconditioning matrix W:

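The image is again unavailable; the preconditioned update it describes is, in the same notation,

w \leftarrow w - \eta\, W\, \nabla L(w),

with standard gradient descent as the special case W = I. Starting from w_0 = 0, one such step gives w_1 = \eta\, W \sum_i y_i x_i, and its prediction w_1^\top x_{n+1} = \eta \sum_i (x_{n+1}^\top W x_i)\, y_i is exactly the linear-attention output above (with \eta absorbed into W).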

What we have shown here is that we can construct a weight matrix such that one layer of linear attention performs one step of PGD.
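A minimal numpy sketch of this equivalence (my own construction under the assumptions above, not code from any of the cited papers):

import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 32

# in-context examples (x_i, y_i) from a linear task, plus a query x_q
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true
x_q = rng.normal(size=d)

# arbitrary preconditioning matrix (the step size eta is absorbed into W)
W = rng.normal(size=(d, d))

# (a) one layer of linear attention, read out at the query position:
#     y_hat = sum_i (x_q^T W x_i) * y_i
y_attn = sum((x_q @ W @ X[i]) * y[i] for i in range(n))

# (b) one step of preconditioned gradient descent from w_0 = 0 on
#     L(w) = 1/2 * sum_i (w^T x_i - y_i)^2
w0 = np.zeros(d)
grad = X.T @ (X @ w0 - y)    # = -sum_i y_i x_i at w_0 = 0
w1 = w0 - W @ grad           # preconditioned update
y_pgd = x_q @ w1

print(np.allclose(y_attn, y_pgd))   # True: the two predictions coincide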

We saw how attention can implement "learning algorithms": algorithms where, given a number of demonstrations (x, y), the model learns from those demonstrations to predict the output for any new query. While the exact mechanisms involving multiple attention layers and MLPs are complex, researchers have made progress in understanding how in-context learning works mechanistically. This article provides an intuitive, high-level introduction to help readers understand the inner workings of this emergent ability of transformers.

To read more on this topic, I'd suggest the following papers:

    In-context Learning and Induction Heads

    What Can Transformers Learn In-Context? A Case Study of Simple Function Classes

    Transformers Learn In-Context by Gradient Descent

    Transformers learn to implement preconditioned gradient descent for in-context learning

This blog post was inspired by coursework from my graduate studies during Fall 2024 at the University of Michigan. While the courses provided the foundational knowledge and motivation to explore these topics, any errors or misinterpretations in this article are entirely my own. This represents my personal understanding and exploration of the material.


