Understanding the DeepSeek-R1 Paper: A Beginner’s Guide

by Mehul Gupta | Data Science in your pocket | January 31, 2025



Large Language Models (LLMs) have been improving rapidly, bringing them closer to Artificial General Intelligence (AGI), the kind of AI that can think and reason like humans.

One of the biggest improvements in recent years is post-training, a step performed after the initial model training. It helps LLMs:

• Think better (improving reasoning skills).
• Align with human values (reducing harmful outputs).
• Personalize responses based on user preferences.
• Do all of this without using as much compute as training from scratch.

A breakthrough came with OpenAI’s o1 models, which extended the reasoning process at inference time (while the model is generating responses). The model takes more time to think before answering, which significantly improves its performance on tasks like math, coding, and scientific reasoning.

However, scaling this reasoning ability effectively during real-world use (test-time scaling) is still an open challenge.
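To get a feel for why spending more compute at inference helps, here is a toy back-of-the-envelope sketch. It assumes, purely for illustration, a fixed per-chain success probability p and a verifier that can recognize a correct answer when one appears; neither assumption comes from the paper.

```python
# Toy model of test-time scaling: if a single reasoning chain is correct
# with probability p, sampling n independent chains contains at least one
# correct answer with probability 1 - (1 - p)**n.
p = 0.15  # hypothetical per-chain success rate on a hard math problem

for n in (1, 4, 16, 64):
    success = 1 - (1 - p) ** n
    print(f"{n:>2} sampled chains -> P(at least one correct) = {success:.2f}")
```

The catch, of course, is that a reliable verifier (or reward model) is hard to build, which is exactly why test-time scaling remains an open problem.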

Researchers have tried different methods to enhance reasoning, including:

• Reward models (evaluating how good a response is).
• Reinforcement learning (RL) (teaching the model through trial and error).
• Search algorithms (Monte Carlo Tree Search, Beam Search, etc.); a toy sketch follows below.

So far, none of these methods has matched OpenAI’s o1 models in reasoning.
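As a concrete (and heavily simplified) illustration of the search-based approach, here is a toy beam search over reasoning steps. Both `reward_model` and `step_candidates` are hypothetical stubs standing in for a learned process reward model and a step generator; they are not from the paper.

```python
def reward_model(partial_solution: str) -> float:
    """Hypothetical process reward model scoring a partial chain of thought.
    This stub returns an arbitrary deterministic score; a real one would be
    a trained model."""
    return sum(map(ord, partial_solution)) % 100

def step_candidates(partial_solution: str) -> list[str]:
    """Hypothetical generator proposing possible next reasoning steps."""
    return [partial_solution + f" step{i};" for i in range(3)]

def beam_search(prompt: str, width: int = 2, depth: int = 3) -> str:
    # Keep only the `width` highest-scoring partial solutions at each depth.
    beams = [prompt]
    for _ in range(depth):
        candidates = [c for b in beams for c in step_candidates(b)]
        beams = sorted(candidates, key=reward_model, reverse=True)[:width]
    return beams[0]

print(beam_search("Q: 12 * 7 = ? A:"))
```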

    What This Paper Introduces

The paper explores a new way to improve reasoning using pure reinforcement learning (RL), meaning no supervised data (human-labeled examples). Instead, the model learns on its own through an RL framework called GRPO (Group Relative Policy Optimization), which we’ll discuss in some depth.
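A minimal sketch of GRPO’s core idea, group-relative advantages: sample several answers to the same prompt, score each with a rule-based reward, and normalize every reward against the group’s mean and standard deviation, so no separate critic (value) network is needed. The reward values below are made up for illustration.

```python
import statistics

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Group-relative advantage from the DeepSeek-R1 paper:
    A_i = (r_i - mean(rewards)) / std(rewards).
    Answers scoring above the group mean get a positive advantage
    (pushed up); below-average answers get a negative one."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against zero std
    return [(r - mu) / sigma for r in rewards]

# Hypothetical rule-based rewards for 4 sampled answers to one math prompt:
# 1.0 if the final answer is correct, 0.0 otherwise.
rewards = [1.0, 0.0, 0.0, 1.0]
print(grpo_advantages(rewards))  # -> [1.0, -1.0, -1.0, 1.0]
```

These advantages then weight a clipped policy-gradient update, but the normalization above is the piece that replaces a learned critic.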

Using DeepSeek-V3-Base as the foundation, they trained a model called DeepSeek-R1-Zero. Over thousands of RL steps, the model:

• Developed powerful reasoning skills.
• Improved its AIME 2024 benchmark score from 15.6% to 71.0% (and even 86.7% with majority voting, sketched below).
• Matched the reasoning ability of OpenAI-o1-0912.
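Majority voting (also called self-consistency) is simple enough to sketch in a few lines: sample many reasoning chains, extract each final answer, and return the most common one. The sampled answers below are hypothetical.

```python
from collections import Counter

def majority_vote(answers: list[str]) -> str:
    """Return the most frequent final answer across sampled chains.
    This is the trick behind the 86.7% AIME figure quoted above."""
    return Counter(answers).most_common(1)[0][0]

# Hypothetical final answers extracted from 8 sampled reasoning chains:
samples = ["42", "42", "17", "42", "42", "9", "42", "17"]
print(majority_vote(samples))  # -> "42"
```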

However, DeepSeek-R1-Zero had some problems:

• Poor readability.
• Language mixing (it struggled to keep responses in a single, consistent language).

To fix these issues, they introduced DeepSeek-R1, which combines:

• Cold-start fine-tuning (training on a small amount of labeled data).
• Reinforcement learning focused on reasoning.
• Supervised fine-tuning (SFT) using high-quality human-labeled data.

After these steps, DeepSeek-R1 matched OpenAI-o1-1217 in reasoning.
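A rough outline of that recipe as code, with every function a hypothetical placeholder for a full training stage (this mirrors the three-stage summary above, not DeepSeek’s actual code):

```python
# Each stage below stands in for a full training run; the bodies are
# deliberately empty placeholders.

def cold_start_sft(base_model, cold_start_data):
    """Fine-tune on a small curated set of long reasoning examples,
    mainly to fix readability before RL begins."""
    return base_model  # placeholder

def reasoning_rl(model):
    """Large-scale GRPO-style RL with rule-based rewards on reasoning tasks."""
    return model  # placeholder

def supervised_finetune(model, curated_data):
    """SFT on high-quality human-labeled data to polish general behavior."""
    return model  # placeholder

model = cold_start_sft("DeepSeek-V3-Base", "cold-start data")
model = reasoning_rl(model)
deepseek_r1 = supervised_finetune(model, "curated SFT data")
```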

Final Contribution: Model Distillation

They also distilled DeepSeek-R1 into smaller models (such as Qwen2.5-32B), showing that:

• Larger models learn better reasoning patterns.
• Smaller models can inherit this knowledge without needing complex RL training.

Their 14B distilled model even outperformed the best open-source models, setting new reasoning benchmarks for dense models.
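The distillation step is conceptually just supervised fine-tuning on teacher-generated traces. Here is a minimal toy sketch in PyTorch: the “student” is a throwaway two-layer model and the “teacher traces” are random token ids, purely to show that the objective is ordinary next-token cross-entropy, not RL.

```python
import torch
import torch.nn as nn

# Toy "student" language model (embedding + linear head). The point is
# only that distillation here is plain next-token prediction on traces
# sampled from the large teacher model.
vocab, dim = 100, 32
student = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
opt = torch.optim.AdamW(student.parameters(), lr=1e-3)

# Stand-in for token ids of (prompt + teacher reasoning trace) sequences;
# in the real setup these would be generated by DeepSeek-R1.
teacher_traces = [torch.randint(0, vocab, (16,)) for _ in range(8)]

for seq in teacher_traces:
    inputs, targets = seq[:-1], seq[1:]   # shift for next-token LM loss
    logits = student(inputs)              # (seq_len - 1, vocab)
    loss = nn.functional.cross_entropy(logits, targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the student only needs SFT, distillation is far cheaper than running the full RL pipeline on a small model directly.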

In summary:

• DeepSeek released two main models, DeepSeek-R1 and DeepSeek-R1-Zero.
• They also released several distilled versions of DeepSeek-R1, aimed more at deployment.
• The key discovery is using reinforcement learning directly to improve reasoning.


