
    Cerebras Reports Fastest DeepSeek R1 Distill Llama 70B Inference

    By Team_AIBS News | February 3, 2025 | 2 Mins Read


    Cerebras Systems today announced what it said is record-breaking performance for DeepSeek-R1-Distill-Llama-70B inference, achieving more than 1,500 tokens per second, 57 times faster than GPU-based solutions.

    Cerebras said this speed enables instant reasoning capabilities for one of the industry's most sophisticated open-weight models, running entirely on U.S.-based AI infrastructure with zero data retention.

    “DeepSeek R1 represents a new frontier in AI reasoning capabilities, and today we’re making it accessible at the industry’s fastest speeds,” said Hagay Lupesko, SVP of AI Cloud, Cerebras. “By achieving more than 1,500 tokens per second on our Cerebras Inference platform, we’re transforming minutes-long reasoning processes into near-instantaneous responses, fundamentally changing how developers and enterprises can leverage advanced AI models.”

    Powered by the Cerebras Wafer Scale Engine, the platform demonstrates real-world performance improvements. A standard coding prompt that takes 22 seconds on competing platforms completes in just 1.5 seconds on Cerebras, a 15x improvement in time to result. This breakthrough enables practical deployment of sophisticated reasoning models that traditionally require extensive computation time.
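    As a rough consistency check on the quoted figures, the claimed throughput and speedup imply the numbers below. The token count is an inference from the stated times, not a figure from the announcement:

```python
# Sanity-check the announced figures. The implied GPU throughput and the
# generated-token estimate are back-of-envelope inferences, not numbers
# stated in the announcement.
CEREBRAS_TPS = 1500     # claimed Cerebras throughput, tokens/second
SPEEDUP_VS_GPU = 57     # claimed speedup over GPU-based solutions

def implied_gpu_tps(cerebras_tps: float, speedup: float) -> float:
    """GPU throughput implied by the claimed speedup factor."""
    return cerebras_tps / speedup

def time_to_result(num_tokens: float, tps: float) -> float:
    """Seconds to generate num_tokens at a given tokens/second rate."""
    return num_tokens / tps

gpu_tps = implied_gpu_tps(CEREBRAS_TPS, SPEEDUP_VS_GPU)   # ~26 tok/s
# The coding-prompt comparison: 1.5 s on Cerebras at 1,500 tok/s
# corresponds to roughly 2,250 generated tokens.
est_tokens = 1.5 * CEREBRAS_TPS
print(f"implied GPU throughput: {gpu_tps:.1f} tok/s")
print(f"22 s vs 1.5 s speedup:  {22 / 1.5:.1f}x")  # ~14.7x, quoted as 15x
```

    The 22 s vs. 1.5 s comparison works out to about 14.7x, consistent with the rounded 15x figure in the announcement.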

    DeepSeek-R1-Distill-Llama-70B combines the advanced reasoning capabilities of DeepSeek’s 671B-parameter Mixture of Experts (MoE) model with Meta’s widely supported Llama architecture. Despite its efficient 70B-parameter size, the model demonstrates superior performance on complex mathematics and coding tasks compared to larger models.

    “Security and privacy are paramount for enterprise AI deployment,” continued Lupesko. “By processing all inference requests in U.S.-based data centers with zero data retention, we’re ensuring that organizations can leverage cutting-edge AI capabilities while maintaining strict data governance standards. Data stays in the U.S. 100% of the time and belongs solely to the customer.”

    The DeepSeek-R1-Distill-Llama-70B model is available immediately through Cerebras Inference, with API access available to select customers through a developer preview program. For more information about accessing instant reasoning capabilities for applications, visit www.cerebras.ai/contact-us.
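    For developers in the preview program, a call would presumably look like a standard chat-completions request. The sketch below only builds the HTTP request; the endpoint URL and model id are assumptions for illustration, not details confirmed by the announcement:

```python
import json
from urllib import request

# Hypothetical sketch: assumes Cerebras Inference exposes an
# OpenAI-compatible chat-completions endpoint. The URL and model id
# below are assumptions, not taken from the announcement.
API_URL = "https://api.cerebras.ai/v1/chat/completions"
MODEL = "deepseek-r1-distill-llama-70b"

def build_request(prompt: str, api_key: str) -> request.Request:
    """Construct (but do not send) a chat-completions request."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,  # stream tokens to observe throughput in practice
    }
    return request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
```

    Sending the request (and the credentials to do so) would depend on the developer preview program described above.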
