    Data Science

    AI Inference: Meta Teams with Cerebras on Llama API

    By Team_AIBS News | May 2, 2025


    Sunnyvale, CA — Meta has teamed with Cerebras on AI inference in Meta’s new Llama API, combining Meta’s open-source Llama models with inference technology from Cerebras.

    Developers building on the Llama 4 Cerebras model in the API can expect speeds up to 18 times faster than traditional GPU-based solutions, according to Cerebras. “This acceleration unlocks an entirely new generation of applications that are impossible to build on other technology. Conversational low-latency voice, interactive code generation, instant multi-step reasoning, and real-time agents — all of which require chaining multiple LLM calls — can now be completed in seconds rather than minutes,” Cerebras said.
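    The latency point in the quote is about serial chaining: an agent or multi-step reasoning pipeline issues several dependent LLM calls, so end-to-end time is roughly the sum of the per-call times, and any per-call speedup applies to every link in the chain. Below is a minimal sketch of that structure, with a simulated call_llm stub standing in for any inference backend; the function, step names, and timings are illustrative assumptions, not Cerebras’s interface.

```python
import time

def call_llm(prompt: str, per_call_latency_s: float) -> str:
    """Stand-in for one LLM request; latency is simulated, no real API is called."""
    time.sleep(per_call_latency_s)
    return f"answer to: {prompt[:40]}"

def run_chain(steps: list[str], per_call_latency_s: float) -> float:
    """Run dependent steps serially, feeding each answer into the next prompt."""
    start = time.time()
    context = ""
    for step in steps:
        context = call_llm(f"{step}\nprevious: {context}", per_call_latency_s)
    return time.time() - start

steps = ["plan the task", "draft an answer", "critique the draft", "finalize"]
slow = run_chain(steps, per_call_latency_s=1.0)        # a slower backend
fast = run_chain(steps, per_call_latency_s=1.0 / 18)   # hypothetical 18x-faster per-call speed
print(f"serial chain of {len(steps)} calls: {slow:.2f}s vs {fast:.2f}s")
```

    Because the calls are strictly sequential, the chain finishes roughly 18 times sooner when each call is 18 times faster, which is the “seconds rather than minutes” claim in concrete terms.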

    By partnering with Meta to serve Llama models from Meta’s new API service, Cerebras gains exposure to an expanded developer audience and deepens its business and partnership with Meta and their teams.

    Since launching its inference solutions in 2024, Cerebras has delivered the world’s fastest Llama inference, serving billions of tokens through its own AI infrastructure. The broad developer community now has direct access to a robust, OpenAI-class alternative for building intelligent, real-time systems — backed by Cerebras speed and scale.

    “Cerebras is proud to make Llama API the fastest inference API in the world,” said Andrew Feldman, CEO and co-founder of Cerebras. “Developers building agentic and real-time apps need speed. With Cerebras on Llama API, they can build AI systems that are fundamentally out of reach for leading GPU-based inference clouds.”

    Cerebras is the fastest AI inference solution as measured by the third-party benchmarking site Artificial Analysis, reaching over 2,600 tokens/s for Llama 4 Scout, compared with ChatGPT at ~130 tokens/s and DeepSeek at ~25 tokens/s.

    Developers will be able to access the fastest Llama 4 inference by selecting Cerebras from the model options within the Llama API. This streamlined experience will make it easy to prototype, build, and scale real-time AI applications. To sign up for early access to the Llama API and to experience Cerebras speed today, visit www.cerebras.ai/inference.
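    The article does not show the request itself. Since the Llama API is described as an OpenAI-class alternative, the hedged sketch below assumes an OpenAI-compatible chat-completions endpoint; the base URL, environment variable, and model identifier are placeholders for illustration, not confirmed Llama API values, and the official documentation lists the real model name that routes to Cerebras.

```python
import os

from openai import OpenAI  # pip install openai; assumes an OpenAI-compatible endpoint

# Placeholder values: the real base URL and model ID come from the Llama API docs.
client = OpenAI(
    base_url="https://api.llama.example/v1",  # hypothetical endpoint, not the real URL
    api_key=os.environ["LLAMA_API_KEY"],      # hypothetical env var name
)

response = client.chat.completions.create(
    model="llama-4-scout-cerebras",  # illustrative ID for a Cerebras-backed Llama 4 Scout
    messages=[{"role": "user", "content": "In one sentence, why does per-call latency matter for agents?"}],
)
print(response.choices[0].message.content)
```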




