    AI Technology

    How to build a better AI benchmark

    By Team_AIBS News · May 8, 2025 · 3 Mins Read


    The limits of conventional testing

    If AI companies have been slow to respond to the growing failure of benchmarks, it's partly because the test-scoring approach has been so effective for so long.

    One of the biggest early successes of contemporary AI was the ImageNet challenge, a kind of antecedent to modern benchmarks. Released in 2010 as an open challenge to researchers, the database held more than 3 million images for AI systems to categorize into 1,000 different classes.
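    The scoring logic behind a challenge like this is deliberately simple, which is part of why it was so hard to dispute. A minimal, illustrative sketch (not the official ImageNet evaluation code) of how such method-agnostic scoring works:

    ```python
    # Illustrative sketch of ImageNet-style benchmark scoring: each image gets
    # one predicted class ID (out of, say, 1,000), and the score is simply the
    # fraction of predictions that match the ground-truth labels. The method
    # that produced the predictions never enters the calculation.

    def top1_accuracy(predictions, labels):
        """Fraction of images whose predicted class matches the true class."""
        assert len(predictions) == len(labels), "one prediction per image"
        correct = sum(p == t for p, t in zip(predictions, labels))
        return correct / len(labels)

    # Toy example: 5 "images" with hypothetical class IDs in [0, 999]
    preds = [17, 402, 402, 993, 0]
    truth = [17, 402, 511, 993, 1]
    print(top1_accuracy(preds, truth))  # 3 of 5 correct -> 0.6
    ```

    Because the score depends only on the predictions, any algorithm, however unconventional, competes on equal footing.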

    Crucially, the test was completely agnostic to methods, and any successful algorithm quickly gained credibility regardless of how it worked. When an algorithm called AlexNet broke through in 2012, with a then unconventional form of GPU training, it became one of the foundational results of modern AI. Few would have guessed in advance that AlexNet's convolutional neural nets would be the secret to unlocking image recognition, but after it scored well, no one dared dispute it. (One of AlexNet's developers, Ilya Sutskever, would go on to cofound OpenAI.)

    A large part of what made this challenge so effective was that there was little practical difference between ImageNet's object classification challenge and the actual process of asking a computer to recognize an image. Even if there were disputes about methods, no one doubted that the highest-scoring model would have an advantage when deployed in an actual image recognition system.

    But in the 12 years since, AI researchers have applied that same method-agnostic approach to increasingly general tasks. SWE-Bench is commonly used as a proxy for broader coding ability, while other exam-style benchmarks often stand in for reasoning ability. That broad scope makes it difficult to be rigorous about what a specific benchmark measures, which, in turn, makes it hard to use the findings responsibly.

    Where things break down

    Anka Reuel, a PhD student who has been focusing on the benchmark problem as part of her research at Stanford, has become convinced the evaluation problem is the result of this push toward generality. "We've moved from task-specific models to general-purpose models," Reuel says. "It's not about a single task anymore but a whole bunch of tasks, so evaluation becomes harder."

    Like the University of Michigan's Jacobs, Reuel thinks "the main issue with benchmarks is validity, even more than the practical implementation," noting: "That's where a lot of things break down." For a task as complicated as coding, for instance, it's nearly impossible to incorporate every possible scenario into your problem set. As a result, it's hard to gauge whether a model is scoring better because it's more skilled at coding or because it has more effectively manipulated the problem set. And with so much pressure on developers to achieve record scores, shortcuts are hard to resist.

    For developers, the hope is that success on a large number of specific benchmarks will add up to a generally capable model. But the techniques of agentic AI mean a single AI system can encompass a complex array of different models, making it hard to evaluate whether improvement on a specific task will lead to generalization. "There's just many more knobs you can turn," says Sayash Kapoor, a computer scientist at Princeton and a prominent critic of sloppy practices in the AI industry. "When it comes to agents, they've sort of given up on the best practices for evaluation."



