    Machine Learning

    Label Bias in ML. In 2018, Amazon scrapped an AI-driven… | by Mariyam Alshatta | Mar, 2025

By Team_AIBS News · March 28, 2025 · 2 Mins Read


    Created by an AI

In 2018, Amazon scrapped an AI-driven hiring tool after discovering a crucial flaw: it was systematically downgrading resumes that contained the word “women’s” (as in “women’s chess club” or “women’s leadership program”). The algorithm wasn’t explicitly programmed to discriminate, but it had learned from ten years of hiring data in which male candidates were disproportionately hired for technical roles. As a result, the AI concluded that male candidates were preferable and penalized anything associated with women.

This wasn’t a failure of the algorithm itself; it was a failure of the labels used to train it. Because past hiring decisions labeled successful candidates as “qualified” and rejected candidates as “unqualified,” the model absorbed historical biases as if they were objective truths. Label bias like this occurs when training labels are flawed, inconsistent, or reflect human prejudices, leading AI systems to internalize and reinforce systemic discrimination.

This isn’t just a hiring problem. Label bias can corrupt fraud detection models, medical diagnosis systems, and even criminal justice algorithms, embedding past errors into future decisions. If the labels used to train a model are biased, the model itself will be biased, no matter how advanced the algorithm is.

So here’s the real question: how can we ensure that the labels we use to train models reflect reality rather than reproducing past biases?
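One practical starting point is simply auditing the labels before training anything. The sketch below (toy data; the `group` and `label` column names are invented for illustration) compares positive-label rates across groups, a first-pass check rather than a complete fairness analysis:

```python
# A minimal label audit: compare positive-label rates across groups
# before any model is trained. A large gap is a prompt to investigate
# the labeling process, not by itself proof of bias.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "label": [1, 1, 0, 0, 0, 1, 0, 1],  # e.g. past "hired" decisions
})

# Positive-label rate per group.
rates = df.groupby("group")["label"].mean()
print(rates)
```

On real data one would also condition on legitimate qualification features, since raw rate gaps can reflect genuine differences in the underlying population rather than biased labeling.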

Label bias occurs when the labels used to train a machine learning model are flawed, inconsistent, or inherently biased, causing the model to internalize and perpetuate incorrect patterns. Because models learn solely from their training data, any bias in the labeling process gets embedded in their decision-making, no matter how sophisticated the algorithm is. Unlike feature selection bias, which arises from choosing misleading input features, label bias originates from the labeling process itself: the way outcomes are defined, categorized, or assigned during training.
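As a toy illustration of this mechanism, here is a synthetic sketch (all data, thresholds, and the `skill`/`group` features are invented; this is not the Amazon system): the historical labels required a higher skill bar for one group, and a model trained on those labels alone learns to penalize group membership.

```python
# Synthetic demonstration of label bias: the labels, not the features,
# carry the discrimination, and the model reproduces it faithfully.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)            # true qualification signal
group = rng.integers(0, 2, size=n)    # 0/1 protected attribute

# Biased historical labels: group 1 needed a higher skill bar to be hired.
hired = (skill > np.where(group == 1, 0.8, 0.0)).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The learned coefficient on `group` is negative: for identical skill,
# group-1 candidates get a lower predicted hiring probability.
print(model.coef_)
```

Nothing about the features is misleading here; the bias enters entirely through how the outcome was labeled, which is exactly the distinction from feature selection bias drawn above.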

One of the most common causes of label bias is historical bias, where models inherit prejudices from past human decisions. If a hiring model is trained on ten years of recruitment data where women were disproportionately overlooked for leadership roles…



