Fighting Fraud Fairly: Upgrade Your AI Toolkit 🧰

By César Ortega Quintero | January 22, 2025


Understanding the vehicle insurance fraud detection dataset

For this exercise, we will work with a publicly available vehicle insurance fraud detection dataset [31], which contains 15,420 observations and 33 features. The target variable, FraudFound_P, labels whether a claim is fraudulent, with 933 observations (5.98%) identified as fraud related. The dataset includes a range of potential predictors, such as:

• Demographic and policy-related features: gender, age, marital status, vehicle category, policy type, policy number, driver rating.
• Claim-related features: day of week claimed, month claimed, days policy claim, witness present.
• Policy-related features: deductible, make, vehicle price, number of endorsements.

Among these, gender and age are considered protected attributes, which means we need to pay special attention to how they may influence the model's predictions. Understanding the dataset's structure and identifying potential sources of bias are essential.
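As a first step, it helps to confirm the shape, class imbalance, and protected attributes directly. Below is a minimal sketch; the file name and the raw column names ("Sex", "Age") are assumptions about how the dataset [31] is stored locally.

import pandas as pd

# Minimal sketch: load the claims data. The file name "carclaims.csv" is an
# assumption; point it at wherever the dataset [31] lives locally.
df = pd.read_csv("carclaims.csv")

# Confirm the shape reported above: 15,420 observations and 33 features.
print(df.shape)

# Class imbalance: FraudFound_P = 1 should cover roughly 5.98% of claims.
print(df["FraudFound_P"].value_counts(normalize=True))

# Protected attributes to keep an eye on during modeling
# (column names "Sex" and "Age" are assumptions about the raw file).
print(df["Sex"].value_counts())
print(df["Age"].describe())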

The business challenge

The goal of this exercise is to build a machine learning model to identify potentially fraudulent motor insurance claims. Fraud detection can significantly improve claim handling efficiency, reduce investigation costs, and lower losses paid out on fraudulent claims. However, the dataset presents a significant challenge due to high class imbalance, with only 5.98% of the claims labeled as fraudulent.

In the context of fraud detection, false negatives (i.e., missed fraudulent claims) are particularly costly, as they result in financial losses and investigation delays. To address this, we will prioritize the recall metric for identifying the positive class (FraudFound_P = 1). Recall measures the ability of the model to capture fraudulent claims, even at the expense of precision, ensuring that as many fraudulent claims as possible are identified and handled in a timely fashion by analysts in the fraud team.

Baseline model

Here, we will build the initial model for fraud detection using a set of predictors that include demographic and policy-related features, with an emphasis on the gender attribute. For the purposes of this exercise, the gender feature has explicitly been included as a predictor to deliberately introduce bias and force its appearance in the model, given that excluding it would result in a baseline model that is not biased. Moreover, in a real-world setting with a more comprehensive dataset, there are usually indirect proxies that may leak bias into the model. In practice, it is common for models to inadvertently use such proxies, leading to undesirable biased predictions, even when the sensitive attributes themselves are not directly included.

In addition, we excluded age as a predictor, aligning with the individual fairness approach known as "fairness by unawareness," where we deliberately remove any sensitive attributes that could lead to discriminatory outcomes.
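A minimal sketch of such a baseline is shown below. It assumes X_train / X_test are one-hot encoded feature matrices that keep the gender dummies (PA_Female, PA_Male, used again later) and drop age, and that y_train / y_test hold FraudFound_P; the exact preprocessing is not shown in this article.

from sklearn.metrics import classification_report
from xgboost import XGBClassifier

# Baseline sketch: XGBoost on one-hot encoded predictors that deliberately
# include gender (PA_Female / PA_Male) and exclude age.
model = XGBClassifier(use_label_encoder=False, eval_metric='logloss')
model.fit(X_train, y_train)

# Score with a custom threshold of 0.1 instead of the default 0.5,
# favoring recall on the minority (fraud) class.
y_prob = model.predict_proba(X_test)[:, 1]
threshold = 0.1
y_pred = (y_prob >= threshold).astype(int)

print(classification_report(y_test, y_pred, digits=3))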

In the following image, we present the Classification Results, Distribution of Predicted Probabilities, and Lift Chart for the baseline model using the XGBoost classifier with a custom threshold of 0.1 (y_prob >= threshold) to identify predicted positive fraudulent claims. This model will serve as a starting point for measuring and mitigating bias, which we will explore in later sections.

Baseline model results. Image compiled by the author

Based on the classification results and visualizations presented above, we can see that the model reaches a recall of 86%, which is in line with our business requirements. Since our primary goal is to identify as many fraudulent claims as possible, high recall is crucial. The model correctly identifies most of the fraudulent claims, although the precision for fraudulent claims (17%) is lower. This trade-off is acceptable given that high recall ensures that the fraud investigation team can handle most fraudulent claims, minimizing potential financial losses.

The distribution of predicted probabilities shows a significant concentration of predictions near zero, indicating that the model is classifying many claims as non-fraudulent. This is expected given the highly imbalanced nature of the dataset (fraudulent claims represent only 5.98% of total claims). Moreover, the Lift Chart highlights that focusing on the top deciles provides significant gains in identifying fraudulent claims. The model's ability to increase the detection of fraud in the higher deciles (with a lift of 3.5x in the 10th decile) supports the business objective of prioritizing the investigation of claims that are more likely to be fraudulent, increasing the efficiency of the fraud detection team's efforts.

These results align with the business goal of improving fraud detection efficiency while minimizing costs associated with investigating non-fraudulent claims. The recall value of 86% ensures that we are not missing a large portion of fraudulent claims, while the lift chart allows us to prioritize resources effectively.
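For reference, the decile lift behind the Lift Chart can be computed directly from the predicted probabilities; the sketch below reuses y_prob and y_test from the baseline.

import numpy as np
import pandas as pd

# Rank claims by predicted probability, split into 10 deciles, and compare
# each decile's observed fraud rate to the overall base rate.
scores = pd.DataFrame({"y_true": np.asarray(y_test), "y_prob": y_prob})
scores["decile"] = pd.qcut(scores["y_prob"].rank(method="first"), 10, labels=False) + 1

base_rate = scores["y_true"].mean()
lift = scores.groupby("decile")["y_true"].mean() / base_rate
print(lift)  # the 10th decile should show roughly 3.5x for the baseline above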

    Measuring bias

Based on the XGBoost classifier, we evaluate the potential bias in our fraud detection model using binary metrics from the Holistic AI library. The code snippet below illustrates this.

from holisticai.bias.metrics import classification_bias_metrics
from holisticai.bias.plots import bias_metrics_report

# Define protected attributes (group_a and group_b)
group_a_test = X_test['PA_Female'].values
group_b_test = X_test['PA_Male'].values

# Evaluate bias metrics with the custom threshold
metrics = classification_bias_metrics(group_a=group_a_test, group_b=group_b_test, y_pred=y_pred, y_true=y_test)

print("Bias Metrics with Custom Threshold:\n", metrics)
bias_metrics_report(model_type='binary_classification', table_metrics=metrics)

Given the nature of the dataset and the business challenge, we focus on Equality of Opportunity metrics to ensure that individuals from both groups have equal chances of being correctly classified based on their true characteristics. Specifically, we aim to ensure that errors in prediction, such as false positives or false negatives, are distributed evenly across groups. This way, no group experiences disproportionately more errors than others, which is critical for achieving fairness in decision-making. For this exercise, we focus on the gender attribute (female and male), which is deliberately included as a predictor in the model to assess its influence on fairness.

Bias metrics report of the baseline model. Image compiled by the author

The Equality of Opportunity bias metrics generated using a custom threshold of 0.1 for classification are presented below.

• Equality of Opportunity Difference: -0.126
  This metric directly evaluates whether the true positive rate is equal across the groups. A negative value suggests that females are slightly less likely to be correctly labeled as fraudulent compared to males, indicating a potential bias favoring males in correctly identifying fraud.
• False Positive Rate Difference: -0.076
  The False Positive Rate difference is within the fair interval [-0.1, 0.1], indicating no significant disparity in the false positive rates between groups.
• Average Odds Difference: -0.101
  Average odds difference measures the balance of true positive and false positive rates across groups. A negative value here suggests that the model may be slightly less accurate in identifying fraudulent claims for females than for males.
• Accuracy Difference: 0.063
  The Accuracy difference is within the fair interval [-0.1, 0.1], indicating minimal bias in overall accuracy between groups.
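For intuition, the headline metric can also be reproduced by hand as the gap in true positive rates between the two groups. The sketch below reuses y_test, y_pred, and the group masks from the earlier snippet, and assumes the library's sign convention of group_a (female) minus group_b (male).

import numpy as np

def tpr(y_true, y_pred, mask):
    # True positive rate restricted to one group: of the truly fraudulent
    # claims in that group, what fraction did the model flag?
    mask = mask.astype(bool)
    positives = y_true[mask] == 1
    return (y_pred[mask][positives] == 1).mean()

y_true = np.asarray(y_test)
eq_opp_diff = tpr(y_true, y_pred, group_a_test) - tpr(y_true, y_pred, group_b_test)
print(eq_opp_diff)  # about -0.126 for the baseline model above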

There are small but significant disparities in Equality of Opportunity and Average Odds Difference, with females being slightly less likely to be correctly labeled as fraudulent. This suggests a potential area for improvement, where further steps could be taken to reduce these biases and enhance fairness for both groups.

As we proceed in the next sections, we will explore techniques for mitigating this bias and improving fairness, while striving to maintain model performance.

    Mitigating bias

In the effort to mitigate bias from the baseline model, the binary mitigation algorithms included in the Holistic AI library were tested. These algorithms can be categorized into three types:

• Pre-processing techniques aim to modify the input data such that any model trained on it would not exhibit biases. These techniques adjust the data distribution to ensure fairness before training begins. The algorithms evaluated were Correlation Remover, Disparate Impact Remover, Learning Fair Representations, and Reweighing.
• In-processing techniques adjust the learning process itself, directly influencing the model during training to ensure fairer predictions. These techniques aim to achieve fairness during the optimization process. The algorithms evaluated were Adversarial Debiasing, Exponentiated Gradient, Grid Search Reduction, Meta Fair Classifier, and Prejudice Remover.
• Post-processing techniques adjust the model's predictions after it has been trained, ensuring that the final predictions satisfy some statistical measure of fairness. The algorithms evaluated were Calibrated Equalized Odds, Equalized Odds, LP Debiaser, ML Debiaser, and Reject Option Classification.

The results from applying the various mitigation algorithms, focusing on key performance and fairness metrics, are presented in the accompanying table.

Comparison table of mitigation algorithm results. Table compiled by the author

While none of the algorithms tested outperformed the baseline model, the Disparate Impact Remover (a pre-processing technique) and Equalized Odds (a post-processing technique) showed promising results. Both algorithms improved the fairness metrics considerably, but neither produced results as close to the baseline model's performance as expected. Moreover, I found that adjusting the threshold for Disparate Impact Remover and Equalized Odds made it possible to match baseline performance while keeping the equality of opportunity bias metrics within the fair interval.

Following academic recommendations stating that post-processing techniques can be significantly sub-optimal (Woodworth et al., 2017)[32], in that they act on the model after it has been learned and can lead to higher performance degradation compared to other techniques (Ding et al., 2021)[33], I decided to prioritize the Disparate Impact Remover pre-processing algorithm over the post-processing Equalized Odds technique. The code snippet below illustrates this process.

from holisticai.bias.mitigation import (AdversarialDebiasing, ExponentiatedGradientReduction, GridSearchReduction, MetaFairClassifier,
    PrejudiceRemover, CorrelationRemover, DisparateImpactRemover, LearningFairRepresentation, Reweighing,
    CalibratedEqualizedOdds, EqualizedOdds, LPDebiaserBinary, MLDebiaser, RejectOptionClassification)
from holisticai.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier

# Step 1: Define the Disparate Impact Remover (pre-processing)
mitigator = DisparateImpactRemover(repair_level=1.0)  # Repair level: 0.0 (no change) to 1.0 (full repair)

# Step 2: Define the XGBoost model
model = XGBClassifier(use_label_encoder=False, eval_metric='logloss')

# Step 3: Create a pipeline with Disparate Impact Remover and XGBoost
pipeline = Pipeline(steps=[
    ('scaler', StandardScaler()),      # Standardize the data
    ('bm_preprocessing', mitigator),   # Apply bias mitigation
    ('estimator', model)               # Train the XGBoost model
])

# Step 4: Fit the pipeline
pipeline.fit(
    X_train_processed, y_train,
    bm__group_a=group_a_train, bm__group_b=group_b_train  # Pass sensitive groups
)

# Step 5: Make predictions with the pipeline
y_prob = pipeline.predict_proba(
    X_test_processed,
    bm__group_a=group_a_test, bm__group_b=group_b_test
)[:, 1]  # Probability for the positive class

# Step 6: Apply a custom threshold
threshold = 0.03
y_pred = (y_prob >= threshold).astype(int)
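To produce the bias report shown further below, the same evaluation used for the baseline can simply be rerun on these mitigated predictions, reusing classification_bias_metrics, bias_metrics_report, and the group masks from earlier:

# Re-evaluate fairness on the mitigated predictions at threshold 0.03.
metrics_mitigated = classification_bias_metrics(
    group_a=group_a_test, group_b=group_b_test, y_pred=y_pred, y_true=y_test
)
print("Bias Metrics after Disparate Impact Remover:\n", metrics_mitigated)
bias_metrics_report(model_type='binary_classification', table_metrics=metrics_mitigated)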

We further customized the Disparate Impact Remover algorithm by lowering the probability threshold, aiming to improve model fairness while maintaining key performance metrics. This adjustment was made to explore the potential impact on both model performance and bias mitigation.

Results of the de-biased model (Disparate Impact Remover). Image compiled by the author

The results show that by adjusting the threshold from 0.1 to 0.03, we significantly improved recall for fraudulent claims (from 0.528 in the baseline to 0.863), but at the cost of precision (which dropped from 0.225 to 0.172). This aligns with the business objective of minimizing undetected fraudulent claims, despite a slight increase in false positives. The tradeoff is acceptable: lowering the threshold increases the model's sensitivity (higher recall) but leads to more false positives (lower precision). However, the overall accuracy of the model is only slightly impacted (from 0.725 to 0.716), reflecting the broader tradeoff between recall and precision that often accompanies threshold adjustments in imbalanced datasets like fraud detection.

Bias metrics report for the de-biased model (Disparate Impact Remover). Image compiled by the author

The equality of opportunity bias metrics show minimal impact after adjusting the threshold to 0.03. The Equality of Opportunity Difference stays within the fair interval at -0.070, indicating that the model still provides equal chances of being correctly classified for both groups. The False Positive Rate Difference of -0.041 and the Average Odds Difference of -0.056 both remain within the acceptable range, suggesting no significant bias favoring one group over the other. The Accuracy Difference of 0.032 also remains small, confirming that the model's overall accuracy is not disproportionately affected by the threshold adjustment. These results demonstrate that the fairness of the model, in terms of equality of opportunity, is well-maintained even with the threshold change.

Moreover, adjusting the probability threshold is essential when working with imbalanced datasets such as fraud detection. The distribution of predicted probabilities will change with each mitigation technique applied, and thresholds should be reviewed and adapted accordingly to balance both performance and fairness, as well as other dimensions not considered in this article (e.g., explainability or privacy). The choice of threshold can significantly influence the model's behavior, and final decisions should be carefully adjusted based on business needs.
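One way to carry out such a review is to sweep candidate thresholds and report recall, precision, and the equality-of-opportunity gap side by side. The sketch below reuses y_prob from the mitigated pipeline plus the tpr helper, y_true, and the group masks from earlier; the candidate thresholds are arbitrary choices for illustration.

from sklearn.metrics import precision_score, recall_score

# Sweep candidate thresholds and report performance and fairness together,
# so the final cutoff can be chosen against both business and fairness needs.
for t in [0.01, 0.03, 0.05, 0.10, 0.20]:
    y_t = (y_prob >= t).astype(int)
    gap = tpr(y_true, y_t, group_a_test) - tpr(y_true, y_t, group_b_test)
    print(f"threshold={t:.2f}  recall={recall_score(y_test, y_t):.3f}  "
          f"precision={precision_score(y_test, y_t, zero_division=0):.3f}  "
          f"eq_opp_diff={gap:+.3f}")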

In conclusion, the Disparate Impact Remover with a threshold of 0.03 offers a reasonable compromise, improving recall for fraudulent claims while maintaining fairness in equality of opportunity metrics. This approach aligns with both business objectives and fairness considerations, making it a viable option for mitigating bias in fraud detection models.


