    Optimizing Causal Decisions with Gurobi Machine Learning: A Step-by-Step Tutorial | by Yuji Isobe | Apr, 2025

By Team_AIBS News | April 17, 2025 | 16 min read


Have you ever built a machine learning model to estimate causal effects and then wondered how to act on those insights? For example, say you can predict how likely each individual is to respond to a treatment — how do you decide who should get that treatment under limited resources? This tutorial tackles that problem by combining machine learning and mathematical optimization. We’ll walk through a demo from the GitHub project “Causal Inference Demo with Gurobi Machine Learning” to show how to turn predictions into optimal decisions, step by step.

Illustration of using a trained machine learning model inside a mathematical optimization pipeline to find the best solution.

Causal inference is about understanding cause-and-effect relationships — for instance, how offering an incentive (treatment) causes a change in outcome. A common machine learning approach for causal inference is uplift modeling, which splits a population into groups like Persuadables (those who respond only if treated) versus Sure Things (those who respond regardless), and so on. However, uplift models alone don’t tell you what to do when you have constraints (like a limited budget for incentives). They identify who might respond to treatment, but not which individuals to treat when you can’t treat everyone.

Classic uplift modeling framework: people fall into categories like “Persuadable” or “Sure Thing” based on whether they respond with or without treatment. This helps gauge treatment-effect heterogeneity, but it doesn’t directly solve the decision problem under constraints.

Enter mathematical optimization. By formulating the decision as an optimization problem, we can directly compute the best treatment plan under our constraints. In this tutorial, we use Gurobi’s Machine Learning integration to embed a causal prediction model (a logistic regression) into an optimization model. This approach lets us find, for example, the optimal set of incentives to offer in order to maximize the number of positive outcomes, given budget limits.

Gurobi Machine Learning (gurobi-ml) is a library introduced with Gurobi 10.0 (Nov 2022) that allows us to incorporate trained ML models into optimization (specifically, as mixed-integer programming constraints). In practice, it translates the predictive model into mathematical expressions that the Gurobi solver can handle (for certain models, like logistic regression, this involves building a piecewise-linear approximation of the model’s prediction function).
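If you want to run the demo yourself, you will need the Gurobi Python stack plus the dataset package. The PyPI names below are the commonly used ones for these libraries; treat them as assumptions to verify against your environment (and note that Gurobi itself requires a license):

# Assumed PyPI package names for this demo (verify before relying on them):
#   pip install gurobipy gurobipy-pandas gurobi-machinelearning causaldata scikit-learn pandas numpy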

In the example below, we’ll tackle a policy decision problem from an economics study. The dataset comes from Thornton (2008), where individuals in rural Malawi were offered random monetary incentives to learn their HIV test results. The question we’ll answer is: given a limited incentive budget, which people should we pay (and how much) to maximize the number of individuals who return for their HIV results? By the end, you’ll see how a machine learning model (predicting the chance that someone returns for their results) can be combined with optimization to yield an optimal incentive allocation strategy.

    Let’s dive into the step-by-step implementation.

First, we import the necessary libraries and load the dataset. We’ll use pandas and NumPy for data handling, scikit-learn for our ML model, and Gurobi (with its pandas integration) for optimization. We also import Gurobi’s machine learning helper add_predictor_constr to integrate the ML model into the optimization model.

# Import necessary packages
    import gurobipy as gp
    import gurobipy_pandas as gppd
    import numpy as np
    import pandas as pd
    from gurobi_ml import add_predictor_constr
    from causaldata import thornton_hiv # dataset
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

Next, we load the HIV incentives dataset from the causaldata package and take a quick look:

    # Load the Thornton (2008) HIV dataset
data = thornton_hiv.load_pandas().data
print(data.head())

This dataset contains the following columns:

• villnum – Village ID
• got – Indicator of whether the person got (picked up) their HIV test results (this is the outcome we want to maximize)
• distvct – Distance (in km) to the testing center
• tinc – Total incentive amount offered (in local currency)
• any – Indicator of whether any incentive was offered (tinc > 0)
• age – Age of the individual
• hiv2004 – The person’s HIV status result (1 if positive, 0 if negative)

For context, in the original study only about 34% of people without any incentive learned their HIV status, but even a small incentive (worth roughly one-tenth of a day’s wage) doubled that share. In other words, incentives have a big effect on the likelihood of got=1. Our goal is to exploit this by allocating a fixed incentive budget optimally.
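As a quick sanity check of that pattern, we can compare the raw return rates for people who were and were not offered any incentive, using the got and any columns described above (a small sketch):

# Share of people who got their results, split by whether any incentive was offered
print(data.groupby("any")["got"].mean())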

Before modeling, we’ll do a train-test split on the data. We’ll train the ML model on one portion and reserve another portion to simulate a “deployment” scenario where we decide incentives for a new set of individuals (the test set). This isn’t strictly necessary for optimization, but it mirrors a practical situation where you train on historical data and then optimize for a new group.

# Define feature and target columns
features = ['tinc', 'distvct', 'age']
target = 'got'
# Split data into training and test sets
train, test = train_test_split(data, test_size=0.2, random_state=0)
# For optimization, we'll treat 'tinc' (incentive) as a decision variable.
# Remove the actual 'tinc' and outcome 'got' from the test feature set (they will be decided/predicted).
test = test.drop(columns=['tinc', 'got'])
print(test.shape[0], "individuals in test set.")

After this step, train contains the data (including incentives and outcomes) we’ll use to fit our model, while test contains the features of the individuals for whom we need to decide incentives. We dropped tinc and got from test because, in that set, tinc will be determined by our optimization and got will be predicted by our model. The test set size is printed (for example, 200 individuals).

Now we train a machine learning model that predicts the probability that an individual gets their HIV results (got=1) given their features. We choose a logistic regression model, since it’s appropriate for binary outcomes and, importantly, it’s supported by Gurobi’s ML integration. We’ll include distvct (distance) and age as predictive features, and tinc (incentive amount) as well, since offering a higher incentive should increase the probability of retrieval.

Before fitting, we scale the features using StandardScaler (a common practice to improve logistic regression convergence). We’ll create a scikit-learn pipeline for convenience.

To gain insight into how Gurobi-ML works, we instrument the scaler and logistic regression classes to print messages whenever their attributes or methods are accessed. This is a neat trick to peek under the hood later. (We subclass StandardScaler and LogisticRegression and override __getattribute__ to log calls.) This isn’t required for functionality, but it will let us see which model parameters Gurobi reads when formulating the optimization constraints.

# (Optional) Wrap StandardScaler to log attribute access for demonstration
class LoggingStandardScaler(StandardScaler):
    def __getattribute__(self, name):
        attr = super().__getattribute__(name)
        if callable(attr):
            def new_func(*args, **kwargs):
                print(f'Calling StandardScaler.{name}()')
                return attr(*args, **kwargs)
            return new_func
        else:
            print(f'Accessing StandardScaler.{name} attribute')
            return attr

# Wrap LogisticRegression similarly
class LoggingLogisticRegression(LogisticRegression):
    def __getattribute__(self, name):
        attr = super().__getattribute__(name)
        if callable(attr):
            def new_func(*args, **kwargs):
                print(f'Calling LogisticRegression.{name}()')
                return attr(*args, **kwargs)
            return new_func
        else:
            print(f'Accessing LogisticRegression.{name} attribute')
            return attr

# Use the wrapped classes for transparency
scaler = LoggingStandardScaler()
logreg = LoggingLogisticRegression(random_state=1)

Now we set up the pipeline with our logging scaler and logistic regressor, and fit it to the training data:

# Create a pipeline and train the model on the training set
pipe = make_pipeline(scaler, logreg)
pipe.fit(X=train[features], y=train[target])

After fitting, we have a trained logistic model pipe that can predict the probability of got=1 for a person, given their incentive tinc, distance, and age. This model will serve as the predictive component in our optimization.

Technical note: on this dataset, the logistic regression will learn, for example, that higher incentives (tinc) increase the probability of uptake (got), while greater distance (distvct) likely decreases it (since it’s harder to travel to get results), and perhaps age has some effect. We won’t focus on the exact model coefficients here, but rather on how to use the model in optimization; a quick way to peek at them is sketched below.
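The sketch below inspects what the fitted pipeline learned. The coefficients are on the standardized feature scale (because of the StandardScaler step), so look at signs and relative magnitudes rather than raw units; the pipe[-1] indexing and the expected signs are assumptions consistent with the note above:

# Inspect the fitted logistic regression inside the pipeline (a sketch)
coefs = pd.Series(pipe[-1].coef_[0], index=features)
print(coefs)                          # expected: positive for tinc, likely negative for distvct
print("intercept:", pipe[-1].intercept_[0])
# Sanity check: predicted probabilities of got=1 for a few training rows
print(pipe.predict_proba(train[features].head())[:, 1])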

With the predictive model in hand, we turn to formulating the optimization problem. The goal is to choose incentive amounts for each individual in our test set so as to maximize the total number of people who get their results, subject to a budget constraint.

    Let’s denote:

• x_i as the incentive amount we give to individual i (our decision variables, corresponding to tinc).
• y_i as the predicted probability that individual i gets their result (output by our ML model given x_i, distance, and age).

We want to maximize the sum of the y_i (the expected number of people who get results). The constraints are: (1) we have a limited total budget for incentives, and (2) each individual’s incentive is capped at a maximum (in the study, incentives were at most 3 units of currency).

In equation form, our optimization model is:

    maximize    Σ_i y_i
    subject to  y_i = f(x_i, distvct_i, age_i)    for every individual i
                Σ_i x_i ≤ B
                0 ≤ x_i ≤ 3                        for every individual i

where B is the total incentive budget and f(...) is the prediction function given by our logistic regression model (mapping incentive, distance, and age to the probability of got=1).

Now, let’s implement this with Gurobi. We’ll create a new optimization model and add the decision variables:

# Create a new Gurobi model
m = gp.Model()
# Add a decision variable y_i for each test instance, representing the probability of the outcome (got=1)
y = gppd.add_vars(m, test, name="probability")
# Add a decision variable x_i (incentive) for each test instance, with bounds 0 <= x_i <= 3
test = test.gppd.add_vars(m, lb=0.0, ub=3.0, name="tinc")
x = test["tinc"]
# Make sure the DataFrame `test` now has columns [tinc, distvct, age] in the correct order
test = test[["tinc", "distvct", "age"]]
print(test.head())

The gppd.add_vars utility from gurobipy_pandas helps create variables aligned with the indices of our test DataFrame. We first added the y variables (one per individual) and then added a new column "tinc" of variables to the test DataFrame for the incentives. After this, test contains the tinc variable column alongside each individual’s fixed features (distvct and age).

Printing test.head() would show something like:

                                     tinc   distvct   age
993  <gurobi.Var *Awaiting Model Update*>  2.144576  30.0
859  <gurobi.Var *Awaiting Model Update*>  3.905001  25.0
298  <gurobi.Var *Awaiting Model Update*>  2.306510  33.6
553  <gurobi.Var *Awaiting Model Update*>  0.725098  23.0
672  <gurobi.Var *Awaiting Model Update*>  3.821342  50.0

Here the tinc entries are Gurobi decision variables (initially awaiting a model update), and distvct and age are the data for these individuals. The indices (993, 859, …) come from the original dataset’s indexing.

Now we add the objective and the budget constraint. Let’s say the budget B is 0.2 times the number of people (this was stated as 0.2n in the project, meaning on average we can spend 0.2 units per individual; if n = 200, then B = 40). We’ll compute that and add the constraint Σ_i x_i ≤ B:

# Set the total budget B as 0.2 * number of test individuals
budget = 0.2 * test.shape[0]
# Set objective: maximize the sum of the y_i probabilities
m.setObjective(y.sum(), gp.GRB.MAXIMIZE)
# Add budget constraint: total incentive sum <= budget
m.addConstr(x.sum() <= budget, name="budget")
m.update()

At this point, we have an optimization model with:

• one continuous variable x_i per individual (incentive amount),
• one continuous variable y_i per individual (predicted outcome probability),
• an objective to maximize sum(y_i),
• and a linear constraint on sum(x_i).

What’s missing is the link between x_i and y_i — i.e., the constraints that force y_i to equal the logistic model’s prediction given x_i, distance, and age. We handle that next.

Gurobi’s add_predictor_constr function is the key to integrating our trained pipeline pipe into the optimization model. It adds all the constraints necessary to relate the input variables (x, distvct, age) to the output variable (y) according to the machine learning model’s equations.

# Add constraints from the trained ML model (pipeline) to link x, distvct, age to the predicted y
    pred_constr = add_predictor_constr(m, pipe, check, y, output_type="probability_1")
    pred_constr.print_stats()

A few things to note in the call above:

• We pass pipe (our trained sklearn pipeline) and test (the DataFrame of input variables, now containing Gurobi variables for tinc and the fixed distvct and age values).
• We also pass y (the Gurobi variables for the outputs).
• output_type="probability_1" specifies that, for a binary classifier, we want the constraint to produce the probability of class 1 (in our case, the probability that got=1). Gurobi-ML supports returning either the raw prediction, the class, or the probability; here we need the probability.

When add_predictor_constr runs, it effectively creates a set of new internal decision variables and constraints that represent the computations of the pipeline. This includes the scaling transformation and the piecewise-linear approximation of the logistic regression. Because we used our logging subclasses, during this process you’d see messages like:

    Accessing StandardScaler.scale_ attribute  
    Accessing StandardScaler.mean_ attribute
    Accessing LogisticRegression.coef_ attribute
    Accessing LogisticRegression.intercept_ attribute

These indicate that Gurobi-ML is reading the trained model parameters in order to formulate the constraints.

The pred_constr.print_stats() call then prints a summary of the added model:

Model for pipe1:
1200 variables
800 constraints
200 general constraints
Input has shape (200, 3)
Output has shape (200, 1)
Pipeline has 2 steps:
---------------------------------------------------------------------------------
Step          Output Shape   Variables   Constraints   Linear   Quadratic   General
=================================================================================
std_scaler1   (200, 3)       1000        600           0        0           0
log_reg1      (200, 1)       200         200           0        0           200
---------------------------------------------------------------------------------

What does this mean? The pipeline had two steps (the scaler and the logistic regressor):

• The scaler step introduced 1000 new variables and 600 constraints (these come from applying the scaling to each of the three features for 200 data points: essentially linear equations enforcing
  z_i_dist = (distvct_i - mean_dist) / sd_dist,
  and similarly for the other features).
• The logistic regression step introduced 200 variables and 200 general constraints. These “general constraints” are Gurobi’s way of handling the piecewise-linear approximation of the logistic function (since a logistic curve isn’t linear, it is approximated via SOS2 constraints or similar). In total, 1200 extra variables and 1,000 constraints were added to represent the whole pipeline across the 200 individuals.

The result is an optimization model that fully encodes the relationship
y_i ≈ f(x_i, distvct_i, age_i)
as defined by our trained ML model. Now we can let Gurobi do the heavy lifting to find the optimal incentive allocation.

With everything in place, we optimize the model:

# Optimize the model
    m.optimize()

When you run this, Gurobi iterates and finds the optimal solution. The (truncated) solver log might look like:

Optimize a model with 801 rows, 1600 columns and 2200 nonzeros
Model has 200 general constraints
...
Presolved model has 200 SOS constraint(s)
Root relaxation: objective 5.941394e+01
...
Solution count 1: 59.4139
Optimal solution found (tolerance 1.00e-4)
Warning: max constraint violation (6.53e-03) exceeds tolerance

The optimal objective value is about 59.4139. Since our objective was the sum of predicted probabilities, this means the model expects about 59 of the 200 people in the test set to get their results under the optimal incentive allocation. (For comparison, in the original study only about 34% of people, roughly 68 out of 200, learned their status when offered nothing; our budget is not enough to treat everyone, so the optimization concentrates the limited incentives where they raise the expected count the most.)
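To put that number in context, the sketch below re-scores the same test-set individuals with the incentive forced to zero. Rebuilding the raw feature rows via data.loc[test.index] is an assumption about how the frames were kept around in this session; adapt it to your own code:

# Expected number of returns for the test group if nobody were offered an incentive
baseline = data.loc[test.index, ["tinc", "distvct", "age"]].copy()
baseline["tinc"] = 0.0   # switch every incentive off
expected_no_incentive = pipe.predict_proba(baseline)[:, 1].sum()
print(f"Expected returns with zero incentive: {expected_no_incentive:.1f}")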

The warning about constraint violation (~6.5e-3) is due to the piecewise-linear approximation of the logistic function. Essentially, the y_i values might be off by up to 0.0065 (about 0.65%) from the “true” logistic curve. This is a minor approximation error. We can check the maximum approximation error directly:

# Check the maximum approximation error in the logistic regression constraint
max_error = np.max(pred_constr.get_error())
print(f"Maximum error in approximating the regression {max_error:.6f}")

This prints something like Maximum error in approximating the regression 0.006531, confirming the ~0.0065 maximum deviation. This error is small, but if needed one could tighten the approximation (e.g., by adjusting Gurobi’s parameters for piecewise-linear function constraints), as sketched below.
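One way to do that is sketched below: re-solve with a finer piecewise-linear fit by tightening Gurobi’s general function-constraint parameters. Whether these model-level parameters take effect can depend on the gurobi-ml version (newer versions also accept a pwl_attributes argument in add_predictor_constr that sets the corresponding attributes per constraint), so treat the exact knobs as assumptions to verify:

# Ask Gurobi for a finer piecewise-linear approximation, then re-solve
m.Params.FuncPieces = -1          # size the pieces from an absolute error target
m.Params.FuncPieceError = 1e-4    # tighter than the ~6.5e-3 violation seen above
m.optimize()
print(f"Max approximation error after tightening: {np.max(pred_constr.get_error()):.6f}")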

Now, let’s retrieve the optimized incentive values x_i. We expect many of them to be zero (not everyone gets paid) and some to be at the maximum of 3 (the best use of the budget for the most “persuadable” individuals). We also ensure none are negative (they shouldn’t be, but due to the tiny approximation error, some variables might come out as very small negative numbers like -1e-6, which we can treat as 0):

# Get the optimized incentive values and floor away small numerical artifacts
tinc_solution = pred_constr.input_values['tinc']  # pandas Series of optimal x_i
tinc_solution = tinc_solution.apply(lambda x: 0 if x < 0 else np.floor(x * 1e5) / 1e5)
    print(tinc_solution.head())

After this, tinc_solution holds each individual’s incentive in the optimal plan (rounded down to 5 decimal places to clean up floating-point quirks). Finally, let’s verify that the budget constraint is satisfied:

print(tinc_solution.sum() <= budget)  # This should output True

Indeed, the total amount of incentives used will be within the budget (likely exactly equal to it, or very close, since we expect the budget to be fully used at the optimum).

At this point, one could further analyze the solution — for example, how many people got the maximum incentive versus none, or which characteristics made someone more likely to receive an incentive. Those with moderate distances and ages that made them “persuadable” likely got the incentive, while those very close (who might go anyway) or very far (who might not go even with an incentive) might not get any in the optimal plan. Such analysis can be done by examining the tinc_solution values alongside the corresponding features, as sketched below.
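A minimal version of that analysis is sketched below, assuming tinc_solution kept the test-set index (which the gurobipy-pandas workflow above implies):

# How the budget was spread, and average traits of treated vs. untreated people
plan = pd.DataFrame({
    "tinc": tinc_solution,
    "distvct": data.loc[tinc_solution.index, "distvct"],
    "age": data.loc[tinc_solution.index, "age"],
})
print("People given no incentive:    ", (plan["tinc"] <= 1e-6).sum())
print("People given the maximum of 3:", (plan["tinc"] >= 3 - 1e-3).sum())
print(plan.groupby(plan["tinc"] > 1e-6)[["distvct", "age"]].mean())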

For our purposes, we have demonstrated the core idea: using Gurobi to optimally allocate treatment based on a learned causal model. We have effectively solved a constrained uplift optimization problem — something uplift modeling alone cannot do directly.

The integration of machine learning predictions and mathematical optimization shows great promise for causal decision-making in specific use cases. Traditional uplift modeling can identify who is influenced by a treatment, but it doesn’t tell you how to optimally deploy that treatment under real-world constraints (e.g., limited budget, capacity, and so on). In our example, instead of just predicting who would respond to an incentive, we formulated an optimization to decide exactly who should receive an incentive to maximize overall success. This addresses questions like “Who should we treat, given that we can only afford to treat X people?”, which uplift models alone leave unanswered.

By leveraging Gurobi Machine Learning, we were able to incorporate a nontrivial prediction model (logistic regression) directly into an optimization model. The solver then figured out the best treatment plan — akin to deciding which students to offer scholarships to, or which patients to give an intervention — while respecting constraints. This approach provides a more complete answer to causal inference questions that involve decision-making, beyond analysis alone.

To sum up, this tutorial demonstrated how to turn predictive insights into optimal decisions. We showed how a causal inference problem (maximizing outcomes under constraints) can be solved by embedding an ML model into an optimization problem. This approach can be applied in many domains: marketing (whom to target for a campaign), healthcare (whom to treat or screen), economics (whom to subsidize), and beyond. As tools like Gurobi Machine Learning mature, they open the door for data scientists and operations researchers to collaborate on decision problems that lie at the intersection of AI and optimization.

The full code for this demo is available in the project’s repository, and we encourage you to explore it and experiment with your own scenarios. With a bit of modeling creativity, you can start answering “What should we do?” — not just “What do we predict?” — using the power of optimization.


