
    Optimizing Multi-Objective Problems with Desirability Functions

    By Team_AIBS News | May 21, 2025


    In Data Science, it isn’t unusual to come across problems with competing objectives. Whether designing products, tuning algorithms, or optimizing portfolios, we often need to balance several metrics to get the best possible outcome. Sometimes, maximizing one metric comes at the expense of another, making it hard to reach an overall optimized solution.

    While several approaches exist for solving multi-objective optimization problems, I found desirability functions to be both elegant and easy to explain to non-technical audiences, which makes them an interesting option to consider. Desirability functions combine several metrics into a standardized score, allowing for a holistic optimization.

    In this article, we’ll explore:

    • The mathematical foundation of desirability functions
    • How to implement these functions in Python
    • How to optimize a multi-objective problem with desirability functions
    • Visualizations for interpreting and explaining the results

    To ground these concepts in a real example, we’ll apply desirability functions to optimize bread baking: a toy problem with a few interconnected parameters and competing quality objectives that will allow us to explore several optimization choices.

    By the end of this article, you’ll have a powerful new tool in your data science toolkit for tackling multi-objective optimization problems across numerous domains, as well as fully functional code available here on GitHub.

    What are Desirability Functions?

    Desirability functions were first formalized by Harrington (1965) and later extended by Derringer and Suich (1980). The idea is to:

    • Transform each response into a performance score between 0 (absolutely unacceptable) and 1 (the ideal value)
    • Combine all scores into a single metric to maximize

    Let’s explore the types of desirability functions and then how we can combine all the scores.

    The different types of desirability functions

    There are three different desirability functions, which together can handle many situations.

    • Smaller-is-better: Used when minimizing a response is desirable
    def desirability_smaller_is_better(x: float, x_min: float, x_max: float) -> float:
        """Calculate desirability function value where smaller values are better.

        Args:
            x: Input parameter value
            x_min: Minimum acceptable value
            x_max: Maximum acceptable value

        Returns:
            Desirability score between 0 and 1
        """
        if x <= x_min:
            return 1.0
        elif x >= x_max:
            return 0.0
        else:
            return (x_max - x) / (x_max - x_min)
    • Larger-is-better: Used when maximizing a response is desirable
    def desirability_larger_is_better(x: float, x_min: float, x_max: float) -> float:
        """Calculate desirability function value where larger values are better.

        Args:
            x: Input parameter value
            x_min: Minimum acceptable value
            x_max: Maximum acceptable value

        Returns:
            Desirability score between 0 and 1
        """
        if x <= x_min:
            return 0.0
        elif x >= x_max:
            return 1.0
        else:
            return (x - x_min) / (x_max - x_min)
    • Target-is-best: Used when a specific target value is optimal
    def desirability_target_is_best(x: float, x_min: float, x_target: float, x_max: float) -> float:
        """Calculate two-sided desirability function value with a target value.

        Args:
            x: Input parameter value
            x_min: Minimum acceptable value
            x_target: Target (optimal) value
            x_max: Maximum acceptable value

        Returns:
            Desirability score between 0 and 1
        """
        if x_min <= x <= x_target:
            return (x - x_min) / (x_target - x_min)
        elif x_target < x <= x_max:
            return (x_max - x) / (x_max - x_target)
        else:
            return 0.0
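    As a quick sanity check, here are compact equivalents of the three functions above (the same formulas, written with clamping), evaluated at the bounds and at the target:

```python
# Compact equivalents of the three desirability functions above,
# used only to sanity-check their behavior.
def d_smaller(x: float, lo: float, hi: float) -> float:
    # Smaller-is-better, clamped to [0, 1]
    return min(1.0, max(0.0, (hi - x) / (hi - lo)))

def d_larger(x: float, lo: float, hi: float) -> float:
    # Larger-is-better, clamped to [0, 1]
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def d_target(x: float, lo: float, target: float, hi: float) -> float:
    # Target-is-best: ramps up to the target, then back down
    if lo <= x <= target:
        return (x - lo) / (target - lo)
    if target < x <= hi:
        return (hi - x) / (hi - target)
    return 0.0

print(d_smaller(30, 30, 180))    # 1.0 (at or below the lower bound)
print(d_larger(105, 30, 180))    # 0.5 (halfway between the bounds)
print(d_target(24, 20, 24, 28))  # 1.0 (exactly on target)
```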

    Each input parameter can be parameterized with one of these three desirability functions, before combining them into a single desirability score.

    Combining Desirability Scores

    Once the individual metrics are transformed into desirability scores, they need to be combined into an overall desirability. The most common approach is the weighted geometric mean:

    D = (d1^w1 × d2^w2 × … × dn^wn)^(1 / (w1 + w2 + … + wn))

    where the di are the individual desirability values and the wi are weights reflecting the relative importance of each metric.

    The geometric mean has an important property: if any single desirability is 0 (i.e. completely unacceptable), the overall desirability is also 0, regardless of the other values. This enforces that all requirements must be met to some extent.

    import numpy as np

    def overall_desirability(desirabilities, weights=None):
        """Compute overall desirability using a weighted geometric mean

        Parameters:
        -----------
        desirabilities : list
            Individual desirability scores
        weights : list
            Weights for each desirability

        Returns:
        --------
        float
            Overall desirability score
        """
        if weights is None:
            weights = [1] * len(desirabilities)

        # Convert to numpy arrays
        d = np.array(desirabilities)
        w = np.array(weights)

        # Calculate the weighted geometric mean
        return np.prod(d ** w) ** (1 / np.sum(w))
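    The veto property described above is easy to verify; this self-contained snippet repeats the function so it runs on its own:

```python
import numpy as np

def overall_desirability(desirabilities, weights=None):
    """Weighted geometric mean of individual desirability scores."""
    if weights is None:
        weights = [1] * len(desirabilities)
    d = np.array(desirabilities)
    w = np.array(weights)
    return np.prod(d ** w) ** (1 / np.sum(w))

# One completely unacceptable metric (score 0) zeroes out the overall
# desirability, no matter how good the other metrics are.
print(overall_desirability([0.9, 0.8, 0.0]))  # 0.0
# With equal weights this is the plain geometric mean.
print(overall_desirability([0.25, 1.0]))      # 0.5
```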

    The weights are hyperparameters that give leverage over the final result and leave room for customization.

    A Practical Optimization Example: Bread Baking

    To demonstrate desirability functions in action, let’s apply them to a toy problem: a bread baking optimization problem.

    The Parameters and Quality Metrics

    Let’s play with the following parameters:

    1. Fermentation Time (30–180 minutes)
    2. Fermentation Temperature (20–30°C)
    3. Hydration Level (60–85%)
    4. Kneading Time (0–20 minutes)
    5. Baking Temperature (180–250°C)

    And let’s try to optimize these metrics:

    1. Texture Quality: the texture of the bread
    2. Flavor Profile: the flavor of the bread
    3. Practicality: the practicality of the whole process

    Of course, each of these metrics depends on more than one parameter. So here comes one of the most critical steps: mapping parameters to quality metrics.

    For each quality metric, we need to define how parameters influence it:

    from typing import List

    def compute_flavor_profile(params: List[float]) -> float:
        """Compute flavor profile score based on input parameters.

        Args:
            params: List of parameter values [fermentation_time, ferment_temp, hydration,
                   kneading_time, baking_temp]

        Returns:
            Weighted flavor profile score between 0 and 1
        """
        # Flavor is primarily affected by the fermentation parameters
        fermentation_d = desirability_larger_is_better(params[0], 30, 180)
        ferment_temp_d = desirability_target_is_best(params[1], 20, 24, 28)
        hydration_d = desirability_target_is_best(params[2], 65, 75, 85)

        # Baking temperature has minimal effect on flavor
        weights = [0.5, 0.3, 0.2]
        return np.average([fermentation_d, ferment_temp_d, hydration_d],
                          weights=weights)

    Here, for example, the flavor is influenced by the following:

    • The fermentation time, with minimal desirability below 30 minutes and maximal desirability above 180 minutes
    • The fermentation temperature, with desirability peaking at 24 degrees Celsius
    • The hydration, with desirability peaking at 75% humidity

    These computed desirabilities are then averaged with weights to return the flavor desirability. Similar computations are made for the texture quality and the practicality.
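    The texture and practicality mappings follow the same pattern but are not shown in the article; purely as an illustration, a hypothetical texture mapping (the parameter choices, targets, and weights below are my assumptions, not the author’s) could look like this:

```python
import numpy as np

def desirability_target_is_best(x: float, x_min: float, x_target: float, x_max: float) -> float:
    """Two-sided desirability, repeated so this snippet runs on its own."""
    if x_min <= x <= x_target:
        return (x - x_min) / (x_target - x_min)
    elif x_target < x <= x_max:
        return (x_max - x) / (x_max - x_target)
    return 0.0

def compute_texture_quality(params):
    """Hypothetical texture mapping: kneading time and baking
    temperature dominate, hydration plays a smaller role."""
    kneading_d = desirability_target_is_best(params[3], 0, 12, 20)
    baking_d = desirability_target_is_best(params[4], 180, 220, 250)
    hydration_d = desirability_target_is_best(params[2], 60, 70, 85)
    return np.average([kneading_d, baking_d, hydration_d],
                      weights=[0.5, 0.3, 0.2])

# 12 min kneading, 220°C bake, 70% hydration: every term on target
print(compute_texture_quality([120, 24, 70, 12, 220]))  # 1.0
```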

    The Objective Function

    Following the desirability function approach, we’ll use the overall desirability as our objective function. The goal is to maximize this overall score, which means finding parameters that best satisfy all three requirements simultaneously:

    def objective_function(params: List[float], weights: List[float]) -> float:
        """Compute overall desirability score based on individual quality metrics.

        Args:
            params: List of parameter values
            weights: Weights for the texture, flavor and practicality scores

        Returns:
            Negative overall desirability score (for minimization)
        """
        # Compute individual desirability scores
        texture = compute_texture_quality(params)
        flavor = compute_flavor_profile(params)
        practicality = compute_practicality(params)

        # Ensure the weights sum to one
        weights = np.array(weights) / np.sum(weights)

        # Calculate overall desirability using the geometric mean
        overall_d = overall_desirability([texture, flavor, practicality], weights)

        # Return the negative value since we want to maximize desirability
        # but optimization functions typically minimize
        return -overall_d

    After computing the individual desirabilities for texture, flavor and practicality, the overall desirability is simply computed with a weighted geometric mean. The function finally returns the negative overall desirability, so that it can be minimized.

    Optimization with SciPy

    We finally use SciPy’s minimize function to find the optimal parameters. Since we return the negative overall desirability as the objective function, minimizing it maximizes the overall desirability:

    from scipy.optimize import minimize

    def optimize(weights: list[float]) -> list[float]:
        # Define parameter bounds
        bounds = {
            'fermentation_time': (30, 180),
            'fermentation_temp': (20, 30),
            'hydration_level': (60, 85),
            'kneading_time': (0, 20),
            'baking_temp': (180, 250)
        }

        # Initial guess (middle of bounds)
        x0 = [(b[0] + b[1]) / 2 for b in bounds.values()]

        # Run the optimization
        result = minimize(
            objective_function,
            x0,
            args=(weights,),
            bounds=list(bounds.values()),
            method='SLSQP'
        )

        return result.x

    In this function, after defining the bounds for each parameter, the initial guess is computed as the middle of the bounds and then given as input to SciPy’s minimize function. The result is finally returned.
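    The negate-and-minimize pattern can be seen in isolation on a toy one-dimensional objective (unrelated to the bread model):

```python
import numpy as np
from scipy.optimize import minimize

# Toy desirability: a single bump peaking at x = 3 on [0, 10].
def toy_desirability(x):
    return np.exp(-(x[0] - 3.0) ** 2)

# Minimizing the negated desirability maximizes the desirability itself.
result = minimize(lambda x: -toy_desirability(x),
                  x0=[5.0], bounds=[(0, 10)], method='SLSQP')
print(result.x[0])  # close to 3.0
```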

    The weights are also given as input to the optimizer, and are a good way to customize the output. For example, with a larger weight on practicality, the optimized solution will favor practicality over flavor and texture.
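    Note that because the objective function normalizes the weights to sum to one, only their ratios matter:

```python
import numpy as np

# [3, 1, 1] and [0.6, 0.2, 0.2] express the same preference profile
# once normalized to sum to one inside the objective.
w = np.array([3.0, 1.0, 1.0])
print(w / np.sum(w))  # [0.6 0.2 0.2]
```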

    Let’s now visualize the results for a few sets of weights.

    Visualization of Results

    Let’s see how the optimizer handles different preference profiles, demonstrating the flexibility of desirability functions given various input weights.

    Let’s take a look at the results in the case of weights favoring practicality:

    Optimized parameters with weights favoring practicality. Image by author.

    With weights largely in favor of practicality, the achieved overall desirability is 0.69, with a short kneading time of 5 minutes, since a high value negatively impacts the practicality.

    Now, if we optimize with an emphasis on texture, we get slightly different results:

    Optimized parameters with weights favoring texture. Image by author.

    In this case, the achieved overall desirability is 0.85, significantly higher. The kneading time is this time 12 minutes, as a higher value positively impacts the texture and isn’t penalized much by the practicality.

    Conclusion: Practical Applications of Desirability Functions

    While we focused on bread baking as our example, the same approach can be applied to various domains, such as product formulation in cosmetics or resource allocation in portfolio optimization.

    Desirability functions provide a powerful mathematical framework for tackling multi-objective optimization problems across numerous data science applications. By transforming raw metrics into standardized desirability scores, we can effectively combine and optimize disparate objectives.

    The key advantages of this approach include:

    • Standardized scales that make different metrics comparable and easy to combine into a single objective
    • Flexibility to handle different types of objectives: minimize, maximize, target
    • Clear communication of preferences through mathematical functions

    The code presented here provides a starting point for your own experimentation. Whether you’re optimizing industrial processes, machine learning models, or product formulations, desirability functions hopefully offer a systematic approach to finding the best compromise among competing objectives.


