    Machine Learning

    Differentially Private Gradient Flow based on the Sliced Wasserstein Distance | by Criteo R&D | Mar, 2025

By Team_AIBS News · March 5, 2025 · 4 Mins Read


Novel differentially private model using gradient flows defined on an optimal transport metric.

    Criteo Tech Blog

This Research Card introduces a novel, theoretically grounded method for differentially private generative modeling by leveraging a simple mathematical process, achieving high-fidelity data generation with strong privacy guarantees and lower computational costs compared to traditional approaches.

Image by Richard Horvath on Unsplash
• Title: Differentially Private Gradient Flow based on the Sliced Wasserstein Distance
• Short Title: Novel differentially private model using gradient flows defined on an optimal transport metric.
• Authors: Ilana SEBAG, Muni Sreenivas Pydi, Jean-Yves Franceschi, Alain Rakotomamonjy, Mike Gartrell, Jamal Atif, Alexandre Allauzen
• Team: RSC.FDL. Collaboration with: Miles Team, LAMSADE, Université Paris-Dauphine, PSL University, CNRS and ESPCI PSL.
• Status: Published at TMLR (01/2025)
• Category: Privacy, Generative Modeling, Gradient Flows.

Safeguarding data has become essential in this era of widespread AI adoption. Generative modeling, in particular, poses unique challenges because of its ability to learn and replicate intricate data distributions, which risks exposing sensitive information from the original dataset if the model is trained without any privacy component. While existing approaches like adding noise to the gradient (DP-SGD) or using differentially private losses for generator-based methods are effective, they face limitations in balancing three key aspects:

• Privacy (How well is the data protected from privacy attacks?),
• Fidelity (How realistic and high-quality is the data generated by the model?),
• Computational efficiency (How much computation and resources are required to train the model?).
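For context, the DP-SGD baseline mentioned above protects privacy by clipping each per-example gradient and adding calibrated Gaussian noise to the average. A minimal NumPy sketch of one such update follows; the clip norm, noise multiplier, and learning rate here are illustrative defaults, not values from the paper:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1,
                rng=np.random.default_rng(0)):
    """One DP-SGD update: clip each per-example gradient to `clip_norm`,
    average, then add Gaussian noise scaled to the clipping bound."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise std follows the Gaussian-mechanism calibration: proportional
    # to the sensitivity (clip_norm) and inversely to the batch size.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=avg.shape)
    return params - lr * (avg + noise)
```

The clipping bounds each example's influence (its sensitivity), which is what makes the added Gaussian noise yield a quantifiable privacy guarantee.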

By introducing a novel differentially private algorithm based on gradient flows and the Gaussian-smoothed Sliced Wasserstein Distance, we aim to lower data leakage while achieving high-fidelity data generation under low privacy budgets and reduced computational costs. This principled alternative addresses unexplored areas in privacy-preserving generative modeling, advancing the field toward more responsible AI development.

In this work, we present a novel theoretical framework for a differentially private gradient flow of the sliced Wasserstein distance (SWD), which has not previously been explored as a way to guarantee differential privacy for generative AI models. Our approach involves defining the gradient flow on the smoothed SWD. Although the Gaussian smoothing technique appears simple, it introduces significant theoretical challenges, notably regarding the existence and regularity of the gradient flow solution.
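To make the central object concrete, the sliced Wasserstein distance compares two point clouds through many random 1-D projections, and the Gaussian-smoothed variant perturbs those projections with Gaussian noise before comparing them. The following is a minimal Monte Carlo sketch of that idea, not the paper's implementation; it assumes equal-sized samples so the 1-D distance reduces to comparing sorted projections:

```python
import numpy as np

def gaussian_smoothed_swd(x, y, n_projections=100, sigma=1.0,
                          rng=np.random.default_rng(0)):
    """Monte Carlo estimate of a Gaussian-smoothed sliced Wasserstein-2
    distance between equal-sized point clouds x and y (shape (n, d)).
    Each random 1-D projection is perturbed with N(0, sigma^2) noise
    before the 1-D Wasserstein distance is computed."""
    d = x.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)          # uniform direction on sphere
        px = np.sort(x @ theta + rng.normal(0.0, sigma, size=len(x)))
        py = np.sort(y @ theta + rng.normal(0.0, sigma, size=len(y)))
        total += np.mean((px - py) ** 2)        # squared 1-D W2 via quantiles
    return np.sqrt(total / n_projections)
```

Slicing is what keeps the method computationally cheap: each projection only requires a sort, avoiding the expensive high-dimensional optimal transport problem.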

To address these complexities, we establish the "continuity equation" for our new gradient flow of the smoothed SWD, resulting in a smoothed velocity field that governs how the generated data is privately produced. This allows us to discretize the continuity equation from the previous step into a Stochastic Differential Equation (SDE) that ensures the gradient flow maintains differential privacy. Notably, we show that after discretization, the smoothing process in the drift term functions as a Gaussian mechanism, guaranteeing that the privacy budget is rigorously tracked throughout the process.
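Schematically, the discretized SDE evolves a cloud of generated particles by one Euler–Maruyama step at a time: a drift term plus injected Gaussian noise. The sketch below is purely illustrative; `drift_fn` is a placeholder standing in for the paper's smoothed-SWD velocity field, which we do not reproduce here:

```python
import numpy as np

def sde_step(particles, drift_fn, step_size=0.01, noise_scale=0.1,
             rng=np.random.default_rng(0)):
    """One Euler-Maruyama step of a discretized SDE: particles move along
    a drift field plus Gaussian noise. In the paper's setting, the Gaussian
    perturbation in the drift acts as a Gaussian mechanism, which is what
    allows the privacy budget to be tracked across iterations."""
    drift = drift_fn(particles)
    noise = rng.normal(0.0, np.sqrt(step_size) * noise_scale,
                       size=particles.shape)
    return particles + step_size * drift + noise
```

For instance, with the toy drift `lambda p: -p` the particles contract toward the origin while the noise keeps each step randomized.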

On the theoretical front, our contribution is significant as we prove, for the first time in the literature, the existence and regularity of the gradient flow for the Gaussian-smoothed Sliced Wasserstein distance (GSW). The proof techniques we use, inspired by earlier works, require extensive modification to handle the unique characteristics of the GSW. This novel theoretical result lays the foundation for future work on differentially private gradient flows, opening the door to new possibilities and improvements in privacy-preserving AI.

From an experimental standpoint, we show that our proposed approach outperforms the baseline DPSWgen model, which uses a generator-based architecture with the differentially private Sliced Wasserstein loss, across various privacy budgets (levels of privacy). Our method not only achieves better FID scores but also generates higher-quality samples, demonstrating the practical viability and superior performance of our approach in safeguarding privacy while producing high-fidelity generative models.

FID results for each baseline, dataset, and privacy setting, averaged over 5 generation runs.
Generated images from DPSWflow-r (top row) and DPSWgen (bottom row) for MNIST, FashionMNIST, and CelebA with no DP: ε = ∞.
Generated images from DPSWflow-r (top row) and DPSWgen (bottom row) for MNIST, FashionMNIST, and CelebA with DP: ε = 10.


