    Introducing Hoeffding’s Inequality for creating Storage-less Decision Trees


    The Process

Imagine that you have a huge labeled dataset and you want to build a model for a prediction task. This could be Twitter, for example, where you have more Tweets (the features) than you can count, together with the corresponding number of likes (the labels). Now you want to build a model that can predict whether a tweet will be liked more than 100 times or not, trained on all tweets written in the last year, i.e. we want to solve a classification task. A label of 1 means that the tweet has more than 100 likes, 0 otherwise.

    Photo by Sara Kurfeß on Unsplash

You decide to use a Decision Tree, or even something smart derived from it, like a Random Forest or Gradient Boosting. However, Decision Trees, models based on them, and even other models share the following drawback:

You need the training data available in memory.

The Problem

In the best case, the entire training data fits into your local machine's memory. However, you realize that the tweets dataset is larger than 8–32 GB, so you are out of luck. Maybe you have access to a cluster with 512 GB of RAM, but that is not large enough either.


How large is the dataset actually? Let us do a rough estimation. Twitter itself and several other sources ([here](https://www.dsayce.com/social-media/tweets-day/) and here) report that there are around 6,000 Tweets per second. This means around 6,000 · 60² · 24 · 365 = 189,216,000,000 tweets per year. Let us assume that each tweet has a size of 140 bytes, one byte for each character in the bag-of-words encoding. We ignore that each tweet might have some metadata attached and that we could also use bigrams, trigrams, and so on. This is a whopping 140 · 189,216,000,000 / 10⁹ = 26,490 GB of tweet data for a single year!
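
If you like to double-check such back-of-the-envelope numbers, here is the same estimate in a few lines of Python (the 6,000 tweets per second and 140 bytes per tweet are just the rough assumptions from above):

tweets_per_second = 6_000           # reported average rate
seconds_per_year = 60**2 * 24 * 365
bytes_per_tweet = 140               # one byte per character, no metadata

tweets_per_year = tweets_per_second * seconds_per_year
total_gb = tweets_per_year * bytes_per_tweet / 10**9
print(f'{tweets_per_year:,} tweets, about {total_gb:,.0f} GB per year')
# 189,216,000,000 tweets, about 26,490 GB per year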

So, using RAM is not an option. And even if you happen to own a hard disk large enough to fit the whole dataset, reading the data from it would make the training awfully slow, as you can see here (Figure 3).

Well, what to do?

The Solution

Pedro Domingos and Geoff Hulten from the Department of Computer Science & Engineering of the University of Washington introduced a variant of Decision Trees – called Hoeffding Trees [1] – that can be used in a streaming fashion. This means that we have to parse the large training dataset only once and build the tree along the way.

We do not even have to store the samples used: We can grab them directly from Twitter (or any large database), process them by increasing some counters and forget about them again. Why this is possible can be explained using Hoeffding's Inequality, which gives the Hoeffding Trees their name.

The high-level idea is that we do not have to look at all the samples, but only at a sufficiently large random subset at each splitting point in the Decision Tree algorithm. How large this subset has to be is the topic of the next section.

I am writing this article since Domingos' and Hulten's paper is quite technical (and therefore, also precise) and I want to present a high-level, easy-to-understand view of why the authors' method works.


Also, if you do not want to deal with the math in the next sections, check out the last section for some code at least! There, I am using the package scikit-multiflow to work with Hoeffding Trees.

    Hoeffding’s Inequality

Let us examine what Hoeffding's Inequality says and how we can use it to solve the storage problem.

    Introduction

Wassily Hoeffding, a Finnish statistician and one of the founders of nonparametric statistics, discovered (building upon the ideas of Herman Chernoff) an inequality [2] quantifying the following statement:

The sum (and also the mean) of bounded random variables is tightly concentrated around its expected value.

Take a fair coin (probability of seeing heads = 0.5) that we flip 1000 times, for example. Define random variables X₁, …, X₁₀₀₀ with Xᵢ = 1 if the i-th flip shows heads and Xᵢ = 0 otherwise.

Then the number of heads – let us call it X – is just the sum of the Xᵢ's. We already know that we can expect 500 times heads since 𝔼(X) = 𝔼(X₁) + … + 𝔼(X₁₀₀₀) = 1000 · 0.5 = 500.

    The expectation of the Xᵢ's is the probability of them being equal to 1, which is 0.5 in our case. Furthermore, the expected value is linear, justifying the first equation.

Now Hoeffding's Inequality has us covered: We also know that X will not deviate much from 500 with high probability, see the illustration below. We will see what "much" and "with high probability" mean in a second.

    You can see how concentrated the probability mass is around 500. The probability of seeing exactly 500 times heads is about 0.025.

    The simplified Theorem

Let X₁, …, Xₙ be independent Bernoulli random variables, i.e. each of them takes the values 0 or 1. Let X = X₁ + … + Xₙ be their sum. Then

P(|X − 𝔼(X)| ≤ t) ≥ 1 − 2 · exp(−2t²/n)
Hoeffding's Inequality for sums.

Note: A more general form holds, where the Xᵢ's can be bounded between any real numbers a and b and do not even have to be stochastically independent. But we work with the version presented here.

Intuitively, the inequality makes sense: The larger t gets, i.e. the larger we allow the error to become, the more the probability of landing inside the interval of length 2t increases.

But the really interesting thing is the following: The right-hand side only involves the number of random variables n and the allowed error t. The most important quantity, the probability of the Xᵢ's being equal to 1, is nowhere to be found on the right-hand side of the inequality. The probabilities can be known or unknown to us, they can be the same for all i or not, it just does not matter. This is what makes this inequality so versatile and great to work with.
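
A tiny simulation illustrates this point (the values n = 1000, t = 50 and the three biases are arbitrary choices for this sketch): no matter which p we plug in, the empirical concentration is at least as good as the bound, and the bound itself never changes.

import numpy as np

rng = np.random.default_rng(0)
n, t, trials = 1000, 50, 20000
bound = 1 - 2 * np.exp(-2 * t**2 / n)   # right-hand side: only n and t appear
print(f'Hoeffding bound for n={n}, t={t}: {bound:.4f}')

for p in [0.2, 0.5, 0.9]:               # the (possibly unknown) bias does not enter the bound
    X = rng.binomial(n, p, size=trials)            # many sums of n Bernoulli(p) variables
    inside = np.mean(np.abs(X - n * p) <= t)       # empirical P(|X - E(X)| <= t)
    print(f'p={p}: empirically {inside:.4f} >= bound {bound:.4f}')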

Other inequalities with this property are the simpler Markov's Inequality and Chebyshev's Inequality, but the lower bounds they give us on the right-hand side are much worse. To be fair, these two inequalities do not require our random variables to be bounded. Hoeffding's Inequality, however, uses the fact that the Xᵢ's are bounded between 0 and 1.


Taking the inequality, replacing t with nt, and dividing the inequality inside P(…) by n yields Hoeffding's Inequality for the mean:

P(|X̅ − 𝔼(X̅)| ≤ t) ≥ 1 − 2 · exp(−2nt²)
Hoeffding's Inequality for the mean.

where X̅ = X/n = (X₁ + … + Xₙ)/n is the mean of the Xᵢ's.

Applications

Let us see the inequalities in action now. First, we will talk about a very basic coin example. Then we will move on to the second, more important example of estimating shares, which we will need to understand why Hoeffding Trees work.

Back to the Coin Example (Sum)

Let's say that we want to bound the probability of getting less than or equal to 450 or greater than or equal to 550 heads, i.e. of deviating from the expected value by at least 50. Since 𝔼(X) = 500, we can set t = 50 and end up with

P(|X − 500| ≤ 50) ≥ 1 − 2 · exp(−2 · 50²/1000) = 1 − 2 · e⁻⁵ ≈ 0.987,

i.e. the probability of such a large deviation is at most about 1.3%.

This means that the probability of deviating less than 50 from the expected value is very high. The number of heads is extremely concentrated around 500.
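
If you are curious how conservative this bound is, a quick check against the exact binomial distribution (assuming scipy is installed) shows that the true concentration is even stronger:

from math import exp
from scipy.stats import binom

n, p, t = 1000, 0.5, 50
bound = 1 - 2 * exp(-2 * t**2 / n)                               # ≈ 0.9865
exact = binom.cdf(500 + t, n, p) - binom.cdf(500 - t - 1, n, p)  # P(450 <= X <= 550) ≈ 0.999
print(f'Hoeffding bound: {bound:.4f}, exact binomial probability: {exact:.4f}')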

Estimating Shares (Mean)

Assume that there is a large pool of N balls, an unknown share p of them being white and the rest being black. A natural question might be: How large is the share p of white balls? If the pool size N is large enough (think of several billion), picking up every ball and checking it is not an option.

Of course, the natural thing to do is to uniformly sample a few – let's say n – balls with replacement and check them. If w of them are white, we would like to conclude that p ≈ w/n. But how large should our subsample be? What is a good size for n? Is 10 okay? 100? 1000?

Well, it depends on how much deviation from p you are fine with and how confident you want to be of staying within this deviation. You can set two parameters:

• an upper bound for the error on p, let's call it t again
• a lower bound for the probability of being within the interval centered around p with length 2t, let's call it 1−ɛ for a small ɛ > 0.

This means that t and ɛ are fixed, and we want to find a number n of samples such that with probability at least 1−ɛ the fraction w/n does not differ from p by more than t.


We can formalize this now. Let us define n random variables X₁, …, Xₙ with Xᵢ = 1 if the i-th drawn ball is white and Xᵢ = 0 otherwise.

Since we randomly draw balls with replacement, the probability of Xᵢ being 1 is exactly p, the unknown share. This is also exactly the expected value of Xᵢ.

Let X̅ = w/n be the mean of these Bernoulli variables again. Then we also have 𝔼(X̅) = p, and Hoeffding's Inequality gives us

P(|w/n − p| ≤ t) ≥ 1 − 2 · exp(−2nt²).

Now we want the right-hand side to be at least 1−ɛ, which guarantees that X̅ = w/n deviates from p by no more than t with a probability of at least 1−ɛ. Thus, we have to solve the inequality

1 − 2 · exp(−2nt²) ≥ 1 − ɛ

for n. Using some elementary operations, we get

n ≥ ln(2/ɛ) / (2t²).


If we tolerate an absolute error of at most 1% from p with a probability of at least 99%, we have to set t = 0.01 and ɛ = 0.01, giving us a required subsample size of at least

n ≥ ln(2/0.01) / (2 · 0.01²) = ln(200) / 0.0002 ≈ 26,492.

Let's try it out in Python. We consider a pool of 30,000,000 balls, 10,000,000 of them being white (labeled 1) and 20,000,000 being black (labeled 0). So the real share of white balls is 0.3333…, which we pretend we do not know.

import numpy as np

np.random.seed(0)

# Create a pool with 30 million entries, 10 million being a one (white ball) and 20 million being a zero (black ball).
# This means we have a share of 1/3 = 0.3333... white balls.
pool = 20000000 * [0] + 10000000 * [1]

# Set the Hoeffding parameters and compute the required subsample size.
t = 0.01
eps = 0.01
hoeffding_amount = int(np.ceil(np.log(2 / eps) / (2 * t ** 2)))

# Sample with replacement and estimate the share of white balls.
subsample = np.random.choice(pool, size=hoeffding_amount, replace=True)
print(subsample.mean())
# Result: 0.32975992752529065

Looks good! The error is only about 0.0035 < 0.01 = t. But the probability for this to happen was at least 99% anyway, so why are we even celebrating? 😉

Note that instead of sampling, saving, and then taking the mean of the 26,492 samples, we can simply sample, increase a counter each time we have drawn a white ball (a one) and then forget about the sample again. This way we only have to keep track of the counter and the total number of balls we have looked at, making this a very memory-efficient algorithm (logarithmic in our subsample size n).
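
A minimal sketch of this counting idea (the generator here just stands in for any stream or database cursor; the names are made up for illustration):

import numpy as np
from itertools import count

def estimate_share(stream, n):
    # Two counters are all we store: white balls seen and total balls seen.
    whites, total = 0, 0
    for ball in stream:
        whites += ball
        total += 1
        if total == n:      # stop once the Hoeffding sample size is reached
            break
    return whites / total

rng = np.random.default_rng(0)
ball_stream = (int(rng.random() < 1 / 3) for _ in count())   # endless stream with P(white) = 1/3
print(estimate_share(ball_stream, 26492))                    # ≈ 1/3, without storing any samples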


In total, we can say that a subsample of size around 26,500 is enough for determining the share of white balls with high confidence.

Going back to Machine Learning

In the last example, we have seen how to compute the share of white balls in a large pool of black and white balls without checking the whole pool, and without a large error with high probability.

We will use this knowledge to train a Decision Tree without the need of having a large amount of training data available in memory, the problem we wanted to solve in the first place.

But first, we have to recap how a Decision Tree is built the usual way.

Building a Decision Tree the usual Way

I will not go into much detail here about how the (better to say: any) Decision Tree algorithm works. There are many great videos and articles about that on the web and here on Medium.

But if the whole training dataset fits into our memory, this is usually what happens in the first step of the algorithm:

We calculate a measure of impurity I₁ of the labels of the whole dataset, e.g. the Shannon Entropy for classification tasks, which we will also use now.

I(p) = −p · log₂(p) − (1 − p) · log₂(1 − p)
The formula of the Shannon Entropy. p is the share of samples labeled 1. If p is 0 or 1, meaning that the samples all have the same label, the impurity is 0. The entropy function (=impurity) takes its maximum when exactly half of the samples are labeled 1 and the other half labeled 0, which corresponds to p=0.5. As you can see, the formula is symmetric, i.e. I(p)=I(1−p), which means that we can also consider the share of black balls instead of white balls.
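
As a direct transcription of this formula (not part of any library, just a small helper used again below):

import numpy as np

def entropy(p):
    # Shannon entropy of a node in which a share p of the samples has label 1.
    if p in (0.0, 1.0):      # a pure node has impurity 0
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

print(entropy(0.5))   # 1.0, maximal impurity
print(entropy(0.9))   # ≈ 0.47
print(entropy(0.0))   # 0.0, pure node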

Then we take a feature f and a cut point c. Using these, we partition the whole dataset into two disjoint subsets:

• One with all the samples where feature f has values smaller than (or equal to) c.
• The other one with all the samples where feature f has values strictly greater than c.

The impurity of these two sets is measured again and combined into a weighted average, giving a single impurity measure I₂ for both sets. Then the Information Gain is calculated as G = I₁ − I₂.

Consider the following example of one split:

    In this small example, we start with five samples with three features each. We start by measuring the impurity, namely I₁. Then we choose Feature 3 and a cut point of 4 to split the node. Doing this, the original five samples are distributed in the two children nodes. We measure the impurities of both nodes and combine them into a single number I₂. The Information Gain is then computed.

This is done for all features and all possible cut points, and the one with the largest Information Gain G is chosen as the first split in the tree. We repeat this recursively for all leaves in the tree until some stopping criterion is met (the impurity of a node is smaller than a threshold, the tree has reached a maximum depth, there are too few samples in a node, …).
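
To make this concrete, here is a condensed, in-memory sketch of that exhaustive split search, run on a made-up five-sample toy dataset (the feature values are invented for illustration):

import numpy as np

def entropy(p):
    # Same Shannon entropy helper as above.
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def best_split(X, y):
    # Exhaustively search all features and cut points for the largest Information Gain.
    I1 = entropy(y.mean())                     # impurity of the parent node
    best = (None, None, -np.inf)               # (feature, cut point, gain)
    for f in range(X.shape[1]):
        for c in np.unique(X[:, f]):
            left, right = y[X[:, f] <= c], y[X[:, f] > c]
            if len(left) == 0 or len(right) == 0:
                continue
            # Weighted average impurity of the two children nodes.
            I2 = (len(left) * entropy(left.mean()) + len(right) * entropy(right.mean())) / len(y)
            if I1 - I2 > best[2]:
                best = (f, c, I1 - I2)
    return best

# A made-up toy dataset with five samples and three features.
X = np.array([[1, 7, 2], [3, 1, 6], [2, 5, 3], [8, 2, 9], [4, 1, 4]])
y = np.array([0, 1, 0, 1, 0])
f, c, gain = best_split(X, y)
print(f'Best split: feature {f} at cut point {c} (gain {gain:.2f})')
# Best split: feature 2 at cut point 4 (gain 0.97), which separates the labels perfectly here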

Removing the Memory Requirement

Now, we have to simulate this behavior using fewer samples. As we have seen for the classical Decision Tree:

Only the share p of samples with label 1 matters for computing the impurity!

Luckily, we can approximate this share p and hence the impurity using a subsample of the original training set, as discussed in the section about Hoeffding's Inequality. Samples with label 1 are the white balls, and samples with label 0 are the black balls, in the language of the ball example.

Considering a single split, we can approximate I₁ using only about 26,500 samples. However, for estimating I₂, we need 26,500 in each child node. If we are lucky and the samples get split evenly into the two nodes, 2 · 26,500 = 53,000 samples are enough to begin with. Otherwise, we might need more, but as long as we need less than a few million samples, we are better off than before. And even if we need a million: since we can stream them into the tree and keep track of a few counters, we will not run into memory issues.
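
To make the counter idea a bit more tangible, here is a deliberately simplified sketch – not the authors' algorithm from [1], just the kind of bookkeeping it relies on: for every candidate split we only store label counts for its left and right side and update them while the samples stream by.

import numpy as np
from collections import defaultdict

# counts[(feature, cut, side)] = [number of samples with label 0, number with label 1]
counts = defaultdict(lambda: np.zeros(2))

def update(x, label, candidate_cuts):
    # Update the per-split counters with one streamed sample; the sample can then be forgotten.
    for f, cuts in enumerate(candidate_cuts):
        for c in cuts:
            side = 'left' if x[f] <= c else 'right'
            counts[(f, c, side)][label] += 1

def estimated_share(f, c, side):
    # Estimated share of 1-labels on one side of a candidate split.
    n0, n1 = counts[(f, c, side)]
    return n1 / (n0 + n1) if n0 + n1 > 0 else 0.0

rng = np.random.default_rng(0)
candidate_cuts = [[0.5], [0.5]]             # one candidate cut per feature, made up for illustration
for _ in range(10000):                      # the 'stream': each sample is used once and discarded
    x = rng.random(2)
    label = int(x[1] > 0.5)                 # the label only depends on feature 1
    update(x, label, candidate_cuts)

print(estimated_share(1, 0.5, 'right'))     # ≈ 1.0: splitting on feature 1 at 0.5 looks very good
print(estimated_share(0, 0.5, 'right'))     # ≈ 0.5: splitting on feature 0 is uninformative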

This way we can safely train our model, even on every tweet made on Twitter. Happy large-scale training!


If you want to know how this is done in detail, please read the paper [1]. The authors describe their algorithm in pseudo-code with all the counters needed, and they also give a proof of how much a Hoeffding Tree built via streaming the data can deviate from the corresponding normal Decision Tree.

    Conclusion

We have seen that training Decision Trees on extremely large training sets is infeasible. Overcoming this problem required subsampling the right amount of training samples and building the tree with incomplete, yet fairly accurate information. This subsampling is justified by Hoeffding's Inequality, which also gives these special Decision Trees their name: Hoeffding Trees.

Moreover, we do not even have to store the subsamples; we just have to keep track of how many samples with label one (white balls) we have seen while scanning the training data, which is a simple and effective way to reduce the memory footprint even further.


We have only seen vanilla Hoeffding Trees for classification tasks in this article. There also exist algorithms for regression and even methods to counter so-called concept drift, i.e. when the training distribution changes over time.

Luckily, these algorithms are all implemented by the developers of scikit-multiflow for Python! Let us do a quick test.

Playing around

    First, do a quick

pip install scikit-multiflow

and then let us compare fitting a Hoeffding Tree on the whole training set (= enough RAM available) vs. passing in each sample one after another. More elaborate tests like this can be found in [1].

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from skmultiflow.trees import HoeffdingTree

res = []

# Create a dataset.
X, y = make_classification(10000, random_state=123)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=123)

# Define a tree for fitting the whole dataset and one for streaming.
ht_complete = HoeffdingTree()
ht_partial = HoeffdingTree()

# Fit the whole dataset at once.
ht_complete.fit(X_train, y_train)
ht_complete_score = ht_complete.score(X_test, y_test)
print(f'Score when fitting at once: {ht_complete_score}')

# Stream the samples one after another.
timer = False
j = 0
for i in range(len(X_train)):
    ht_partial.partial_fit(X_train[i].reshape(1, -1), np.array([y_train[i]]))
    res.append(ht_partial.score(X_test, y_test))
    print(f'Score when streaming after {i} samples: {res[-1]}')
    if res[-1] >= ht_complete_score - 0.01:
        print(f'(Almost) complete score reached! Continue for another {20 - j} samples.')
        timer = True
    if timer:
        j += 1
        if j == 20:
            break

# Plot the scores after each sample.
plt.figure(figsize=(12, 6))
plt.plot([0, i], [ht_complete_score, ht_complete_score], '--', label='Hoeffding Tree built at once')
plt.plot(res, label='Incrementally built Hoeffding Tree')
plt.xlabel('Number of Samples', fontsize=15)
plt.ylabel('Accuracy', fontsize=15)
plt.title('Fitting a Hoeffding Tree at once (enough memory available) vs fitting it via streaming', fontsize=20)
plt.legend()
plt.show()

The resulting graphic might look like this:

    We can see that building a Hoeffding Tree H directly yields an accuracy of about 91% (on a test set). If we build another Hoeffding Tree by feeding in each sample one after another, we can see that the performance approaches the performance of H. After about 50 samples, our streaming Hoeffding Tree has an accuracy of about 88% already. If we pass in more samples, 91% is reached at some point (for me it was after about 4000 samples out of the 8000 training samples).

    References

[1] P. Domingos and G. Hulten, Mining High-Speed Data Streams (2000), Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining

[2] W. Hoeffding, Probability inequalities for sums of bounded random variables (1963), Journal of the American Statistical Association, 58 (301)


I hope that you learned something new, interesting, and useful today. Thanks for reading!

As a last point, if you

1. want to support me in writing more about Machine Learning and
2. plan to get a Medium subscription anyway,

why not do it via this link? This would help me a lot! 😊

To be clear, the price for you does not change, but about half of the subscription fees go directly to me.

Thanks a lot if you consider supporting me!

If you have any questions, write me on LinkedIn!


