    Time Series Forecasting Made Simple (Part 1): Decomposition and Baseline Models

By Team_AIBS News | April 9, 2025


I used to avoid time series analysis. Every time I took an online course, I'd see a module titled "Time Series Analysis" with subtopics like Fourier transforms, autocorrelation functions and other intimidating terms. I don't know why, but I always found a reason to avoid it.

But here's what I've learned: any complex topic becomes manageable when we start from the basics and focus on understanding the intuition behind it. That's exactly what this blog series is about: making time series feel less like a maze and more like a conversation with your data over time.

We understand complex topics much more easily when they're explained through real-world examples, and that's exactly how I'll approach this series.

In each post, we'll work with a simple dataset and explore what's needed from a time series perspective. We'll build intuition around each concept, understand why it matters, and implement it step by step on the data.

Time series analysis is the process of understanding, modeling and forecasting data that is observed over time. It involves identifying patterns such as trend, seasonality and noise, and using past observations to make informed predictions about future values.

Let's start by considering a dataset named Daily Minimum Temperatures in Melbourne. This dataset contains daily records of the lowest temperature (in Celsius) observed in Melbourne, Australia, over a 10-year period from 1981 to 1990. Each entry consists of just two columns:

Date: The calendar day (from 1981-01-01 to 1990-12-31)
Temp: The minimum temperature recorded on that day

You've probably heard of models like ARIMA, SARIMA or Exponential Smoothing. But before we go there, it's a good idea to try out some simple baseline models first, to see how well a basic approach performs on our data.

While there are many types of baseline models used in time series forecasting, here we'll focus on the three main ones, which are simple, effective and widely applicable across industries (see the sketch after the list below):

Naive Forecast: Assumes the next value will be the same as the last observed one.
Seasonal Naive Forecast: Assumes the value will repeat from the same point last season (e.g., last week or last month).
Moving Average: Takes the average of the last n points.
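
To make these concrete, here's a minimal sketch of all three baselines on a toy daily series. The numbers, the 3-point window and the 7-day "season" are purely illustrative, not taken from the dataset we'll use below:

import pandas as pd

# Toy daily series, purely illustrative
y = pd.Series([112, 118, 132, 129, 121, 135, 148, 148],
              index=pd.date_range("2021-01-01", periods=8, freq="D"))

naive = y.shift(1)                                    # next value = last observed value
seasonal_naive = y.shift(7)                           # value from the same point last "season"
moving_average = y.rolling(window=3).mean().shift(1)  # average of the previous 3 points

print(pd.DataFrame({"actual": y, "naive": naive,
                    "seasonal_naive": seasonal_naive, "moving_average": moving_average}))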

You might be wondering: why use baseline models at all? Why not go straight to well-known forecasting methods like ARIMA or SARIMA?

Let's consider a shop owner who wants to forecast next month's sales. By applying a moving average baseline model, they can estimate next month's sales as the average of previous months. This simple approach might already deliver around 80% accuracy, good enough for planning and inventory decisions.

Now, if we switch to a more advanced model like ARIMA or SARIMA, we might improve accuracy to around 85%. But the key question is: is that extra 5% worth the additional time, effort and resources? In this case, the baseline model does the job.

In fact, in most everyday business scenarios, baseline models are sufficient. We typically turn to classical models like ARIMA or SARIMA in high-impact industries such as finance or energy, where even a small improvement in accuracy can have a large financial or operational impact. Even then, a baseline model is usually applied first, not only to provide quick insights but also to act as a benchmark that more complex models must outperform.

Okay, now that we're ready to implement some baseline models, there's one key thing we need to understand first:
Every time series is made up of three main components: trend, seasonality and residuals.

Time series decomposition separates data into trend, seasonality and residuals (noise), helping us uncover the true patterns beneath the surface. This understanding guides the choice of forecasting models and improves accuracy. It's also an essential first step before building both simple and advanced forecasting solutions.

Trend
This is the overall direction your data is moving in over time: going up, down or staying flat.
Example: A steady decrease in monthly cigarette sales.

Seasonality
These are patterns that repeat at regular intervals: daily, weekly, monthly or yearly.
Example: Cool drink sales peaking in summer.

Residuals (Noise)
This is the random "leftover" part of the data: the unpredictable ups and downs that can't be explained by trend or seasonality.
Example: A one-time car purchase showing up in your monthly expense pattern.

Now that we understand the key components of a time series, let's put that into practice using a real dataset: Daily Minimum Temperatures in Melbourne, Australia.

We'll use Python to decompose the time series into its trend, seasonality and residual components, so we can better understand its structure and choose an appropriate baseline model.

Code:

import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

# Load the dataset
df = pd.read_csv("minimum daily temperatures data.csv")

# Convert 'Date' to datetime and set as index
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
df.set_index('Date', inplace=True)

# Set a regular daily frequency and fill missing values using forward fill
df = df.asfreq('D')
df['Temp'] = df['Temp'].ffill()

# Decompose the daily series (365-day seasonality for yearly patterns)
decomposition = seasonal_decompose(df['Temp'], model='additive', period=365)

# Plot the decomposed components
decomposition.plot()
plt.suptitle('Decomposition of Daily Minimum Temperatures (Daily)', fontsize=14)
plt.tight_layout()
plt.show()
    

    Output:

Decomposition of daily temperatures showing trend, seasonal cycles and random fluctuations.

The decomposition plot clearly shows a strong seasonal pattern that repeats each year, along with a subtle trend that shifts over time. The residual component captures the random noise that isn't explained by trend or seasonality.
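
If you'd rather inspect these components numerically than visually, the decomposition result exposes each one as a pandas Series; here's a small sketch reusing the decomposition object from the code above:

# Each component is a Series aligned with the original date index.
# The trend is NaN near the edges because it's estimated with a centered moving average.
print(decomposition.trend.dropna().head())
print(decomposition.seasonal.head())
print(decomposition.resid.dropna().head())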

In the code above, you might have noticed that I used an additive model to decompose the time series. But what exactly does that mean, and why is it the right choice for this dataset?

Let's break it down.
In an additive model, we assume trend, seasonality and residuals (noise) combine linearly, like this:
Y = T + S + R

Where:
Y is the actual value at time t
T is the trend
S is the seasonal component
R is the residual (random noise)

This means we're treating the observed value as the sum of its parts: each component contributes independently to the final output.
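
We can make this tangible with a quick sanity check: reusing the decomposition object from the code above, the three components should add back up to the observed series wherever the trend is defined (a small sketch, not part of the original workflow):

# For an additive decomposition: observed = trend + seasonal + resid
reconstructed = decomposition.trend + decomposition.seasonal + decomposition.resid
max_error = (decomposition.observed - reconstructed).abs().max()
print(f"Max reconstruction error: {max_error:.10f}")  # ~0, up to floating-point noise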

I chose the additive model because when I looked at the pattern in the daily minimum temperatures, I noticed something important:

The line plot above shows the daily minimum temperatures from 1981 to 1990. We can clearly see a strong seasonal cycle that repeats each year: colder temperatures in winter, warmer in summer.

Importantly, the amplitude of these seasonal swings stays relatively consistent over the years. For example, the temperature difference between summer and winter doesn't appear to grow or shrink over time. This stability in seasonal variation is a key sign that the additive model is appropriate for decomposition, since the seasonal component appears to be independent of any trend.

We use an additive model when the trend is relatively stable and doesn't amplify or distort the seasonal pattern, and when the seasonality stays within a consistent range over time, even if there are minor fluctuations.

Now that we understand how the additive model works, let's explore the multiplicative model, which is typically used when the seasonal effect scales with the trend. Seeing the contrast will also make the additive model clearer.

Consider a household's electricity consumption. Suppose the household uses 20% more electricity in summer than in winter. That means the seasonal effect isn't a fixed amount; it's a proportion of their baseline usage.

Let's see how this looks with real numbers:

In 2021, the household used 300 kWh in winter and 360 kWh in summer (20% more than winter).

In 2022, their winter consumption increased to 330 kWh, and summer usage rose to 396 kWh (still 20% more than winter).

In both years, the seasonal difference grows with the trend, from +60 kWh in 2021 to +66 kWh in 2022, even though the percentage increase stays the same. This is exactly the kind of behavior that a multiplicative model captures well.

In mathematical terms:
Y = T × S × R
Where:
Y: Observed value
T: Trend component
S: Seasonal component
R: Residual (noise)
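
Here's the electricity example expressed as a tiny sketch, treating the 20% summer effect as a fixed multiplicative seasonal factor of 1.2 (the kWh figures are the ones from the example above):

# Multiplicative model: observed = trend (baseline usage) * seasonal factor
winter_2021, winter_2022 = 300, 330  # baseline winter usage in kWh
seasonal_factor = 1.2                # summer uses 20% more than winter

print(winter_2021 * seasonal_factor)  # 360.0 kWh (summer 2021, +60 over winter)
print(winter_2022 * seasonal_factor)  # 396.0 kWh (summer 2022, +66 over winter)
# The absolute seasonal gap grows with the trend while the factor stays fixed at 1.2.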

By looking at the decomposition plot, we can decide whether an additive or a multiplicative model fits our data better.

There are also other powerful decomposition tools available, which I'll cover in an upcoming blog post. Now that we have a clear understanding of additive and multiplicative models, let's shift our focus to applying a baseline model that suits this dataset.

Based on the decomposition plot, we can see a strong seasonal pattern in the data, which suggests that a Seasonal Naive model might be a good fit for this time series.

This model assumes that the value at a given time will be the same as it was in the same period of the previous season, making it a simple yet effective choice when seasonality is dominant and consistent. For example, if temperatures typically follow the same yearly cycle, then the forecast for July 1st, 1990, would simply be the temperature recorded on July 1st, 1989.

    Code:

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

# Load the dataset
df = pd.read_csv("minimum daily temperatures data.csv")

# Convert 'Date' column to datetime and set as index
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
df.set_index('Date', inplace=True)

# Ensure regular daily frequency and fill missing values
df = df.asfreq('D')
df['Temp'] = df['Temp'].ffill()

# Step 1: Create the Seasonal Naive forecast
seasonal_period = 365  # Assuming yearly seasonality for daily data
# Shift the temperature values by one year
df['Seasonal_Naive'] = df['Temp'].shift(seasonal_period)

# Step 2: Plot the actual vs forecasted values
# Plot the last 2 years (730 days) of data to compare
plt.figure(figsize=(12, 5))
plt.plot(df['Temp'][-730:], label='Actual')
plt.plot(df['Seasonal_Naive'][-730:], label='Seasonal Naive Forecast', linestyle='--')
plt.title('Seasonal Naive Forecast vs Actual Temperatures')
plt.xlabel('Date')
plt.ylabel('Temperature (°C)')
plt.legend()
plt.tight_layout()
plt.show()

# Step 3: Evaluate using MAPE (Mean Absolute Percentage Error)
# Use the last 12 months for testing
test = df[['Temp', 'Seasonal_Naive']].iloc[-365:].copy()
test.dropna(inplace=True)

# MAPE calculation
mape = np.mean(np.abs((test['Temp'] - test['Seasonal_Naive']) / test['Temp'])) * 100
print(f"MAPE (Seasonal Naive Forecast): {mape:.2f}%")

    Output:

Seasonal Naive Forecast vs. Actual Temperatures (1989–1990)

To keep the visualization clear and focused, we've plotted the last two years of the dataset (1989–1990) instead of all 10 years.

This plot compares the actual daily minimum temperatures in Melbourne with the values predicted by the Seasonal Naive model, which simply assumes that each day's temperature will be the same as it was on the same day one year earlier.

As seen in the plot, the Seasonal Naive forecast captures the broad shape of the seasonal cycles quite well: it mirrors the rise and fall of temperatures throughout the year. However, it doesn't capture day-to-day variations, nor does it respond to slight shifts in seasonal timing. This is expected, since the model is designed to repeat the previous year's pattern exactly, without adjusting for trend or noise.

To evaluate how well this model performs, we calculate the Mean Absolute Percentage Error (MAPE) over the final 12 months of the dataset (i.e., 1990). We only use this period because the Seasonal Naive forecast needs a full year of historical data before it can begin making predictions.

Mean Absolute Percentage Error (MAPE) is a commonly used metric for evaluating the accuracy of forecasting models. It measures the average absolute difference between actual and predicted values, expressed as a percentage of the actual values.
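
In symbols, over n forecasted points, with Actual_t and Forecast_t denoting the actual and predicted values at time t:

MAPE = (100 / n) × Σ |Actual_t - Forecast_t| / |Actual_t|

This is exactly what the np.mean(np.abs(...)) line in the code above computes.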

In time series forecasting, we typically evaluate model performance on the most recent or target time period, not on the middle years. This reflects how forecasts are used in the real world: we build models on historical data to predict what comes next.

That's why we calculate MAPE only on the final 12 months of the dataset: this simulates forecasting the future and gives us a realistic measure of how well the model would perform in practice.

We get a MAPE of 28.23%, which gives us a baseline level of forecasting error. Any model we build next, whether customized or more advanced, should aim to outperform this benchmark.

A MAPE of 28.23% means that, on average, the model's predictions were 28.23% off from the actual daily temperature values over the last year.

In other words, if the true temperature on a given day was 10°C, the Seasonal Naive forecast might have been around 7.2°C or 12.8°C, reflecting a roughly 28% deviation.

I'll dive deeper into evaluation metrics in a future post.

In this post, we laid the foundation for time series forecasting by understanding how real-world data can be broken down into trend, seasonality and residuals through decomposition. We explored the difference between additive and multiplicative models, implemented the Seasonal Naive baseline forecast and evaluated its performance using MAPE.

While the Seasonal Naive model is simple and intuitive, it comes with limitations, especially for this dataset. It assumes that the temperature on any given day is identical to the same day last year. But as the plot and the MAPE of 28.23% showed, this assumption doesn't hold perfectly. The data displays slight shifts in seasonal patterns and long-term variations that the model fails to capture.

In the next part of this series, we'll go further. We'll explore how to customize a baseline model, compare it to the Seasonal Naive approach and evaluate which one performs better using error metrics like MAPE, MAE and RMSE.

We'll also begin building the foundation needed to understand more advanced models like ARIMA, along with key concepts such as:

    • Stationarity
    • Autocorrelation and Partial Autocorrelation 
    • Differencing
• Lag-based modeling (AR and MA terms)

Part 2 will dive into these topics in more detail, starting with custom baselines and ending with the foundations of ARIMA.

Thanks for reading. I hope you found this post helpful and insightful.


