    ML Regression Model Built with PyTorch to Predict Concrete Compressive Strength | by Antranig Khecho | Jun, 2025



Finally! Let’s build and train the regression model…

Let’s begin by installing the required Python libraries to build, train, and evaluate our regression model.

pip install matplotlib
pip install numpy
pip install pandas
pip install scikit-learn
pip install torch
pip install tqdm

Once the necessary packages are installed, let’s load the dataset, which I downloaded as a CSV file from the link above, using pandas. Then, separate the input variables from the output variable, and split the data into training and testing subsets using scikit-learn.

    import pandas as pd
    from sklearn.model_selection import train_test_split

# Load 'Data.csv' and split it into training and testing subsets.
df = pd.read_csv(r"D:\Regression\content\Data.csv")
X = df.drop(columns='Concrete compressive strength(MPa)')
y = df['Concrete compressive strength(MPa)']
X_train_original, X_test_original, y_train, y_test = train_test_split(X, y, train_size=0.8, random_state=50)
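Before moving on, it can be worth a quick sanity check that the data loaded and split as expected. The snippet below is a small optional check; the expected shape of (1030, 9) assumes the standard UCI concrete dataset with eight input columns plus the strength column.

# Optional sanity check (assumes the standard UCI concrete dataset).
print(df.shape)                                  # expected: (1030, 9)
print(df.isna().sum().sum(), "missing values")   # expected: 0
print(X_train_original.shape, X_test_original.shape)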

Next, let’s apply standard scaling to the input variables, as they vary considerably in range: some features have narrow value ranges, while others are much wider. This inconsistency can negatively impact the performance of many machine learning models. To address this, we’ll use scikit-learn’s StandardScaler to normalize the features and ensure they contribute equally during training.

    from sklearn.preprocessing import StandardScaler

# Standardize the training and testing input subsets.
scaler = StandardScaler()
scaler.fit(X_train_original)
X_train = scaler.transform(X_train_original)
X_test = scaler.transform(X_test_original)

The final step before building and training the regression model is to convert the training and testing subsets into 2D PyTorch tensors, the required data structure for feeding inputs into PyTorch models.

    import torch

# Convert the training and testing subsets to 2D PyTorch tensors.
    X_train_tensor = torch.tensor(X_train, dtype=torch.float32)
    y_train_tensor = torch.tensor(y_train.values, dtype=torch.float32).reshape(-1, 1)
    X_test_tensor = torch.tensor(X_test, dtype=torch.float32)
    y_test_tensor = torch.tensor(y_test.values, dtype=torch.float32).reshape(-1, 1)

Now, let’s build our regression model using the pyramid structure of a neural network: an architectural pattern in which the number of neurons in each layer gradually decreases from the input layer to the output layer, forming a pyramid-like shape.

In the model defined below, the first layer contains 150 neurons, while the final output layer consists of a single neuron that predicts the concrete compressive strength.

We’ll use a learning rate of 0.0001, which controls how quickly the model updates its weights during training, helping the loss function gradually converge to a minimum.

    import torch.nn as nn
    import torch.optim as optim

# Define the regression model.
layer_01_units = 150
layer_02_units = 100
layer_03_units = 40
layer_04_units = 10
regression_model = nn.Sequential(
    nn.Linear(X_train_tensor.shape[1], layer_01_units),
    nn.ReLU(),
    nn.Linear(layer_01_units, layer_02_units),
    nn.ReLU(),
    nn.Linear(layer_02_units, layer_03_units),
    nn.ReLU(),
    nn.Linear(layer_03_units, layer_04_units),
    nn.ReLU(),
    nn.Linear(layer_04_units, y_train_tensor.shape[1])
)
loss_function = nn.MSELoss()  # Mean Squared Error
optimizer = optim.Adam(regression_model.parameters(), lr=0.0001)
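If you want to confirm the pyramid shape before training, an optional inspection like the following prints the layer stack and counts the trainable parameters:

# Optional: print the architecture and count trainable parameters.
print(regression_model)
total_params = sum(p.numel() for p in regression_model.parameters() if p.requires_grad)
print(f"Trainable parameters: {total_params}")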

Now, let’s run the training loop and display its progress using the tqdm package for a clean, real-time progress bar.

During each iteration (epoch), the model will:

1. Process the training data and compute predictions.
2. Calculate the loss.
3. Update the weights using the optimizer based on the loss value.

At the end of each epoch, we’ll evaluate the model’s performance and save the model state if it achieves the best results so far.

import copy
import numpy as np
from tqdm import tqdm

# Train the regression model and keep the best state.
number_of_epochs = 10000
best_mse = np.inf  # Initialize to infinity
best_weights = None
history = []

for epoch in tqdm(range(number_of_epochs)):
    regression_model.train()
    # Forward pass.
    y_pred = regression_model(X_train_tensor)
    loss = loss_function(y_pred, y_train_tensor)
    # Backward pass.
    optimizer.zero_grad()
    loss.backward()
    # Update weights.
    optimizer.step()
    # Evaluate accuracy at the end of each epoch.
    regression_model.eval()
    with torch.no_grad():
        y_pred = regression_model(X_test_tensor)
        mse = float(loss_function(y_pred, y_test_tensor))
    history.append(mse)
    if mse < best_mse:
        best_mse = mse
        best_weights = copy.deepcopy(regression_model.state_dict())
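Since best_weights holds the best state found during training, you may also want to persist it to disk so the model can be reused without retraining. A minimal sketch ('best_regression_model.pt' is just a placeholder filename):

# Optional: save the best weights for later reuse.
# 'best_regression_model.pt' is a placeholder filename.
torch.save(best_weights, "best_regression_model.pt")
# Restore later with:
# regression_model.load_state_dict(torch.load("best_regression_model.pt"))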

Finally, let’s visualize how the training process drives the loss function to gradually converge toward a minimum. After that, we’ll pass random samples from the testing subset to the model, let it predict the concrete compressive strength, and then compare the predictions with the actual values to assess its performance.

import matplotlib.pyplot as plt
import numpy as np
import torch

# Print the best accuracy and plot the loss history.
print("MSE: %.2f" % best_mse)
print("RMSE: %.2f" % np.sqrt(best_mse))
plt.plot(history)
plt.show()

# Evaluate the model on random testing samples.
regression_model.load_state_dict(best_weights)
regression_model.eval()
with torch.no_grad():
    X_test_random_samples = X_test_original.sample(n=10)
    for i in range(len(X_test_random_samples)):
        X_test_sample = X_test_random_samples[i: i+1]
        X_test_sample_scaled = scaler.transform(X_test_sample)
        X_test_sample_tensor = torch.tensor(X_test_sample_scaled, dtype=torch.float32)
        y_pred = regression_model(X_test_sample_tensor)
        p = f"{X_test_sample.iloc[0].name:<6} {X_test_sample.iloc[0].iloc[0]:<7} {X_test_sample.iloc[0].iloc[1]:<7} {X_test_sample.iloc[0].iloc[2]:<7} {X_test_sample.iloc[0].iloc[3]:<7} {X_test_sample.iloc[0].iloc[4]:<7} {X_test_sample.iloc[0].iloc[5]:<7} {X_test_sample.iloc[0].iloc[6]:<7} {X_test_sample.iloc[0].iloc[7]:<7}"
        print(f"{p} ====> Predicted: {y_pred[0, 0].item():.2f} (Expected: {y_test[X_test_sample.iloc[0].name]})")

(MSE: Mean Square Error) Loss Convergence Toward the Minimum During 10,000 Training Epochs
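As an optional final check beyond MSE and RMSE, you could also score the best model on the whole testing subset with R², which reports the fraction of variance explained. A minimal sketch using scikit-learn’s r2_score:

from sklearn.metrics import r2_score

# Optional: evaluate the best model on the full testing subset with R².
regression_model.eval()
with torch.no_grad():
    y_test_pred = regression_model(X_test_tensor).numpy()
print("R^2: %.3f" % r2_score(y_test_tensor.numpy(), y_test_pred))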



