    EUR/USD Price Prediction using CNN-LSTM | by Egemen Candir | Dec, 2024

December 30, 2024 · 6 min read


This isn’t your everyday “predicting future prices” post. It’s simply a demonstration of how to use publicly available LLM tools to come up with something cool, and then make it sound even cooler using Google NotebookLM 🙂

The Brief

I’m not an expert Python coder. I understand how deep learning methods like CNNs and LSTMs work, but I’m not good enough (yet) to properly structure a whole model from scratch, or even tune the hyperparameters in a sensible way. What I do have is deep trading and market experience, and I have the logic. Everything in this article is an AI generation, obviously including the fancy header image.
If you want to listen to a VERY poshed-up podcast about my amazing code, which I came up with without typing a single indent, here are the NotebookLM podcast and the briefing doc:
    Briefing Doc

AI-Assisted GitHub Repo

Since I’m not an advanced coder, I used to just write single-file scripts and post them on my GitHub page. No more! Here’s the ChatGPT-suggested repo page. I didn’t add a single word anywhere here:
    https://github.com/egemen-candir/EURUSD-Price-Prediction-Using-Hybrid-LSTM-CNN

Hey AI, find me free data and go fetch it!

AI today can create entire apps from a couple of sentences, but those apps won’t be all that good. I decided to direct all three AIs used here (Claude, MS Copilot, ChatGPT) to where I wanted to go. Here’s what I did, step by step:
– Asked about free data sources where I could find intraday EUR/USD data. Some of the suggestions didn’t have it at all, some had very limited timeframe availability. In the end, I settled on https://twelvedata.com/ but then hit a 5,000-data-point API pull limit and an 8-pulls-per-minute limit. Annoyed after hitting the limits a few times, I told the AI to code me a Python script to download 15-minute data in 45-day batches, with 5-minute breaks (pauses) in between. Easy peasy!

import pandas as pd
import requests
import time

# API key
api_key = ''

# Initialize variables
symbol = 'EUR/USD'
interval = '15min'
start_date = '2014-01-01'
end_date = '2024-01-01'
batch_size = 45  # Number of days per batch

# Function to fetch data for a specific date range
def fetch_data(start, end):
    url = f'https://api.twelvedata.com/time_series?apikey={api_key}&symbol={symbol}&interval={interval}&start_date={start}&end_date={end}&fmt=json'
    response = requests.get(url)
    return response.json()

# Initialize DataFrame to hold all data
all_data = pd.DataFrame()

# Loop through 2-year batches
for year in range(2014, 2024, 2):
    current_start = pd.to_datetime(f'{year}-01-01')
    current_end = current_start + pd.DateOffset(years=2)

    if current_end > pd.to_datetime(end_date):
        current_end = pd.to_datetime(end_date)

    # Fetch data in smaller batches to avoid API limits
    while current_start < current_end:
        batch_end = current_start + pd.DateOffset(days=batch_size)
        if batch_end > current_end:
            batch_end = current_end

        data = fetch_data(current_start.strftime('%Y-%m-%d'), batch_end.strftime('%Y-%m-%d'))
        if 'values' in data:
            df = pd.DataFrame(data['values'])
            all_data = pd.concat([all_data, df], ignore_index=True)
            print(f"Fetched data from {current_start.strftime('%Y-%m-%d')} to {batch_end.strftime('%Y-%m-%d')}")
        else:
            print(f"Failed to fetch data: {data}")
            break

        current_start = batch_end
        # Pause to avoid API rate limits
        time.sleep(300)  # 5-minute pause

# Save data to CSV file
all_data.to_csv('eurusd_15min_data.csv', index=False)
print("Data saved to eurusd_15min_data.csv")

Hey AI, I’ll need some features and some pre-processing!

Sure boss! Well, the momentum one is my thing. No wonder the NotebookLM podcasters loved it so much! 🙂

import pandas as pd

# Technical indicator calculation functions
def calculate_rsi(prices, periods=14):
    delta = prices.diff()
    gain = (delta.where(delta > 0, 0)).rolling(window=periods).mean()
    loss = (-delta.where(delta < 0, 0)).rolling(window=periods).mean()
    rs = gain / loss
    return 100 - (100 / (1 + rs))

def calculate_macd(prices, fast=12, slow=26):
    exp1 = prices.ewm(span=fast, adjust=False).mean()
    exp2 = prices.ewm(span=slow, adjust=False).mean()
    return exp1 - exp2

def calculate_atr(data, period=14):
    high = data['high']
    low = data['low']
    close = data['close']

    tr1 = high - low
    tr2 = abs(high - close.shift())
    tr3 = abs(low - close.shift())

    tr = pd.concat([tr1, tr2, tr3], axis=1).max(axis=1)
    return tr.rolling(window=period).mean()

def calculate_momentum(data):
    """
    Calculate momentum as the cumulative sum of the first derivative
    of the 5-period moving average over the past 5 periods
    """
    # Calculate first derivative of 5-period MA
    derivative = data['SMA_5'].diff()

    # Calculate rolling sum of derivatives over the past 5 periods
    cumulative_momentum = derivative.rolling(window=5).sum()

    return cumulative_momentum
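
# Added note (not in the original script): a 5-period rolling sum of one-bar
# differences telescopes, so this momentum equals data['SMA_5'].diff(5),
# i.e. the 5-period SMA now minus the 5-period SMA five bars ago.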

# Load and preprocess data
all_data = pd.read_csv('eurusd_15min_data.csv')

# Print columns to verify available features
print("Available columns:", all_data.columns.tolist())

# Convert datetime to pandas datetime and set as index
all_data['datetime'] = pd.to_datetime(all_data['datetime'])
all_data.set_index('datetime', inplace=True)
all_data = all_data.sort_index()

# Display the number of data points
num_data_points = len(all_data)
print(f"Number of data points: {num_data_points}")

# Calculate technical indicators
all_data['SMA_5'] = all_data['close'].rolling(window=5).mean()
all_data['SMA_20'] = all_data['close'].rolling(window=20).mean()
all_data['RSI'] = calculate_rsi(all_data['close'], periods=14)
all_data['MACD'] = calculate_macd(all_data['close'])
all_data['ATR'] = calculate_atr(all_data[['high', 'low', 'close']], period=14)
all_data['Momentum'] = calculate_momentum(all_data)

# Drop NaN values
all_data.dropna(inplace=True)

OK AI, I’m thinking too much. Just throw in some deep learning things!

Well, getting AI to do this part wasn’t that easy obviously, but you get the joke I guess 🙂

import numpy as np
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Dense, Dropout, BatchNormalization
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

# Define features
features = ['close', 'high', 'low', 'SMA_5', 'SMA_20', 'RSI', 'MACD', 'ATR', 'Momentum']
sequence_length = 10

# Scale the features
scaler = MinMaxScaler()
scaled_data = scaler.fit_transform(all_data[features])

# Create sequences
X = []
y = []
for i in range(sequence_length, len(scaled_data)):
    X.append(scaled_data[i-sequence_length:i])
    y.append(scaled_data[i, 0])  # 0 index corresponds to 'close' price
X, y = np.array(X), np.array(y)

# Split the data (no shuffling, to keep the time order intact)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

# Build the hybrid CNN-LSTM model
model = Sequential([
    # CNN layers
    Conv1D(filters=64, kernel_size=3, activation='relu',
           input_shape=(sequence_length, len(features))),
    BatchNormalization(),
    MaxPooling1D(pool_size=2),
    Dropout(0.2),

    Conv1D(filters=128, kernel_size=3, activation='relu'),
    BatchNormalization(),
    MaxPooling1D(pool_size=2),
    Dropout(0.2),

    # LSTM layers
    LSTM(100, return_sequences=True),
    BatchNormalization(),
    Dropout(0.2),

    LSTM(50),
    BatchNormalization(),
    Dropout(0.2),

    # Dense layers
    Dense(50, activation='relu'),
    BatchNormalization(),
    Dropout(0.2),
    Dense(1)
])

# Compile the model
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss='huber',
              metrics=['mae'])

# Define callbacks
callbacks = [
    EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True),
    ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5, min_lr=0.0001)
]

# Train the model
history = model.fit(
    X_train, y_train,
    validation_data=(X_test, y_test),
    epochs=100,
    batch_size=32,
    callbacks=callbacks,
    verbose=1
)

# Make predictions
predicted_scaled = model.predict(X_test)

# Prepare for inverse transform
pred_full = np.zeros((len(predicted_scaled), len(features)))
pred_full[:, 0] = predicted_scaled.flatten()  # Put predictions in first column (close price)
y_test_full = np.zeros((len(y_test), len(features)))
y_test_full[:, 0] = y_test  # Put actual values in first column

# Inverse transform
predicted_prices = scaler.inverse_transform(pred_full)[:, 0]  # Get only the close price
actual_prices = scaler.inverse_transform(y_test_full)[:, 0]  # Get only the close price

And Evaluate!

Results are amazing! Now I can get rich without spending a dime!!

import matplotlib.pyplot as plt
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Calculate and display metrics
mae = mean_absolute_error(actual_prices, predicted_prices)
rmse = np.sqrt(mean_squared_error(actual_prices, predicted_prices))
print('\nModel Performance Metrics:')
print(f'MAE: {mae:.4f}')
print(f'RMSE: {rmse:.4f}')
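
# Added sanity check (mine, not in the original post): compare the model
# against a naive persistence baseline that predicts "next close = current
# close". X_test[:, -1, 0] is the scaled close of the last bar in each input
# window; if the model's MAE is close to this baseline, the pretty chart
# mostly reflects the model echoing the last observed price.
naive_full = np.zeros((len(X_test), len(features)))
naive_full[:, 0] = X_test[:, -1, 0]
naive_prices = scaler.inverse_transform(naive_full)[:, 0]
print(f'Naive persistence MAE: {mean_absolute_error(actual_prices, naive_prices):.4f}')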

# Plot actual vs predicted prices
plt.figure(figsize=(12, 6))
plt.plot(actual_prices, label='Actual Prices')
plt.plot(predicted_prices, label='Predicted Prices')
plt.title('Actual vs Predicted Prices')
plt.xlabel('Time Steps')
plt.ylabel('Price')
plt.legend()
plt.show()

# Plot training history
plt.figure(figsize=(12, 6))
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Model Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()

# Plot delta between actual and predicted prices
delta = actual_prices - predicted_prices
plt.figure(figsize=(12, 6))
plt.plot(delta, label='Price Difference')
plt.title('Delta Between Actual and Predicted Prices')
plt.xlabel('Time Steps')
plt.ylabel('Price Difference')
plt.legend()
plt.show()

# Additional plot for momentum visualization
plt.figure(figsize=(12, 6))
plt.plot(all_data['Momentum'].iloc[-len(actual_prices):].values, label='Momentum')
plt.title('Cumulative Momentum')
plt.xlabel('Time Steps')
plt.ylabel('Momentum Value')
plt.legend()
plt.show()

    Conclusion and What It Really Took

So I’ve spent two not-so-hard-working days on this project. The results are promising. Maybe a bit too promising to be true 🙂 The lesson is that AI is capable of coming up with a pretty good script even under the direction of a not-so-much-of-a-developer guy like me. I’m pretty sure it can be used to great effect to create scripts for things like basic portfolio optimization and simple signal generation tasks (see the sketch below). For anything else, you still need a human to constantly check what it does.
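
To make that last point concrete, here is a minimal signal-generation sketch of my own on top of the script above. It is purely illustrative, not part of the original repo: it assumes predicted_prices, actual_prices, X_test, scaler and features from the earlier snippets, and it ignores spread and transaction costs entirely.

import numpy as np

# Recover the last observed close of each test window on the price scale
last_close_full = np.zeros((len(X_test), len(features)))
last_close_full[:, 0] = X_test[:, -1, 0]
last_close = scaler.inverse_transform(last_close_full)[:, 0]

# Long/flat rule: go long for one bar when the predicted close is above
# the last observed close, otherwise stay flat
signal = np.where(predicted_prices > last_close, 1, 0)

# Per-bar return from the decision bar's close to the predicted bar's close
trade_return = signal * (actual_prices - last_close) / last_close
print(f'Cumulative gross return: {np.prod(1 + trade_return) - 1:.4%}')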



    Source link
