Machine Learning

Movie Recommendation & Rating Prediction using KNN | by Akanksha Gupta | Feb, 2025

By Team_AIBS News · February 26, 2025 · 15 Mins Read

The rapid growth of data collection has led to a new era of information. This data is used to build efficient systems, and this is where recommendation systems come into play. Recommendation systems are a type of information filtering system used to improve the quality of search results and suggest products that are highly relevant to the searched item.

These systems are used to predict the rating or preference a user would give to a product or item. Recommendation systems are used by almost all major companies. YouTube uses one to recommend which video should be played next. Amazon uses one to recommend products a user might purchase based on their previous purchase history. Instagram suggests accounts you might follow based on your following list.

Companies like Netflix and Spotify rely heavily on such systems for their business growth and success.

There are different types of filtering techniques in use. Some of them are as follows:

Demographic Filtering:

It is the simplest type of filtering method: it suggests items that have already been liked by most other users. It recommends products based on their popularity, targeting users with the same demographic features. We often see such recommendations on JioHotstar, like "Top 10 Movies in India".

Demographic Filtering

Collaborative Filtering:

Collaborative filtering recommends items based on the preferences and behaviors of users with similar interests. Essentially, it identifies users with tastes similar to yours and suggests products or movies they have interacted with. For example, if people with preferences similar to yours have watched a particular movie, the system may recommend it to you as well.

    Collaborative Filtering
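To make the idea concrete, here is a tiny user-based collaborative filtering sketch. The users, movies, and ratings below are made up for illustration: we find the user most similar to user 0 and recommend a movie that neighbor rated highly but user 0 has not seen.

```python
import numpy as np

# Hypothetical user-item rating matrix (rows: users, cols: movies; 0 = unrated).
ratings = np.array([
    [5, 4, 0, 1],   # user 0
    [4, 5, 5, 1],   # user 1 (tastes similar to user 0)
    [1, 0, 2, 5],   # user 2 (different tastes)
], dtype=float)

def cosine(u, v):
    # Cosine similarity between two rating vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Find which of users 1 and 2 is most similar to user 0.
sims = [cosine(ratings[0], ratings[u]) for u in (1, 2)]
nearest = (1, 2)[int(np.argmax(sims))]

# Recommend the movie the nearest neighbor rated highest
# among those user 0 has not rated.
unrated = np.where(ratings[0] == 0)[0]
recommend = unrated[np.argmax(ratings[nearest][unrated])]
print(nearest, recommend)  # user 1 is most similar; movie 2 is recommended
```

Real systems do this over millions of users with sparse matrices, but the core "similar users, shared recommendations" logic is the same.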

Content-Based Filtering:

Content-based recommenders analyze user attributes such as age, past preferences, and frequently watched or liked content. Based on these attributes, the system suggests products or content with similar characteristics. For instance, if you enjoy watching the movie Sholay, the system might recommend similar movies like Tirangaa and Krantiveer due to their similar themes and genres.

Content-Based Filtering

Context-Based Filtering:

Context-based filtering is more advanced, since it considers not only user preferences but also the context in which users operate. Factors like time of day, device used, and location influence the recommendations, making them more personalized and context-specific. For example, a food delivery app might suggest breakfast options in the morning and dinner recommendations in the evening.

Context-Based Filtering

I have built a recommendation system using the K-Nearest Neighbors (KNN) algorithm. Before diving into the main explanation, let's first discuss the KNN algorithm.

Now, imagine you have a dataset. You plot every observation from the dataset into a space. Just visualize it. Observations that are similar to each other will be closer together, meaning the distance between them will be smaller.

This is the core idea behind KNN. Here, K refers to the number of neighbors we consider before classifying whether a data point is similar to another.
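A minimal KNN sketch makes this concrete. The 2-D points and labels below are made-up toy data: we compute the distance from a query point to every observation, take the K closest, and let them vote.

```python
import numpy as np

# Toy 2-D observations and their class labels (made-up data for illustration).
points = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1], [5.0, 5.0], [5.2, 4.8]])
labels = np.array([0, 0, 0, 1, 1])

def knn_predict(query, K=3):
    # Euclidean distance from the query to every observation.
    dists = np.linalg.norm(points - query, axis=1)
    # Indices of the K nearest neighbors.
    nearest = np.argsort(dists)[:K]
    # Majority vote among the neighbors' labels.
    votes = np.bincount(labels[nearest])
    return int(np.argmax(votes))

print(knn_predict(np.array([1.1, 1.0])))  # close to the first cluster, prints 0
```

In the recommendation system below, the same "find the K nearest items" step is used, except the distance is computed from movie features rather than raw coordinates, and the neighbors' ratings are averaged instead of voted on.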

In this article, we will describe how to build a baseline movie recommendation system using data from Kaggle's "TMDB 5000 Movie Dataset." This dataset comes from a community-built movie and TV database that contains extensive information about movies and TV shows.

I have used a small portion of this dataset, which contains information about 5,000 movies. The data is split into two CSV files:

movies.csv

• Budget: The budget with which the movie was made.
• Genres: The genre(s) of the movie (e.g., Action, Comedy, Thriller, etc.).
• Homepage: The official website of the movie.
• Id: A unique identifier assigned to the movie.
• Keywords: Words or tags related to the movie.
• Original_language: The language in which the movie was made.
• Original_title: The original title of the movie.
• Overview: A brief description of the movie's plot.
• Popularity: A numeric value indicating the movie's popularity.
• Production_companies: The production houses involved in making the movie.
• Production_countries: The country where the movie was produced.
• Release_date: The movie's release date.
• Revenue: The worldwide revenue generated by the movie.
• Runtime: The total duration of the movie in minutes.
• Status: Indicates whether the movie is "Released" or "Rumored."
• Tagline: The movie's tagline.
• Title: The movie's title.
• Vote_average: The average rating of the movie.
• Vote_count: The number of votes received for the movie.

credits.csv

• Movie_id: A unique identifier assigned to the movie.
• Title: The movie's title.
• Cast: The names of the lead and supporting actors.
• Crew: The names of key crew members, such as the director, editor, and producer.

Step 1: Importing the Libraries

We begin by importing the required libraries. We use pandas and numpy to perform operations on the data and matplotlib to display visual statistics about the movies. Then we import the CSV files using pd.read_csv().

import pandas as pd
import numpy as np
import json
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.color_palette("deep")
sns.set_style("whitegrid")
import warnings
warnings.filterwarnings("ignore")
import operator

movies = pd.read_csv("/kaggle/input/tmdb-movie-metadata/tmdb_5000_movies.csv")
credits = pd.read_csv("/kaggle/input/tmdb-movie-metadata/tmdb_5000_credits.csv")

Step 2: Data Exploration and Cleaning

In this step, we view the first few records of the data to gain a better understanding of it.

movies.head()
credits.head()

Then we examine other aspects of the data using the .describe() and .info() methods of pandas. The .info() method lists all the columns of the data, how many non-null values are present, and the type of data stored (int64, object, etc.).

.describe() provides the count, mean, std, and five-number summary of the numeric data (min, 25%, 50%, 75%, max).

movies.info()
movies.describe()
credits.info()

We can observe that a few columns like genres, keywords, production_companies, production_countries, and languages store their data in JSON format. In the credits.csv dataset, cast and crew are in JSON format. For faster and more efficient processing, we will first convert this JSON-object data into lists. This will make the data easier to read.

Usually this conversion is quite expensive in terms of computational resources and time. Fortunately, the structure isn't very complicated. One common attribute of these fields is that each contains a name key, whose values are what we are primarily interested in.

To perform this conversion, we first unpack the JSON data into list format using json.loads(), then iterate over the list to retrieve the values of the name key and store them in a new list. Finally, we replace the JSON object with the new list.

# Method to convert a JSON column into a string list
def json_to_string(column):
    movies[column] = movies[column].apply(json.loads)
    for index, i in zip(movies.index, movies[column]):
        li = []
        for x in i:
            li.append(x['name'])
        movies.loc[index, column] = str(li)

# Changing the genres column from JSON to string.
json_to_string("genres")

# Changing the cast column from JSON to string.
credits['cast'] = credits['cast'].apply(json.loads)
for index, i in zip(credits.index, credits['cast']):
    li = []
    for x in i:
        li.append(x['name'])
    credits.loc[index, 'cast'] = str(li)

Now for the crew column we use a slightly different technique. The entire crew is given in the dataset, so instead of focusing on everyone, we will just retrieve the director and replace the crew column with it.

# Extracting the director from the crew.
credits['crew'] = credits['crew'].apply(json.loads)

def director(x):
    for i in x:
        if i['job'] == 'Director':
            return i['name']

credits['crew'] = credits['crew'].apply(director)
credits.rename({"crew": "director"}, axis=1, inplace=True)

Then, we will check whether all the JSON columns have been converted to lists by inspecting a sample record with .iloc[35].

print(f"Movie:\n{movies.iloc[35]}\n")
print(f"Credits:\n{credits.iloc[35]}")

Step 3: Data Merging & Filtering

In this step we merge the two datasets, movies and credits, on the column "id" from movies.csv and "movie_id" from credits.csv. The merged result is stored in a data object.

data = movies.merge(credits, left_on='id', right_on='movie_id', how='inner')

Following this, we filter out the unnecessary columns and keep the ones we need for analysis.

cols = ['genres', 'id', 'keywords', 'original_title', 'popularity', 'revenue', 'runtime', 'director', 'vote_count', 'vote_average', 'production_companies', 'cast']
data = data[cols]
data.head(2)

Step 4: Working with the Genres Column

We will clean the genres column to obtain the genre list.

data['genres'] = data['genres'].str.strip('[]').str.replace(' ', '').str.replace("'", '').str.split(',')

Then we will generate a dictionary of the unique genres and their counts.

# generating the dict of unique genres and their counts.
genre = {}
for i in data['genres']:
    for gen in i:
        genre[gen] = genre.get(gen, 0) + 1
unique_genres = list(genre.keys())
unique_genres = unique_genres[:len(unique_genres) - 1]
genre = {k: v for k, v in sorted(genre.items(), key=lambda item: item[1], reverse=True)[:12]}

Then we plot a bar chart showing the top 12 genres that appear in the data, to understand movie popularity in terms of genre.

keys = list(genre.keys())[::-1]
vals = list(genre.values())[::-1]
fig, ax = plt.subplots(figsize=(8, 5))
ax.barh(keys, vals)
for i, v in enumerate(vals):
    ax.text(v - 150, i - 0.15, str(v), color="white", fontweight='bold')
plt.tick_params(
    axis="x", which="both", bottom=False, top=False, labelbottom=False
)
plt.title("Top Genres")
plt.tight_layout()

One-Hot Encoding for Multiple Labels:

unique_genres contains all the unique genres present in the data. But how do we know exactly which genres a movie belongs to? This is important for classifying movies by genre.

Let's create a new column genres_bin that holds binary values indicating which genres a movie belongs to. We can do this by building a binaryList that will later be useful for grouping similar movies together.

This method takes the genre list of a movie and, for each possible genre, appends 1 to the list if the genre is present, else 0. Let's assume there are only 6 possible genres. So if the movie's genres are action and thriller, the generated list will be [1, 1, 0, 0, 0, 0].

If the movie is a comedy, the generated list will be [0, 0, 1, 0, 0, 0].

def binary(genre_list):
    binaryList = []
    for genre in unique_genres:
        if genre in genre_list:
            binaryList.append(1)
        else:
            binaryList.append(0)
    return binaryList

data['genres_bin'] = data['genres'].apply(lambda x: binary(x))
data['genres_bin'].head()

We will follow the same approach for the remaining columns: cast, director, production_companies, and keywords.

Step 5: Working with the Cast Column

We begin by cleaning the cast column to obtain the cast list.

data['cast'] = data['cast'].str.strip('[]').str.replace(" ", "").str.replace("'", "").str.replace('"', "").str.split(",")

Next we generate a series that stores the cast names and their appearance counts across the movies. We select the top 15.

# Removing blank entries from the list
def remove_space(list1, item):
    res = [i for i in list1 if i != item]
    return res

list1 = list()
for i in data['cast']:
    list1.extend(i)

list1 = remove_space(list1, "")
series = pd.Series(list1).value_counts()[:15].sort_values(ascending=True)

Then we plot a bar chart showing the top 15 actors with the most appearances, to gauge movie popularity by actor.

fig, ax = plt.subplots(figsize=(8, 5))
series.plot.barh(width=0.8, color="#335896")
for i, v in enumerate(series.values):
    ax.text(v - 3, i - 0.2, str(v), fontweight='bold', fontsize='medium', color="white")

plt.tick_params(
    axis="x", which="both", bottom=False, top=False, labelbottom=False
)
plt.title("Actors with Highest Appearance")
plt.tight_layout()

One thing to ask is: do we really need to give preference to the entire cast? When I first created the list it had almost 50k+ values. Do we need to consider them all? The answer is no. We can simply pick the top 4 cast members for each movie.

Now, how do we determine which actors contributed the most? Luckily, the order of the stored values is based on importance, so we simply slice the first 4 values from the cast list of each movie.

Then, as in the step above, we do one-hot label encoding to determine which actor acted in which movie.

for i, j in zip(data['cast'], data.index):
    list2 = []
    list2 = i[:4]
    data.loc[j, 'cast'] = str(list2)
data['cast'] = data['cast'].str.strip('[]').str.replace(' ', '').str.replace("'", '').str.split(",")
for i, j in zip(data['cast'], data.index):
    list2 = []
    list2 = i
    list2.sort()
    data.loc[j, 'cast'] = str(list2)

data['cast'] = data['cast'].str.strip('[]').str.replace(' ', '').str.replace("'", '').str.split(",")
castlist = []

for index, row in data.iterrows():
    cast = row['cast']
    for i in cast:
        if i not in castlist:
            castlist.append(i)

def binary(cast_list):
    binaryList = list()
    for cast in castlist:
        if cast in cast_list:
            binaryList.append(1)
        else:
            binaryList.append(0)
    return binaryList

data['cast_bin'] = data['cast'].apply(lambda x: binary(x))
data['cast_bin'].head()

Step 6: Working with the Director Column

Now we work with the director column by creating a list of all the directors and the number of movies they have directed.

def xstr(director):
    if director is None:
        return ''
    return str(director)

data['director'] = data['director'].apply(xstr)
list1 = list()

for x in data['director']:
    list1.append(x)

director_list = list(pd.Series(list1).value_counts().index)
series = pd.Series(list1).value_counts()[:10][1:].sort_values(ascending=True)

Creating a bar plot for the same.

fig, ax = plt.subplots(figsize=(7, 4))
series.plot.barh(width=0.8, color="#335896")
for i, v in enumerate(series.values):
    ax.text(v - 1.5, i - 0.2, str(v), fontweight='bold', fontsize='large', color="white")

plt.tick_params(axis="x", which="both", bottom=False, top=False, labelbottom=False)
plt.title("Directors with Highest Movies")
plt.tight_layout()

Creating a director_bin column to store the binary list.

def binary(x):
    binaryList = []
    for director in director_list:
        if x == director:
            binaryList.append(1)
        else:
            binaryList.append(0)
    return binaryList

data['director_bin'] = data['director'].apply(lambda x: binary(x))

We have worked with the production_companies and production_countries columns in the same way.

Step 7: Working with the Keywords Column

We'll handle the keywords column a little differently, since it is a very important attribute: it helps determine which two movies are related to each other. For example, movies like "Avengers" and "Ant-Man" may share keywords like superheroes or Marvel.

To analyze the keywords, we'll make a word cloud to get a better intuition:

from wordcloud import WordCloud, STOPWORDS
import nltk
from nltk.corpus import stopwords

stop_words = set(stopwords.words("english"))
stop_words.update(('', ' ', ',', '.', '/', '"', "'", '<', '…', '(', ')', '*', '&', '^', '%', '$', '#', '@', '!', '[', ']'))
words = data['keywords'].dropna().astype('str').apply(lambda x: nltk.word_tokenize(x))
word = []

for i in words:
    for j in i:
        if j not in stop_words:
            word.append(j.strip("'"))

wc = WordCloud(stopwords=stop_words, max_words=2000, max_font_size=40, height=500, width=500)
wc.generate(" ".join(word))
plt.imshow(wc)
plt.axis("off")
fig = plt.gcf()
fig.set_size_inches(12, 8)

Now we will create the keywords_bin column as follows:

data['keywords'] = data['keywords'].str.strip('[]').str.replace(' ', '').str.replace("'", '').str.replace('"', '')
data['keywords'] = data['keywords'].str.split(',')

for i, j in zip(data['keywords'], data.index):
    list2 = []
    list2 = i
    data.loc[j, 'keywords'] = str(list2)

data['keywords'] = data['keywords'].str.strip('[]').str.replace(' ', '').str.replace("'", '')
data['keywords'] = data['keywords'].str.split(',')

for i, j in zip(data['keywords'], data.index):
    list2 = []
    list2 = i
    list2.sort()
    data.loc[j, 'keywords'] = str(list2)
data['keywords'] = data['keywords'].str.strip('[]').str.replace(' ', '').str.replace("'", '')
data['keywords'] = data['keywords'].str.split(',')

words_list = []
for index, row in data.iterrows():
    keywords = row["keywords"]
    for keyword in keywords:
        if keyword not in words_list:
            words_list.append(keyword)

def binary(keyword_list):
    binaryList = []
    for word in words_list:
        if word in keyword_list:
            binaryList.append(1)
        else:
            binaryList.append(0)
    return binaryList

data['keywords_bin'] = data['keywords'].apply(lambda x: binary(x))

Step 8: Dropping Records

In this step we filter our data by dropping the records where vote_average or runtime is 0, for better prediction and analysis.

data = data[data['vote_average'] != 0.0]
data = data[data['runtime'] != 0.0]
data.head(2)

Step 9: Finding the Cosine Similarity

To find the similarity between movies we will use cosine similarity. Let's briefly understand how it works.

Suppose you have two vectors in space. If the angle between the vectors is 0 degrees, the two vectors are similar to each other, since cos(0) is 1. If the angle between them is 90 degrees, the vectors are orthogonal, and thus different from each other, since cos(90) is 0.

    Cosine Similarity
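A quick numeric check with toy vectors (not from the dataset) shows how this behaves in scipy. Note that scipy's spatial.distance.cosine returns the cosine distance, 1 − cos(θ), so identical vectors give 0 and orthogonal vectors give 1:

```python
from scipy import spatial

a = [1, 0, 1, 0]
b = [1, 0, 1, 0]   # identical to a: angle 0 degrees, distance 0
c = [0, 1, 0, 1]   # orthogonal to a: angle 90 degrees, distance 1

print(spatial.distance.cosine(a, b))  # 0.0
print(spatial.distance.cosine(a, c))  # 1.0
```

This is why, in the code below, a lower score means two movies are more similar.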

Let's see how to implement this in code:

from scipy import spatial

def similarity(movie_id1, movie_id2):
    a = data.iloc[movie_id1]
    b = data.iloc[movie_id2]

    genreA = a['genres_bin']
    genreB = b['genres_bin']
    genre_score = spatial.distance.cosine(genreA, genreB)
    # print(f"Genre Score: {genre_score}")

    scoreA = a['cast_bin']
    scoreB = b['cast_bin']
    cast_score = spatial.distance.cosine(scoreA, scoreB)
    # print(f"Cast Score: {cast_score}")

    dirA = a['director_bin']
    dirB = b['director_bin']
    direct_score = spatial.distance.cosine(dirA, dirB)
    # print(f"Director Score: {direct_score}")

    # prodA = a['prod_companies_bin']
    # prodB = b['prod_companies_bin']
    # prod_score = spatial.distance.cosine(prodA, prodB)

    wordA = a['keywords_bin']
    wordB = b['keywords_bin']
    keyword_score = spatial.distance.cosine(wordA, wordB)
    # print(f"Keyword Score: {keyword_score}")

    return genre_score + cast_score + direct_score + keyword_score

Now we measure the similarity between two movies.

id1 = 95
id2 = 96
similarity(id1, id2)

The two movies are quite different, so the distance score is high.

Step 10: Predicting the Rating

Now that most of the work is done, we'll implement a method to predict the rating of a base movie and recommend other movies similar to it.

In this method, similarity() plays a pivotal role: we calculate the distance between the base movie and all other movies and take the top 10 movies with the lowest distance. We then average the ratings of these 10 movies to obtain the predicted rating of the base movie.

Here the bins come into play. We created bins of the important features so that we can calculate the similarity between movies. We know that features like director and cast play a crucial role in a movie's success, and thus a user who prefers Christopher Nolan's movies may also prefer David Fincher's if they like to work with their favorite actors.

Using this idea, we'll build the score predictor.

new_id = list(range(0, data.shape[0]))
data['new_id'] = new_id
data.columns
cols = ['new_id', 'genres', 'original_title', 'director', 'vote_average', 'cast', 'genres_bin',
        'cast_bin', 'director_bin', 'prod_companies_bin', 'keywords_bin']
data = data[cols]
import operator

def predict_score(title):
    new_movie = data[data['original_title'].str.contains(title, case=False, na=False)].iloc[0].to_frame().T
    print(f"\nSelected Movie: {new_movie.original_title.values[0]}")

    def getNeighbors(base_movie, K):
        distances = []
        for index, row in data.iterrows():
            if row['new_id'] != base_movie['new_id'].values[0]:
                dist = similarity(row['new_id'], base_movie['new_id'].values[0])
                distances.append((row['new_id'], dist))
        distances.sort(key=operator.itemgetter(1))
        return distances[:K]  # return the top K nearest neighbors

    K = 10
    avgRating = 0
    neighbors = getNeighbors(new_movie, K)
    print("\nRecommended Movies:\n")

    for neighbor in neighbors:
        rating = data.iloc[neighbor[0]][4]  # vote_average column
        avgRating += float(rating)
        movie_title = data.iloc[neighbor[0]][2]
        genres = str(data.iloc[neighbor[0]][1]).strip('[]').replace(' ', '')
        print(f"{movie_title} | Genres: {genres} | Rating: {rating}")
    print("\n")

    avgRating /= K
    actual_rating = float(new_movie['vote_average'].values[0])
    print(f"The Predicted Rating for {new_movie['original_title'].values[0]} is {round(avgRating, 2)}")
    print(f"The Actual Rating for {new_movie['original_title'].values[0]} is {round(actual_rating, 2)}")

Now simply call the method with your favorite movie title to get recommendations for the top 10 most similar movies.

predict_score("Interstellar")

Thus, we have completed the movie recommendation system and rating prediction using the K-Nearest Neighbors algorithm.

Check out the detailed code here:

    https://www.kaggle.com/code/akankshagupta970/movie-recommendation-using-knn


