    Demystifying Cosine Similarity | Towards Data Science

By Team_AIBS News | August 8, 2025 | 8 Mins Read


Cosine similarity is a commonly used metric for operationalizing tasks such as semantic search and document comparison in the field of natural language processing (NLP). Introductory NLP courses often provide only a high-level justification for using cosine similarity in such tasks (as opposed to, say, Euclidean distance) without explaining the underlying mathematics, leaving many data scientists with a rather vague understanding of the subject matter. To address this gap, the following article lays out the mathematical intuition behind the cosine similarity metric and shows how this can help us interpret results in practice, with hands-on examples in Python.

Note: All figures and formulas in the following sections have been created by the author of this article.

Mathematical Intuition

The cosine similarity metric is based on the cosine function that readers may recall from high school math. The cosine function exhibits a repeating wavelike pattern, a full cycle of which is depicted in Figure 1 below for the range 0 <= x <= 2*pi. The Python code used to produce the figure is also included for reference.

import numpy as np
import matplotlib.pyplot as plt

# Define the x range from 0 to 2*pi
x = np.linspace(0, 2 * np.pi, 500)
y = np.cos(x)

# Create the plot
plt.figure(figsize=(8, 4))
plt.plot(x, y, label='cos(x)', color='blue')

# Add notches on the x-axis at 0, pi/2, pi, 3*pi/2, and 2*pi
notch_positions = [0, np.pi/2, np.pi, 3*np.pi/2, 2*np.pi]
notch_labels = ['0', 'pi/2', 'pi', '3*pi/2', '2*pi']
plt.xticks(ticks=notch_positions, labels=notch_labels)

# Add custom horizontal gridlines only at y = -1, 0, 1
for y_val in [-1, 0, 1]:
    plt.axhline(y=y_val, color='grey', linestyle='--', linewidth=0.5)

# Add vertical gridlines at the specified x-values
for x_val in notch_positions:
    plt.axvline(x=x_val, color='grey', linestyle='--', linewidth=0.5)

# Label the axes
plt.xlabel("x")
plt.ylabel("cos(x)")

# Final layout and display
plt.tight_layout()
plt.show()
Figure 1: Cosine Function

The function parameter x denotes an angle in radians (e.g., the angle between two vectors in an embedding space), where pi/2, pi, 3*pi/2, and 2*pi correspond to 90, 180, 270, and 360 degrees, respectively.

To understand why the cosine function can serve as a useful basis for designing a vector similarity metric, notice that the basic cosine function, without any functional transformations as shown in Figure 1, has maxima at x = 2*a*pi, minima at x = (2*b + 1)*pi, and roots at x = (c + 1/2)*pi for some integers a, b, and c. In other words, if x denotes the angle between two vectors, cos(x) returns the largest value when the vectors point in the same direction, the smallest value when the vectors point in opposite directions, and 0 when the vectors are orthogonal to each other.
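
As a quick numerical check of these properties (an illustrative addition, not part of the original figure code), we can evaluate the cosine at a few representative angles with NumPy:

import numpy as np

# Angles corresponding to same direction, orthogonality, and opposite direction
for label, angle in [("0 (same direction)", 0.0),
                     ("pi/2 (orthogonal)", np.pi / 2),
                     ("pi (opposite direction)", np.pi)]:
    # cos(pi/2) evaluates to roughly 6e-17 rather than exactly 0 due to floating-point rounding
    print(f"cos({label}) = {np.cos(angle):+.2f}")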

This behavior of the cosine function neatly captures the interplay between two key concepts in NLP: semantic overlap (conveying how much meaning is shared between two texts) and semantic polarity (capturing the oppositeness of meaning in texts). For example, the texts "I liked this movie" and "I enjoyed this film" would have high semantic overlap (they express essentially the same meaning despite using different words) and low semantic polarity (they do not express opposite meanings). Now, if the embedding vectors for two words happen to encode both semantic overlap and polarity, then we would expect synonyms to have cosine similarity approaching 1, antonyms to have cosine similarity approaching -1, and unrelated words to have cosine similarity approaching 0.

In practice, we will usually not know the angle x directly. Instead, we must derive the cosine value from the vectors themselves. Given two vectors U and V, each with n elements, the cosine of the angle between these vectors (which is exactly the cosine similarity metric) is computed as the dot product of the vectors divided by the product of the vector magnitudes:

cosine_similarity(U, V) = cos(x) = (sum of U_i * V_i for i = 1, ..., n) / (sqrt(sum of U_i^2) * sqrt(sum of V_i^2))

The above formula for the cosine of the angle between two vectors can be derived from the so-called Cosine Rule, as demonstrated in the segment between minutes 12 and 18 of this video:

A neat proof of the Cosine Rule itself is presented in this video:

The following Python implementation of cosine similarity explicitly operationalizes the formula presented above, without relying on any black-box, third-party packages:

import math

def cosine_similarity(U, V):
    if len(U) != len(V):
        raise ValueError("Vectors must be of the same length.")

    # Compute dot product and magnitudes
    dot_product = sum(u * v for u, v in zip(U, V))
    magnitude_U = math.sqrt(sum(u ** 2 for u in U))
    magnitude_V = math.sqrt(sum(v ** 2 for v in V))

    # Zero-vector handling to avoid division by zero
    if magnitude_U == 0 or magnitude_V == 0:
        raise ValueError("Cannot compute cosine similarity for zero-magnitude vectors.")

    return dot_product / (magnitude_U * magnitude_V)
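
As a quick sanity check (an illustrative addition rather than part of the original article), the function can be exercised on parallel, opposite, and orthogonal vectors:

# Parallel vectors (same direction, different magnitudes): approximately 1.0
print(cosine_similarity([1, 2, 3], [2, 4, 6]))

# Opposite vectors: approximately -1.0
print(cosine_similarity([1, 2, 3], [-1, -2, -3]))

# Orthogonal vectors: exactly 0.0
print(cosine_similarity([1, 0], [0, 1]))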

Readers can refer to this article for a more efficient Python implementation of the cosine distance metric (defined as 1 minus cosine similarity) using the NumPy and SciPy packages.

Finally, it is worth comparing the mathematical intuition of cosine similarity (or distance) with that of Euclidean distance, which measures the linear distance between two vectors and can also serve as a vector similarity metric. In particular, the lower the Euclidean distance between two vectors, the higher their semantic similarity is likely to be. The Euclidean distance between two vectors U and V (each of length n) can be computed using the following formula:

euclidean_distance(U, V) = sqrt(sum of (U_i - V_i)^2 for i = 1, ..., n)

Below is the corresponding Python implementation:

import math

def euclidean_distance(U, V):
    if len(U) != len(V):
        raise ValueError("Vectors must be of the same length.")

    # Compute the sum of squared differences
    sum_squared_diff = sum((u - v) ** 2 for u, v in zip(U, V))

    # Take the square root of the sum
    return math.sqrt(sum_squared_diff)

Notice that, since the elementwise differences in the Euclidean distance formula are squared, the resulting metric will always be a non-negative number: zero if the vectors are identical, positive otherwise. In the NLP context, this implies that Euclidean distance will not reflect semantic polarity in quite the same way as cosine distance does. Moreover, as long as two vectors point in the same direction, the cosine of the angle between them will remain the same regardless of the vector magnitudes. By contrast, the Euclidean distance metric is affected by differences in vector magnitude, which may lead to misleading interpretations in practice (e.g., two texts of different lengths may yield a high Euclidean distance despite being semantically similar). As such, cosine similarity is the preferred metric in many NLP scenarios, where determining vector (or semantic) directionality is the primary concern.
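
To make the contrast concrete, the following small example (added here for illustration, reusing the two functions defined above) shows how the two metrics respond to a pure difference in magnitude:

# Two vectors pointing in exactly the same direction but with different magnitudes,
# standing in for embeddings of two texts with similar meaning but different scales
U = [1.0, 1.0]
V = [10.0, 10.0]

print(cosine_similarity(U, V))   # approximately 1.0: only direction matters
print(euclidean_distance(U, V))  # approximately 12.73: the magnitude gap dominates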

Theory versus Practice

In a practical NLP scenario, the interpretation of cosine similarity hinges on the extent to which the vector embedding encodes polarity in addition to semantic overlap. In the following hands-on example, we will compare the similarity between pairs of words using a pretrained embedding model that does not encode polarity (all-MiniLM-L6-v2) and one that does (distilbert-base-uncased-finetuned-sst-2-english). We will also use more efficient implementations of cosine similarity and Euclidean distance by leveraging functions provided by the SciPy package.

from scipy.spatial.distance import cosine as cosine_distance
from sentence_transformers import SentenceTransformer
from transformers import AutoTokenizer, AutoModel
import torch

# Words to embed
words = ["movie", "film", "good", "bad", "spoon", "car"]

# Load pre-trained embedding models from Hugging Face
model_1 = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
model_2_name = "distilbert-base-uncased-finetuned-sst-2-english"
model_2_tokenizer = AutoTokenizer.from_pretrained(model_2_name)
model_2 = AutoModel.from_pretrained(model_2_name)

# Generate embeddings for model 1
embeddings_1 = dict(zip(words, model_1.encode(words)))

# Generate embeddings for model 2
inputs = model_2_tokenizer(words, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model_2(**inputs)
    embedding_vectors_model_2 = outputs.last_hidden_state.mean(dim=1)
embeddings_2 = {word: vector for word, vector in zip(words, embedding_vectors_model_2)}

# Compute and print cosine similarity (1 - cosine distance) for both embedding models
print("Cosine similarity for embedding model 1:")
print("movie", "\t", "film", "\t", 1 - cosine_distance(embeddings_1["movie"], embeddings_1["film"]))
print("good", "\t", "bad", "\t", 1 - cosine_distance(embeddings_1["good"], embeddings_1["bad"]))
print("spoon", "\t", "car", "\t", 1 - cosine_distance(embeddings_1["spoon"], embeddings_1["car"]))
print()

print("Cosine similarity for embedding model 2:")
print("movie", "\t", "film", "\t", 1 - cosine_distance(embeddings_2["movie"], embeddings_2["film"]))
print("good", "\t", "bad", "\t", 1 - cosine_distance(embeddings_2["good"], embeddings_2["bad"]))
print("spoon", "\t", "car", "\t", 1 - cosine_distance(embeddings_2["spoon"], embeddings_2["car"]))
print()

    Output:

Cosine similarity for embedding model 1:
movie 	 film 	 0.8426464702276286
good 	 bad 	 0.5871497042685934
spoon 	 car 	 0.22919675707817078

Cosine similarity for embedding model 2:
movie 	 film 	 0.9638281550070811
good 	 bad 	 -0.3416433451550165
spoon 	 car 	 0.5418748837234599

The words "movie" and "film", which are often used as synonyms, have cosine similarity close to 1, suggesting high semantic overlap as expected. The words "good" and "bad" are antonyms, and we see this reflected in the negative cosine similarity result when using the second embedding model, which is known to encode semantic polarity. Finally, the words "spoon" and "car" are semantically unrelated, and the corresponding orthogonality of their vector embeddings is indicated by their cosine similarity results being closer to zero than for "movie" and "film".

    The Wrap

The cosine similarity between two vectors is based on the cosine of the angle they form and, unlike metrics such as Euclidean distance, is not sensitive to differences in vector magnitudes. In theory, cosine similarity should be close to 1 if the vectors point in the same direction (indicating high similarity), close to -1 if the vectors point in opposite directions (indicating high dissimilarity), and close to 0 if the vectors are orthogonal (indicating unrelatedness). However, the exact interpretation of cosine similarity in a given NLP scenario depends on the nature of the embedding model used to vectorize the textual data (e.g., whether the embedding model encodes polarity in addition to semantic overlap).


