
Sesame Speech Model: How This Viral AI Model Generates Human-Like Speech

By Team_AIBS News · April 12, 2025


Sesame recently revealed a demo of their newest speech-to-speech model: a conversational AI agent that is genuinely good at talking. It gives relevant answers, speaks with expression, and honestly, it is just very fun and interactive to play with.

Note that a technical paper is not out yet, but they do have a short blog post that provides plenty of information about the techniques they used and the earlier algorithms they built upon.

Fortunately, they provided enough information for me to write this article and make a YouTube video out of it. Read on!

Training a Conversational Speech Model

Sesame is a Conversational Speech Model, or CSM. It takes both text and audio as input and generates speech as audio. While they haven't revealed their training data sources in the articles, we can still take a solid guess. The blog post heavily cites another CSM, 2024's Moshi, and fortunately, the creators of Moshi did reveal their data sources in their paper. Moshi uses 7 million hours of unsupervised speech data, 170 hours of natural and scripted conversations (for multi-stream training), and 2000 more hours of telephone conversations (the Fisher dataset).


    Sesame builds upon the Moshi Paper (2024)

But what does it really take to generate audio?

In raw form, audio is just a long sequence of amplitude values, a waveform. For example, if you sample audio at 24 kHz, you capture 24,000 float values every second.

There are 24,000 values here to represent 1 second of speech! (Image generated by author)

Of course, it is quite resource-intensive to process 24,000 float values for just one second of data, especially because transformer computations scale quadratically with sequence length. It would be great if we could compress this signal and reduce the number of samples required to process the audio.
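To make the scale concrete, here is a minimal NumPy sketch (illustrative only, not from the Sesame or Mimi code) of what one second of 24 kHz audio looks like as raw floats:

```python
import numpy as np

SAMPLE_RATE = 24_000   # 24 kHz, the rate Mimi / Sesame work with

# One second of a 440 Hz sine tone: 24,000 amplitude values.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
waveform = 0.5 * np.sin(2 * np.pi * 440.0 * t).astype(np.float32)

print(waveform.shape)   # (24000,) -> 24,000 floats for a single second of audio
```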

We will take a deep dive into the Mimi encoder and especially Residual Vector Quantizers (RVQ), which are the backbone of audio/speech modeling in deep learning today. We will end the article by learning how Sesame generates audio using its special dual-transformer architecture.

    Preprocessing audio

Compression and feature extraction are where convolution helps us. Sesame uses the Mimi speech encoder to process audio. Mimi was introduced in the aforementioned Moshi paper as well. Mimi is a self-supervised audio encoder-decoder model that first converts audio waveforms into discrete "latent" tokens and then reconstructs the original signal. Sesame only uses the encoder section of Mimi to tokenize the input audio. Let's learn how.

Mimi takes the raw speech waveform at 24 kHz and passes it through several strided convolution layers to downsample the signal, with stride factors of 4, 5, 6, 8, and 2. This means the first CNN block downsamples the audio by 4x, then 5x, then 6x, and so on. In the end, it downsamples by a factor of 1920, reducing the signal to just 12.5 frames per second.

The convolution blocks also project the original float values to an embedding dimension of 512. Each embedding aggregates the local features of the original 1D waveform. One second of audio is now represented as around 12 vectors of size 512. This way, Mimi reduces the sequence length from 24,000 to just 12.5 per second and converts the signal into dense continuous vectors.

Before applying any quantization, the Mimi encoder downsamples the input 24 kHz audio by a factor of 1920 and embeds it into 512 dimensions. In other words, you get 12.5 frames per second, with each frame being a 512-dimensional vector. (Image from author's video)
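Here is a rough PyTorch sketch of that convolutional front end. The strides and the 1920x reduction come from the description above; the kernel sizes, channel widths, and activation are my own illustrative assumptions, not Mimi's actual implementation:

```python
import torch
import torch.nn as nn

# Strided 1D convolutions with strides 4, 5, 6, 8, 2 downsample 24 kHz audio by
# 4*5*6*8*2 = 1920x while projecting each frame to a 512-dimensional embedding.
strides = [4, 5, 6, 8, 2]
channels = [1, 64, 128, 256, 512, 512]   # channel widths are illustrative guesses

layers = []
for s, c_in, c_out in zip(strides, channels[:-1], channels[1:]):
    layers += [nn.Conv1d(c_in, c_out, kernel_size=2 * s, stride=s, padding=s // 2),
               nn.ELU()]
encoder = nn.Sequential(*layers)

audio = torch.randn(1, 1, 24_000)   # (batch, channels, samples): 1 second at 24 kHz
frames = encoder(audio)             # -> roughly (1, 512, 12), i.e. ~12.5 frames/s
print(frames.shape)
```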

    What’s Audio Quantization?

Given the continuous embeddings obtained after the convolution layers, we want to tokenize the input speech. If we can represent speech as a sequence of tokens, we can apply standard language-modeling transformers to train generative models.

Mimi uses a Residual Vector Quantizer, or RVQ, tokenizer to achieve this. We will talk about the residual part soon, but first, let's look at what a simple vanilla vector quantizer does.

    Vector Quantization

The idea behind Vector Quantization is simple: you train a codebook, which is a collection of, say, 1000 random vector codes, all of size 512 (the same as your embedding dimension).

A vanilla Vector Quantizer. A codebook of embeddings is trained. Given an input embedding, we map/quantize it to the nearest codebook entry. (Screenshot from author's video)

Then, given an input vector, we map it to the closest vector in our codebook, basically snapping a point to its nearest cluster center. This means we have effectively created a fixed vocabulary of tokens to represent each audio frame, because whatever the input frame embedding may be, we represent it with the nearest cluster centroid. If you want to learn more about Vector Quantization, check out my video on this topic, where I go much deeper into it.

More about Vector Quantization! (Video by author)
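In code, the snapping step is just a nearest-neighbour lookup. A minimal sketch (the codebook size and dimensions come from the example above; everything else is illustrative):

```python
import torch

# A codebook of 1000 codes, each 512-dimensional. Quantization snaps every
# input embedding to its nearest codebook entry and records that entry's index.
codebook = torch.randn(1000, 512)

def quantize(frames: torch.Tensor):
    """frames: (num_frames, 512) -> (token ids, quantized vectors)."""
    dists = torch.cdist(frames, codebook)   # pairwise L2 distances, (num_frames, 1000)
    ids = dists.argmin(dim=-1)              # nearest centroid per frame
    return ids, codebook[ids]

frames = torch.randn(13, 512)               # ~1 second of audio at 12.5 frames/s
ids, quantized = quantize(frames)
print(ids.shape, quantized.shape)           # 13 discrete tokens, 13 snapped vectors
```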

    Residual Vector Quantization

The problem with simple vector quantization is that the loss of information may be too high, because we are mapping each vector to its cluster's centroid. This "snap" is rarely perfect, so there is always an error between the original embedding and the nearest codebook entry.

The big idea of Residual Vector Quantization is that it doesn't stop at just one codebook. Instead, it uses multiple codebooks to represent the input vector.

1. First, you quantize the original vector using the first codebook.
2. Then, you subtract that centroid from your original vector. What you are left with is the residual, the error that wasn't captured in the first quantization.
3. Now take this residual and quantize it again, using a second codebook full of brand-new code vectors, again by snapping it to the nearest centroid.
4. Subtract that too, and you get an even smaller residual. Quantize it again with a third codebook... and you can keep doing this for as many codebooks as you want.
Residual Vector Quantizers (RVQ) hierarchically encode the input embeddings by using a new codebook and VQ layer to represent the previous codebook's error. (Illustration by the author)

Each step hierarchically captures a little more detail that was missed in the previous round. If you repeat this for, say, N codebooks, you get a collection of N discrete tokens from each stage of quantization to represent one audio frame, as the short sketch below shows.
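Here is a minimal sketch of that quantize-subtract loop (sizes are arbitrary placeholders, and a real RVQ trains its codebooks rather than using random ones):

```python
import torch

N_CODEBOOKS, CODES, DIM = 8, 1000, 512
codebooks = [torch.randn(CODES, DIM) for _ in range(N_CODEBOOKS)]  # normally trained

def rvq_encode(frame: torch.Tensor) -> list[int]:
    """frame: (DIM,) -> N_CODEBOOKS token ids, one per quantization stage."""
    residual = frame.clone()
    tokens = []
    for cb in codebooks:
        idx = torch.cdist(residual.unsqueeze(0), cb).argmin().item()  # nearest code
        tokens.append(idx)
        residual = residual - cb[idx]   # keep only what this stage failed to capture
    return tokens

frame = torch.randn(DIM)
print(rvq_encode(frame))                # N discrete tokens describing one audio frame
```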

The nice thing about RVQs is that they are designed to have a high inductive bias towards capturing the most essential content in the very first quantizer. The subsequent quantizers learn more and more fine-grained features.

If you are familiar with PCA, you can think of the first codebook as containing the primary principal components, capturing the most significant information. The subsequent codebooks represent higher-order components, containing information that adds finer details.

Residual Vector Quantizers (RVQ) use multiple codebooks to encode the input vector, one entry from each codebook. (Screenshot from author's video)

    Acoustic vs Semantic Codebooks

Since Mimi is trained on the task of audio reconstruction, the encoder compresses the signal into the discretized latent space and the decoder reconstructs it back from that latent space. When optimizing for this task, the RVQ codebooks learn to capture the essential acoustic content of the input audio inside the compressed latent space.

Mimi also separately trains a single codebook (vanilla VQ) that only focuses on embedding the semantic content of the audio. This is why Mimi is called a split-RVQ tokenizer: it divides the quantization process into two independent parallel paths, one for semantic information and another for acoustic information.

The Mimi Architecture (Source: Moshi paper) License: Free

To train semantic representations, Mimi used knowledge distillation with an existing speech model called WavLM as a semantic teacher. Basically, Mimi introduces an additional loss function that decreases the cosine distance between the semantic RVQ code and the WavLM-generated embedding.
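A minimal sketch of that distillation loss, assuming the semantic code and the WavLM teacher embedding have already been projected to the same shape (the shapes and projection are my assumptions, not Mimi's published details):

```python
import torch
import torch.nn.functional as F

def semantic_distill_loss(semantic_code: torch.Tensor,
                          teacher_embedding: torch.Tensor) -> torch.Tensor:
    """Cosine distance (1 - cosine similarity) between the semantic codebook's
    output and the WavLM teacher embedding, averaged over frames."""
    cos_sim = F.cosine_similarity(semantic_code, teacher_embedding, dim=-1)
    return (1.0 - cos_sim).mean()

semantic_code = torch.randn(13, 512, requires_grad=True)   # semantic VQ output (assumed shape)
teacher_embedding = torch.randn(13, 512)                   # WavLM features, already projected
loss = semantic_distill_loss(semantic_code, teacher_embedding)
loss.backward()
```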


    Audio Decoder

Given a conversation containing text and audio, we first convert it into a sequence of token embeddings using the text and audio tokenizers. This token sequence is then fed into a transformer model as a time series. In the blog post, this model is called the Autoregressive Backbone Transformer. Its job is to process this time series and output the "zeroth" codebook token.

A lighter-weight transformer called the audio decoder then reconstructs the remaining codebook tokens conditioned on this zeroth code generated by the backbone transformer. Note that the zeroth code already contains plenty of information about the history of the conversation, since the backbone transformer has visibility of the entire past sequence. The lightweight audio decoder only operates on the zeroth token and generates the other N-1 codes. These codes are produced by N-1 distinct linear layers that output the probability of choosing each code from its corresponding codebook.

You can think of this process as predicting a text token from the vocabulary in a text-only LLM. The difference is that a text-based LLM has a single vocabulary, whereas the RVQ tokenizer has multiple vocabularies in the form of the N codebooks, so you need to train a separate linear layer to model the codes for each one.
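A sketch of those per-codebook output heads (the model dimension, number of codebooks, and vocabulary size are placeholders, and greedy argmax stands in for whatever sampling Sesame actually uses):

```python
import torch
import torch.nn as nn

N_CODEBOOKS, CODES, D_MODEL = 8, 1024, 512   # placeholder sizes

# One linear head per remaining codebook: each maps the decoder's hidden state
# to logits over that codebook's vocabulary.
heads = nn.ModuleList(nn.Linear(D_MODEL, CODES) for _ in range(N_CODEBOOKS - 1))

decoder_state = torch.randn(1, D_MODEL)       # hidden state for one frame,
                                              # conditioned on the zeroth code
codes = [head(decoder_state).argmax(dim=-1)   # greedy pick from each codebook
         for head in heads]
print(len(codes))                             # N-1 predicted codewords for this frame
```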

The Sesame Architecture (Illustration by the author)

Finally, after all the codewords are generated, we aggregate them to form the combined continuous audio embedding. The final task is to convert this audio embedding back into a waveform. For this, we apply transposed convolutional layers to upsample the embedding from 12.5 Hz back to a 24 kHz waveform, basically reversing the transforms we applied during audio preprocessing.
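A rough sketch of that upsampling path, mirroring the encoder sketch from earlier with the strides reversed (kernel sizes and channel widths are again illustrative assumptions, not Mimi's actual decoder):

```python
import torch
import torch.nn as nn

# Transposed 1D convolutions undo the encoder's 1920x downsampling, turning
# 12.5 Hz frame embeddings back into an (approximately) 24 kHz waveform.
strides = [2, 8, 6, 5, 4]                     # encoder strides in reverse
channels = [512, 512, 256, 128, 64, 1]

layers = []
for i, (s, c_in, c_out) in enumerate(zip(strides, channels[:-1], channels[1:])):
    layers.append(nn.ConvTranspose1d(c_in, c_out, kernel_size=2 * s, stride=s, padding=s // 2))
    if i < len(strides) - 1:
        layers.append(nn.ELU())               # no nonlinearity on the final waveform output
decoder = nn.Sequential(*layers)

frames = torch.randn(1, 512, 12)              # ~1 second of frame embeddings
waveform = decoder(frames)                    # -> roughly (1, 1, 24_000) samples
print(waveform.shape)
```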

In Summary

Check out the video accompanying this article! (Video by author)

So, here is the overall summary of the Sesame model in a few bullet points.

1. Sesame is built on a multimodal Conversational Speech Model, or CSM.
2. Text and audio are tokenized together to form a sequence of tokens and fed into the backbone transformer, which autoregressively processes the sequence.
3. While the text is processed like any other text-based LLM, the audio is processed directly from its waveform representation. Sesame uses the Mimi encoder to convert the waveform into latent codes with a split-RVQ tokenizer.
4. The multimodal backbone transformer consumes the sequence of tokens and predicts the next zeroth codeword.
5. Another lightweight transformer, called the Audio Decoder, predicts the remaining codewords from the zeroth codeword.
6. The final audio frame representation is formed by combining all the generated codewords and upsampling it back to the waveform representation.

Thanks for reading!

References and must-read papers

    Check out my ML YouTube Channel

    Sesame Blogpost and Demo

Related papers:
Moshi: https://arxiv.org/abs/2410.00037
SoundStream: https://arxiv.org/abs/2107.03312
HuBERT: https://arxiv.org/abs/2106.07447
SpeechTokenizer: https://arxiv.org/abs/2308.16692




