    Hitchhiker’s Guide to RAG: From Tiny Files to Tolstoy with OpenAI’s API and LangChain

    By Team_AIBS News · July 11, 2025


    In my previous post, I walked you through setting up a very simple RAG pipeline in Python, using OpenAI’s API, LangChain, and your local files. In that post, I cover the very basics of creating embeddings from your local files with LangChain, storing them in a vector database with FAISS, making API calls to OpenAI’s API, and ultimately generating responses relevant to your files. 🌟

    Image by author

    However, in that simple example, I only demonstrate how to use a tiny .txt file. In this post, I elaborate on how you can utilize larger files with your RAG pipeline by adding an extra step to the process: chunking.

    What about chunking?

    Chunking refers to the process of parsing a text into smaller pieces of text (chunks) that are then transformed into embeddings. This is crucial because it allows us to effectively process and create embeddings for larger files. All embedding models come with various limitations on the size of the text that is passed to them; I’ll get into more detail about these limitations in a moment. These limitations allow for better performance and low-latency responses. If the text we provide doesn’t meet these size limitations, it will get truncated or rejected.

    If we wanted to create a RAG pipeline reading from, say, Leo Tolstoy’s War and Peace (a rather large book), we wouldn’t be able to directly load it and transform it into a single embedding. Instead, we need to do the chunking first: create smaller chunks of text and create an embedding for each of them. With every chunk below the size limits of whatever embedding model we use, we can effectively transform any file into embeddings. So, a somewhat more realistic picture of a RAG pipeline would look as follows:

    Image by author

    There are several parameters we can use to further customize the chunking process and fit it to our specific needs. A key parameter is the chunk size, which specifies how large each chunk will be (in characters or in tokens). The trick here is that the chunks we create must be small enough to be processed within the size limitations of the embedding model, but at the same time large enough to contain meaningful information.

    For instance, let’s assume we want to process the following sentence from War and Peace, where Prince Andrew contemplates the battle:

    Image by author

    Let’s also assume we created the following (rather small) chunks:

    Image by author

    Then, if we were to ask something like “What does Prince Andrew mean by ‘all the same now’?”, we may not get a good answer, because the chunk “But isn’t it all the same now?” thought he. contains no context and is vague on its own. Even though it is similar to the question we ask and will likely be retrieved, the meaning is scattered across several chunks, so this one does not carry enough information to produce a relevant response. Therefore, selecting a chunk size appropriate to the type of documents we use in the RAG can largely influence the quality of the responses we get. In general, the content of a chunk should make sense to a human reading it with no other information, in order for it to also make sense to the model. Ultimately, there is a trade-off for the chunk size: chunks must be small enough to meet the embedding model’s size limitations, but large enough to preserve meaning.
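    To see the chunk size parameter in action, here is a minimal sketch using LangChain’s RecursiveCharacterTextSplitter (covered in more detail later in this post) on a short, made-up passage, not the book’s exact text; chunk_size is measured in characters here, and overlap is switched off so that only the size effect is visible.

    from langchain.text_splitter import RecursiveCharacterTextSplitter

    # illustrative passage, not the exact wording from the novel
    text = (
        "Prince Andrew lay on the grass and thought about the battle. "
        "But isn't it all the same now? thought he. "
        "Still, the question would not leave him in peace."
    )

    # chunk_size is an upper bound on each chunk's length, in characters
    splitter = RecursiveCharacterTextSplitter(chunk_size=60, chunk_overlap=0)
    for i, chunk in enumerate(splitter.split_text(text)):
        print(f"chunk {i} ({len(chunk)} chars): {chunk!r}")

    Each printed chunk stays under 60 characters and breaks roughly at word boundaries; raising or lowering chunk_size directly controls how much context each embedding will carry.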

    • • •

    Another important parameter is the chunk overlap, that is, how much overlap we want consecutive chunks to have with one another. For instance, in the War and Peace example, we would get something like the following chunks if we chose a chunk overlap of 5 characters.

    Image by author

    This is also an important decision we have to make, because:

    • Larger overlap means more calls and more tokens spent on embedding creation, which means more expensive and slower
    • Smaller overlap means a higher chance of losing relevant information at the chunk boundaries

    Choosing the right chunk overlap largely depends on the type of text we want to process. For example, a recipe book, where the language is simple and straightforward, most likely won’t require a particularly sophisticated chunking setup. On the flip side, a classic literature book like War and Peace, where the language is very complex and meaning is interconnected across different paragraphs and sections, will most likely require a more thoughtful approach to chunking in order for the RAG to produce meaningful results.
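    Here is the same made-up passage split again, this time with a non-zero chunk_overlap, as a minimal sketch of the overlap effect; the value is in characters, and the splitter aligns the overlap to word boundaries, so the shared stretch may be slightly shorter than requested.

    from langchain.text_splitter import RecursiveCharacterTextSplitter

    # the same illustrative passage as above
    text = (
        "Prince Andrew lay on the grass and thought about the battle. "
        "But isn't it all the same now? thought he. "
        "Still, the question would not leave him in peace."
    )

    # consecutive chunks now share up to roughly 20 characters of text
    splitter = RecursiveCharacterTextSplitter(chunk_size=60, chunk_overlap=20)
    for i, chunk in enumerate(splitter.split_text(text)):
        print(f"chunk {i}: {chunk!r}")

    Printing the chunks shows that a short stretch from the end of each chunk reappears at the start of the next; that repeated stretch is what keeps information sitting on a boundary retrievable.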

    • • •

    But what if all we need is a simpler RAG that looks up a couple of documents, each of which fits the size limitations of whatever embedding model we use in just one chunk? Do we still need the chunking step, or can we directly make one single embedding for the entire text? The short answer is that it’s always better to perform the chunking step, even for a knowledge base that does fit within the size limits. That’s because, as it turns out, when dealing with large documents we face the problem of getting lost in the middle: missing relevant information that is buried in large documents and their correspondingly large embeddings.

    What are these mysterious ‘size limitations’?

    In general, a request to an embedding model can include several chunks of text. There are several different kinds of limits we have to consider regarding the size of the text we create embeddings for and how it is processed. Each of these limits takes different values depending on the embedding model we use. More specifically, these are:

    • Chunk size, also called maximum tokens per input, or context window. This is the maximum size, in tokens, of each chunk. For instance, for OpenAI’s text-embedding-3-small embedding model, the chunk size limit is 8,191 tokens. If we provide a chunk that is larger than the chunk size limit, in most cases it will be silently truncated‼️ (an embedding is created, but only for the first part that fits within the limit), without any error being raised.
    • Number of chunks per request, also called number of inputs. There is also a limit on how many chunks can be included in each request. For instance, all of OpenAI’s embedding models have a limit of 2,048 inputs, that is, a maximum of 2,048 chunks per request.
    • Total tokens per request: there is also a limit on the total number of tokens across all chunks in a request. For all of OpenAI’s models, the maximum total number of tokens across all chunks in a single request is 300,000 tokens. (A quick way to check these numbers locally is sketched right after this list.)
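    Since all of these limits are expressed in tokens rather than characters, it is handy to count tokens locally before sending anything to the API. Below is a minimal sketch using the tiktoken library (assumed to be installed separately with pip install tiktoken) and the cl100k_base encoding, the tokenizer used by OpenAI’s embedding models; the limit value is the one quoted above.

    import tiktoken

    # cl100k_base is the tokenizer used by OpenAI's embedding models
    encoding = tiktoken.get_encoding("cl100k_base")

    MAX_TOKENS_PER_CHUNK = 8_191  # per-input limit for text-embedding-3-small

    def count_tokens(text: str) -> int:
        return len(encoding.encode(text))

    chunk = "But isn't it all the same now? thought he."
    print(count_tokens(chunk))                          # a handful of tokens
    print(count_tokens(chunk) <= MAX_TOKENS_PER_CHUNK)  # True: fits in one input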

    So, what happens if our documents add up to more than 300,000 tokens? As you may have guessed, the answer is that we make several consecutive or parallel requests of 300,000 tokens or fewer. Many Python libraries do this automatically behind the scenes. For example, LangChain’s OpenAIEmbeddings, which I use in my previous post, automatically batches the documents we provide into batches of under 300,000 tokens, provided the documents are already supplied in chunks.
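    To get an intuition for what that batching step does, here is a rough, hypothetical sketch of the grouping logic such a library might run behind the scenes (an illustration of the idea, not LangChain’s actual implementation, and it again assumes tiktoken is available): already-chunked texts are accumulated until adding one more would push the request over the 300,000-token budget.

    import tiktoken

    encoding = tiktoken.get_encoding("cl100k_base")
    MAX_TOKENS_PER_REQUEST = 300_000  # OpenAI's per-request token limit

    def batch_chunks(chunks: list[str]) -> list[list[str]]:
        """Group chunks so that each batch stays under the per-request token limit."""
        batches, current_batch, current_tokens = [], [], 0
        for chunk in chunks:
            n_tokens = len(encoding.encode(chunk))
            if current_batch and current_tokens + n_tokens > MAX_TOKENS_PER_REQUEST:
                batches.append(current_batch)
                current_batch, current_tokens = [], 0
            current_batch.append(chunk)
            current_tokens += n_tokens
        if current_batch:
            batches.append(current_batch)
        return batches

    Each resulting batch could then be sent as a separate embeddings request. Note that this only works if every individual chunk already respects the per-chunk limit, which is exactly why the chunking step matters.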

    Reading larger files into the RAG pipeline

    Let’s take a look at how all of this plays out in a simple Python example, using the War and Peace text as the document to retrieve from in the RAG. The data I’m using, Leo Tolstoy’s War and Peace, is in the public domain and can be found on Project Gutenberg.

    So, first of all, let’s try to read the War and Peace text without any setup for chunking. For this tutorial, you’ll need to have the langchain, openai, and faiss Python libraries installed. We can easily install the required packages as follows:

    pip install openai langchain langchain-community langchain-openai faiss-cpu

    After making sure the required libraries are installed, our code for a very simple RAG looks like this, and it works fine for a small and simple .txt file in the text_folder.

    import os

    from langchain_openai import ChatOpenAI, OpenAIEmbeddings
    from langchain_community.document_loaders import TextLoader
    from langchain_community.vectorstores import FAISS

    # OpenAI API key
    api_key = "your key"

    # initialize LLM
    llm = ChatOpenAI(openai_api_key=api_key, model="gpt-4o-mini", temperature=0.3)

    # load the documents to be used for RAG
    text_folder = "RAG files"

    documents = []
    for filename in os.listdir(text_folder):
        if filename.lower().endswith(".txt"):
            file_path = os.path.join(text_folder, filename)
            loader = TextLoader(file_path)
            documents.extend(loader.load())

    # generate embeddings
    embeddings = OpenAIEmbeddings(openai_api_key=api_key)

    # create vector database with FAISS
    vector_store = FAISS.from_documents(documents, embeddings)
    retriever = vector_store.as_retriever()


    def main():
        print("Welcome to the RAG Assistant. Type 'exit' to quit.\n")

        while True:
            user_input = input("You: ").strip()
            if user_input.lower() == "exit":
                print("Exiting…")
                break

            # get relevant documents
            relevant_docs = retriever.invoke(user_input)
            retrieved_context = "\n\n".join([doc.page_content for doc in relevant_docs])

            # system prompt
            system_prompt = (
                "You are a helpful assistant. "
                "Use ONLY the following knowledge base context to answer the user. "
                "If the answer is not in the context, say you don't know.\n\n"
                f"Context:\n{retrieved_context}"
            )

            # messages for the LLM
            messages = [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_input}
            ]

            # generate response
            response = llm.invoke(messages)
            assistant_message = response.content.strip()
            print(f"\nAssistant: {assistant_message}\n")


    if __name__ == "__main__":
        main()

    However, if I add the War and Peace .txt file to the same folder and try to directly create an embedding for it, I get the following error:

    Image by author

    ughh 🙃

    So what happens here? LangChain’s OpenAIEmbeddings cannot split the text into separate requests of fewer than 300,000 tokens each, because we didn’t provide it in chunks. It does not split the single chunk it receives, which is 777,181 tokens, leading to a request that exceeds the 300,000-token maximum per request.
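    If you want to reproduce that number yourself, you can count the tokens of the raw text locally; the small sketch below does this with tiktoken, assuming the Project Gutenberg file has been saved under a hypothetical path war_and_peace.txt inside the text folder.

    import tiktoken

    encoding = tiktoken.get_encoding("cl100k_base")

    # hypothetical file name; use whatever path your copy of the text is stored under
    with open("RAG files/war_and_peace.txt", encoding="utf-8") as f:
        total_tokens = len(encoding.encode(f.read()))

    print(total_tokens)  # far above the 300,000-token per-request limit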

    • • •

    Now, let’s set up the chunking process to create multiple embeddings from this large file. To do this, I will be using the text_splitter module provided by LangChain, and more specifically, the RecursiveCharacterTextSplitter. In RecursiveCharacterTextSplitter, the chunk size and chunk overlap parameters are specified as a number of characters, but other splitters, like TokenTextSplitter, also allow these parameters to be set as a number of tokens.

    So, we can set up an instance of the text splitter as below:

    from langchain.text_splitter import RecursiveCharacterTextSplitter

    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

    … and then use it to split our initial document into chunks…

    from langchain_core.documents import Document

    split_docs = []
    for doc in documents:
        chunks = splitter.split_text(doc.page_content)
        for chunk in chunks:
            split_docs.append(Document(page_content=chunk))

    …and then use these chunks to create the embeddings…

    documents = split_docs

    # create embeddings + FAISS index
    embeddings = OpenAIEmbeddings(openai_api_key=api_key)
    vector_store = FAISS.from_documents(documents, embeddings)
    retriever = vector_store.as_retriever()
    
    .....

    … and voila 🌟

    Now our code can effectively parse the provided document, even if it is quite a bit larger, and provide relevant responses.

    Image by author

    On my mind

    Choosing a chunking approach that fits the size and complexity of the documents we want to feed into our RAG pipeline is crucial for the quality of the responses we’ll be receiving. Of course, there are several other parameters and different chunking methodologies to keep in mind. Nonetheless, understanding and fine-tuning chunk size and overlap is the foundation for building RAG pipelines that produce meaningful results.

    • • •

    Loved this post? Got an interesting data or AI project?

    Let’s be friends! Join me on

    📰Substack 📝Medium 💼LinkedIn ☕Buy me a coffee!

    • • •


