    Agentic RAG Applications: Company Knowledge Slack Agents

By Team_AIBS News · May 31, 2025 · 19 min read


I assumed most companies would have built or implemented their own RAG agents by now.

An AI knowledge agent can dig through internal documentation — websites, PDFs, random docs — and answer employees in Slack (or Teams/Discord) within a few seconds. These bots should significantly reduce the time employees spend sifting through information.

I’ve seen a few of these at bigger tech companies, like AskHR from IBM, but they aren’t all that mainstream yet.

If you’re keen to understand how they’re built and how many resources it takes to build a simple one, this is an article for you.

Parts this article will go through | Image by author

I’ll go through the tools, techniques, and architecture involved, while also looking at the economics of building something like this. I’ll also include a section on what you’ll end up focusing on the most.

Things you’ll spend time on | Image by author

There is also a demo at the end of what this can look like in Slack.

If you’re already familiar with RAG, feel free to skip the next section — it’s just a bit of repetitive stuff around agents and RAG.

    What’s RAG and Agentic RAG?

Most of you reading this will know what Retrieval-Augmented Generation (RAG) is, but if you’re new to it, it’s a way to fetch information that gets fed into the large language model (LLM) before it answers the user’s question.

This allows us to provide relevant information from various documents to the bot in real time so it can answer the user correctly.

Simple RAG | Image by author

This retrieval system does more than simple keyword search, as it finds similar matches rather than just exact ones. For example, if someone asks about fonts, a similarity search might return documents on typography.
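As a minimal sketch of that idea (assuming OpenAI’s embeddings API; the model named here is one public option), you can compare a query and a document with cosine similarity:

from openai import OpenAI
import numpy as np

client = OpenAI()

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

query = embed("Which fonts do we use?")
doc = embed("Our typography guidelines: headings use Inter, body uses Georgia.")

# Cosine similarity: higher means semantically closer, even without shared words
score = float(query @ doc / (np.linalg.norm(query) * np.linalg.norm(doc)))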

Many would say that RAG is a fairly simple concept to grasp, but how you store information, how you fetch it, and what kind of embedding models you use still matter a lot.

If you’re keen to learn more about embeddings and retrieval, I’ve written about this here.

Today, people have gone further and primarily work with agent systems.

In agent systems, the LLM can decide where and how it should fetch information, rather than just having content dumped into its context before generating a response.

Agent system with RAG tools — the yellow dot is the agent and the grey dots are the tools | Image by author

It’s important to remember that just because more advanced tools exist doesn’t mean you should always use them. You want to keep the system intuitive and also keep API calls to a minimum.

With agent systems the number of API calls increases, since the system needs to call at least one tool and then make another call to generate a response.

That said, I really like the user experience of the bot “going somewhere” — to a tool — to look something up. Seeing that flow in Slack helps the user understand what’s happening.

But going with an agent or using a full framework isn’t necessarily the better choice. I’ll elaborate on this as we go.

    Technical Stack

There are a ton of options for agent frameworks, vector databases, and deployment, so I’ll go through a few.

For deployment, since we’re working with Slack webhooks, we’re dealing with an event-driven architecture where the code only runs when there’s a question in Slack.
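As a rough sketch of that entry point (assuming FastAPI and Slack’s Events API; schedule_agent_run is a hypothetical helper), the code only wakes up when Slack posts an event:

from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/slack/events")
async def slack_events(request: Request):
    payload = await request.json()
    # Slack sends a one-time challenge when you first register the endpoint
    if payload.get("type") == "url_verification":
        return {"challenge": payload["challenge"]}
    event = payload.get("event", {})
    if event.get("type") == "app_mention":
        # Hand off to the agent in the background; Slack expects a fast 200 OK
        schedule_agent_run(event)  # hypothetical helper
    return {"ok": True}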

To keep costs to a minimum, we can use serverless functions. The choice is either going with AWS Lambda or picking a newer vendor.

Lambda vs Modal comparison, find the full table here | Image by author

Platforms like Modal are technically built to serve LLM models, but they work well for long-running ETL processes and for LLM apps in general.

Modal hasn’t been battle-tested as much, and you’ll notice that in terms of latency, but it’s very easy to use and offers very cheap CPU pricing.

I should note though that when setting this up with Modal on the free tier, I’ve had a few 500 errors, but that can be expected.

As for how to pick the agent framework, this is entirely optional. I did a comparison piece a few weeks ago on open-source agentic frameworks that you can find here, and the one I left out was LlamaIndex.

So I decided to give it a try here.

The last thing you need to pick is a vector database, or a database that supports vector search. This is where we store the embeddings and other metadata, so we can perform similarity search when a user’s query comes in.

There are a lot of options out there, but I think the ones with the most potential are Weaviate, Milvus, pgvector, Redis, and Qdrant.

Vector DBs comparison, find the full table here | Image by author

Both Qdrant and Milvus have pretty generous free tiers for their cloud options. Qdrant, I know, allows us to store both dense and sparse vectors. LlamaIndex, along with most agent frameworks, supports many different vector databases, so any of them can work.

I’ll try Milvus more in the future to compare performance and latency, but for now, Qdrant works well.

Redis is a solid pick too, or really any vector extension of your existing database.

Cost & time to build

In terms of time and cost, you have to account for engineering hours, cloud, embedding, and large language model (LLM) costs.

It doesn’t take that much time to boot up a framework and run something minimal. What takes time is connecting the content properly, prompting the system, parsing the outputs, and making sure it runs fast enough.

But if we turn to overhead costs, the cloud cost to run the agent system is minimal for just one bot for one company using serverless functions, as you saw in the table in the last section.

However, for the vector databases, it can get more expensive the more data you store.

Both Zilliz and Qdrant Cloud have a generous free tier for your first 1 to 5 GB of data, so unless you go beyond a few thousand chunks you may not pay anything.

Vector DBs comparison for costs, find the full table here | Image by author

You’ll start paying though once you go beyond the thousands mark, with Weaviate being the most expensive of the vendors above.

As for the embeddings, these are generally very cheap.

You can see a table below on using OpenAI’s text-embedding-3-small with chunks of different sizes if you embed 1 to 10 million texts.

Embedding costs per chunk examples — find the full table here | Image by author

By the time people start optimizing for embeddings and storage, they have usually moved beyond embedding millions of texts.

The one thing that matters the most, though, is which large language model (LLM) you use. You need to think about API prices, since an agent system will usually call an LLM two to four times per run.

Example prices for LLMs in agent systems, full table here | Image by author

For this system, I’m using GPT-4o-mini or Gemini Flash 2.0, which are the cheapest options.

So let’s say a company uses the bot a few hundred times per day and each run costs us 2–4 API calls; we’d end up somewhere under a dollar per day, or around $10–50 per month.

You can see that switching to a more expensive model would increase the monthly bill by 10x to 100x. Using ChatGPT is generally subsidized for free users, but when you build your own applications, you’ll be the one financing them.
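As a back-of-the-envelope sketch (the per-call price here is an assumption; plug in the current rates for your model):

runs_per_day = 300      # "a few hundred times per day"
calls_per_run = 3       # an agent run makes roughly 2-4 LLM calls
cost_per_call = 0.001   # assumed average $ per call for a cheap model

daily = runs_per_day * calls_per_run * cost_per_call
print(f"~${daily:.2f}/day, ~${daily * 30:.0f}/month")  # ~$0.90/day, ~$27/month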

There will be smarter and cheaper models in the future, so whatever you build now will likely improve over time. But start small, because costs add up, and for simple systems like this you don’t need the models to be exceptional.

The next section gets into how to build this system.

The architecture (processing documents)

The system has two parts. The first is how we split up documents — what we call chunking — and embed them. This first part is important, as it dictates how the agent answers later.

Splitting up documents into different chunks attached with metadata | Image by author

So, to make sure you’re preparing all the sources properly, you need to think carefully about how to chunk them.

If you look at the document above, you can see that we lose context if we split the document based on headings alone, but also if we split on character count, where the paragraphs attached to the first heading get cut apart for being too long.

Losing context in chunks | Image by author

You need to be smart about making sure each chunk has enough context (but not too much). You also need to make sure each chunk is attached to metadata so it’s easy to trace back to where it was found.

Setting metadata on the sources to trace back to where the chunks were found | Image by author

This is where you’ll spend the most time, and honestly, I think there should be better tools out there to do this intelligently.

I ended up using Docling for PDFs, building on it to attach elements based on headings and paragraph sizes. For web pages, I built a crawler that looked over page elements to decide whether to chunk based on anchor tags, headings, or general content.

Remember, if the bot is supposed to cite sources, each chunk needs to be attached to URLs, anchor tags, page numbers, block IDs, or permalinks, so the system can locate the information it used.
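To make this concrete, here’s a sketch of PDF chunking with traceable metadata, assuming Docling’s DocumentConverter API; the file name is a placeholder and the metadata fields mirror what the bot needs in order to cite sources later:

from docling.document_converter import DocumentConverter

converter = DocumentConverter()
result = converter.convert("internal_handbook.pdf")

chunks = []
for item, level in result.document.iterate_items():
    text = getattr(item, "text", "") or ""
    if not text.strip():
        continue
    chunks.append({
        "text": text,
        "metadata": {
            "source": "internal_handbook.pdf",
            # provenance lets the bot cite the exact page later
            "page": item.prov[0].page_no if getattr(item, "prov", None) else None,
            "heading_level": level,
        },
    })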

Since much of the content you’re working with is scattered and often low quality, I also decided to summarize texts using an LLM. These summaries were given separate labels with higher authority, which meant they were prioritized during retrieval.

Summarizing docs with higher authority | Image by author
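A sketch of that summarization step could look like the following, assuming OpenAI’s API and LlamaIndex’s TextNode; the "authority" label is just the convention used here, not a library feature:

from openai import OpenAI
from llama_index.core.schema import TextNode

client = OpenAI()

def summarize_with_authority(raw_text: str, source_url: str) -> TextNode:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Summarize this internal doc factually:\n\n{raw_text}"}],
    )
    summary = resp.choices[0].message.content
    # "authority": "high" marks the node for prioritization at retrieval time
    return TextNode(text=summary, metadata={"source": source_url, "authority": "high"})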

There is also the option to push the summaries into their own tools and keep the deep-dive information separate, letting the agent decide which one to use. But this can look strange to users, since it’s not intuitive behavior.

Still, I have to stress that if the quality of the source information is poor, it’s hard to make the system work well.

For example, if a user asks how an API request should be made and there are four different web pages giving different answers, the bot won’t know which one is most relevant.

To demo this, I had to do some manual review. I also had AI do deeper research around the company to help fill in gaps, and then I embedded that too.

In the future, I think I’ll build something better for document ingestion — probably with the help of a language model.

The architecture (the agent)

For the second part, where we connect to this data, we need to build a system where an agent can connect to different tools that contain different amounts of information from our vector database.

We keep to one agent only, to make it easy enough to control. This one agent can decide what information it needs based on the user’s question.

The agent system | Image by author

It’s good not to complicate things and build it out with too many agents, or you’ll run into issues, especially with these smaller models.

Although this may go against my own recommendation, I did set up a first LLM function that decides whether we need to run the agent at all.

First initial LLM call to decide on the larger agent | Image by author

This was mainly for the user experience, since it takes a few extra seconds to boot up the agent (even when starting it as a background task when the container starts).
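A minimal sketch of that gating call (the prompt and labels are illustrative, not the exact ones used here):

from openai import OpenAI

client = OpenAI()

def needs_agent(user_msg: str) -> bool:
    # One small, fast call: route to the agent only when a lookup is needed
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        max_tokens=3,
        messages=[
            {"role": "system",
             "content": "Reply AGENT if the message needs a lookup in company "
                        "docs, otherwise reply CHAT."},
            {"role": "user", "content": user_msg},
        ],
    )
    return "AGENT" in resp.choices[0].message.content.upper()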

As for how to build the agent itself, this is easy, as LlamaIndex does most of the work for us. For this, you can use the FunctionAgent, passing in different tools when setting it up.

from llama_index.core.agent.workflow import FunctionAgent

# Only runs if the first LLM call decided the agent is necessary
access_links_tool = get_access_links_tool()
public_docs_tool = get_public_docs_tool()
onboarding_tool = get_onboarding_information_tool()
general_info_tool = get_general_info_tool()

formatted_system_prompt = get_system_prompt(team_name)

agent = FunctionAgent(
    tools=[onboarding_tool, public_docs_tool, access_links_tool, general_info_tool],
    llm=global_llm,
    system_prompt=formatted_system_prompt,
)

The tools have access to different data from the vector database, and they are wrappers around the CitationQueryEngine. This engine helps cite the source nodes in the text. We can access the source nodes at the end of the agent run, which you can attach to the message, for instance in the footer.
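To illustrate, here’s a sketch of how one such tool could be built, using LlamaIndex’s CitationQueryEngine and QueryEngineTool; the index variable, name, and description are placeholders:

from llama_index.core.query_engine import CitationQueryEngine
from llama_index.core.tools import QueryEngineTool

def get_public_docs_tool() -> QueryEngineTool:
    query_engine = CitationQueryEngine.from_args(
        public_docs_index,        # a VectorStoreIndex built over the public docs
        similarity_top_k=8,
        citation_chunk_size=512,  # how finely sources are split for citing
    )
    return QueryEngineTool.from_defaults(
        query_engine=query_engine,
        name="public_docs",
        description="Search the public product documentation.",
    )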

To make sure the user experience is good, you can tap into the event stream to send updates back to Slack.

from llama_index.core.agent.workflow import ToolCall, ToolCallResult

handler = agent.run(user_msg=full_msg, ctx=ctx, memory=memory)

# Stream tool events back to Slack so the user sees what the agent is doing
async for event in handler.stream_events():
    if isinstance(event, ToolCall):
        display_tool_name = format_tool_name(event.tool_name)
        post_thinking(f"✅ Checking {display_tool_name}")
    if isinstance(event, ToolCallResult):
        post_thinking("✅ Done checking...")

final_output = await handler
final_text = str(final_output)
blocks = build_slack_blocks(final_text, mention)

post_to_slack(
    channel_id=channel_id,
    blocks=blocks,
    timestamp=initial_message_ts,
    client=client,
)

Make sure you format the messages and Slack blocks well, and refine the system prompt for the agent so it formats messages correctly based on the information the tools return.
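As an example, a minimal build_slack_blocks variant using Slack’s Block Kit could look like this (the signature here takes a list of source URLs, a slight simplification of the call above):

def build_slack_blocks(answer: str, sources: list[str]) -> list[dict]:
    # One section block for the answer, one context block for the sources
    blocks = [
        {"type": "section", "text": {"type": "mrkdwn", "text": answer}},
    ]
    if sources:
        links = " | ".join(f"<{url}|source>" for url in sources)
        blocks.append({
            "type": "context",
            "elements": [{"type": "mrkdwn", "text": f"Sources: {links}"}],
        })
    return blocks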

The architecture should be easy enough to understand, but there are still some retrieval techniques we should dig into.

Techniques you can try

A lot of people will emphasize certain techniques when building RAG systems, and they’re partially right. You should use hybrid search along with some kind of re-ranking.

How the query tools work under the hood — a bit simplified | Image by author

The first one I’ll mention is hybrid search when we perform retrieval.

I mentioned that we use semantic similarity to fetch chunks of information in the various tools, but you also need to account for cases where exact keyword search is required.

Just imagine a user asking for a specific certificate name, like CAT-00568. In that case, the system needs to find exact matches just as much as fuzzy ones.

With hybrid search, supported by both Qdrant and LlamaIndex, we use both dense and sparse vectors.

# When setting up the vector store (both for embedding and fetching)
from llama_index.vector_stores.qdrant import QdrantVectorStore

vector_store = QdrantVectorStore(
    client=client,
    aclient=async_client,
    collection_name="knowledge_bases",
    enable_hybrid=True,
    fastembed_sparse_model="Qdrant/bm25",
)

Sparse vectors are perfect for exact keywords but blind to synonyms, while dense vectors are great for “fuzzy” matches (“benefits policy” matches “employee perks”) but can miss literal strings like CAT-00568.

Once the results are fetched, it’s useful to apply deduplication and re-ranking to filter out irrelevant chunks before sending them to the LLM for citation and synthesis.

from llama_index.core.postprocessor import LLMRerank, SimilarityPostprocessor
from llama_index.core.query_engine import CitationQueryEngine
from llama_index.core.schema import MetadataMode
from llama_index.llms.openai import OpenAI

# Re-rank with a small model and filter out low-similarity chunks
reranker = LLMRerank(llm=OpenAI(model="gpt-3.5-turbo"), top_n=5)
dedup = SimilarityPostprocessor(similarity_cutoff=0.9)

engine = CitationQueryEngine(
    retriever=retriever,
    node_postprocessors=[dedup, reranker],
    metadata_mode=MetadataMode.ALL,
)

This part wouldn’t be necessary if your data were exceptionally clean, which is why it shouldn’t be your main focus. It adds overhead and another API call.

It’s also not necessary to use a large model for re-ranking, but you’ll want to do a bit of research on your own to figure out your options.

These techniques are easy to understand and quick to set up, so they aren’t where you’ll spend most of your time.

What you’ll actually spend time on

Most of the things you’ll spend time on aren’t so glamorous: prompting, reducing latency, and chunking documents correctly.

Before you start, you should look into the prompt templates different frameworks use to see how they prompt the models. You’ll spend quite a bit of time making sure the system prompt is well crafted for the LLM you choose.
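For illustration, a system prompt along those lines might look like this; the wording is mine, not the exact prompt used here:

SYSTEM_PROMPT = """You are {team_name}'s internal knowledge assistant in Slack.

- Answer only from the information the tools return; if nothing relevant
  comes back, say you don't know and point to where the user can ask.
- Keep answers short and Slack-friendly: bold key terms, use bullet lists.
- Always keep the citation markers the tools provide so sources can be
  linked in the footer.
"""

def get_system_prompt(team_name: str) -> str:
    return SYSTEM_PROMPT.format(team_name=team_name)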

The second thing you’ll spend most of your time on is making it fast. I’ve looked into internal tools from tech companies building AI knowledge agents and found they usually respond in about 8 to 13 seconds.

So, you want something in that range.

Using a serverless provider can be a problem here because of cold starts. LLM providers also introduce their own latency, which is hard to control.

One or two lagging API calls drag down the entire system | Image by author

That said, you can look into spinning up resources before they’re used, switching to lower-latency models, skipping frameworks to reduce overhead, and generally cutting the number of API calls per run.
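A sketch of the first of those ideas on Modal (an assumption about its current API: the keep-warm option is called min_containers in recent releases, keep_warm in older ones):

import modal

app = modal.App("slack-knowledge-agent")

# Keep one container warm so Slack requests skip the cold start
# (min_containers in current Modal; keep_warm in older versions)
@app.function(min_containers=1)
@modal.asgi_app()
def web():
    from fastapi import FastAPI

    api = FastAPI()
    # ...register the /slack/events route from earlier here...
    return api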

The last thing, which takes a huge amount of work and which I’ve mentioned before, is chunking documents.

If you had exceptionally clean data with clear headers and separations, this part would be easy. But more often, you’ll be dealing with poorly structured HTML, PDFs, raw text files, Notion boards, and Confluence notes — often scattered and formatted inconsistently.

The challenge is figuring out how to programmatically ingest these documents so the system gets the full information it needs to answer a question.

Just working with PDFs, for example, you’ll need to extract tables and images properly, separate sections by page numbers or layout elements, and trace each source back to the correct page.

You want enough context, but not chunks so large that it becomes harder to retrieve the right information later.

This kind of work doesn’t generalize well. You can’t just push documents in and expect the system to understand them — you have to think it through before you build it.

How to build it out further

At this point, the system works well for what it’s supposed to do, but there are a few pieces I should cover (or people will think I’m simplifying too much). You’ll want to implement caching, a way to update the data, and long-term memory.

Caching isn’t essential, but in larger systems you can at least cache the query’s embedding to speed up retrieval, and store recent source results for follow-up questions. I don’t think LlamaIndex helps much here, but you should be able to intercept the QueryTool on your own.
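A minimal in-memory sketch of both caches (Redis with a TTL would be the natural production swap):

import hashlib, time

_embedding_cache: dict[str, list[float]] = {}
_result_cache: dict[str, tuple[float, list]] = {}
RESULT_TTL_S = 300  # keep sources for 5 minutes for follow-up questions

def cached_embedding(query: str, embed_fn):
    # Normalize so trivially different phrasings share a cache entry
    key = hashlib.sha256(query.lower().strip().encode()).hexdigest()
    if key not in _embedding_cache:
        _embedding_cache[key] = embed_fn(query)
    return _embedding_cache[key], key

def remember_sources(key: str, sources: list) -> None:
    _result_cache[key] = (time.time(), sources)

def recent_sources(key: str):
    hit = _result_cache.get(key)
    if hit and time.time() - hit[0] < RESULT_TTL_S:
        return hit[1]
    return None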

You’ll also want a way to continuously update information in the vector databases. This is the biggest headache — it’s hard to know when something has changed, so you need some kind of change-detection method along with an ID for each chunk.

You could just use periodic re-embedding strategies where you replace a chunk with different meta tags altogether (this is my preferred approach because I’m lazy).
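A sketch of that change-detection idea, using a stable chunk ID plus a content hash:

import hashlib

def chunk_id(source: str, anchor: str) -> str:
    # Stable ID: the same source + anchor always maps to the same vector DB point
    return hashlib.sha256(f"{source}#{anchor}".encode()).hexdigest()

def needs_reembedding(chunk_text: str, stored_hash: str | None) -> bool:
    new_hash = hashlib.sha256(chunk_text.encode()).hexdigest()
    return new_hash != stored_hash  # if True, upsert under the same chunk_id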

The last thing I want to mention is long-term memory for the agent, so it can understand conversations you’ve had in the past. For that, I’ve implemented some state by fetching history from the Slack API. This lets the agent see around 3–6 previous messages when responding.

We don’t want to push in too much history, since the context window grows — which not only increases cost but also tends to confuse the agent.
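A sketch of that history fetch, assuming the official slack_sdk client:

from slack_sdk import WebClient

def fetch_thread_history(client: WebClient, channel_id: str,
                         thread_ts: str, max_messages: int = 6) -> list[str]:
    # Pull only the last few thread messages to use as short-term memory
    resp = client.conversations_replies(
        channel=channel_id, ts=thread_ts, limit=max_messages
    )
    return [m.get("text", "") for m in resp["messages"][-max_messages:]]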

That said, there are better ways to handle long-term memory using external tools. I’m keen to write more on that in the future.

Learnings etc.

After doing this for a while now, I have a few notes to share about working with frameworks and keeping things simple (which I personally don’t always follow).

You learn a lot from using a framework, especially how to prompt well and how to structure the code. But at some point, working around the framework adds overhead.

For instance, in this system, I’m bypassing the framework a bit by adding an initial API call that decides whether to move on to the agent and responds to the user quickly.

If I had built this without a framework, I think I could have handled that kind of logic better, where the first model decides what tool to call directly.

LLM API calls in the system | Image by author

I haven’t tried this, but I’m assuming it would be cleaner.

Also, LlamaIndex optimizes the user query, as it should, before retrieval.

But sometimes it reduces the query too much, and I need to go in and fix it. The citation synthesizer doesn’t have access to the conversation history, so with that overly simplified query, it doesn’t always answer well.

The abstractions can sometimes cause the system to lose context | Image by author

With a framework, it’s also hard to trace where latency is coming from in the workflow, since you can’t always see everything, even with observability tools.

Most developers recommend using frameworks for quick prototyping or bootstrapping, then rewriting the core logic with direct calls in production.

It’s not because the frameworks aren’t useful, but because at some point it’s better to write something you fully understand that does only what you need.

The last recommendation is to keep things as simple as possible and minimize LLM calls (which I’m not even fully doing myself here).

But if all you need is RAG and not an agent, stick with that.

You can create a simple LLM call that sets the right parameters in the vector DB. From the user’s point of view, it’ll still look like the system is “looking into the database” and returning relevant information.
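A sketch of that simpler no-agent path (the collection names are illustrative): one cheap LLM call picks the retrieval parameters as JSON, then you run a plain vector query and one final generation call:

import json
from openai import OpenAI

client = OpenAI()

def route_query(user_msg: str) -> dict:
    # One cheap call that picks the retrieval parameters
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": "Return JSON with keys 'collection' (one of: "
                       "onboarding, public_docs, access_links) and "
                       f"'search_text' for this question:\n{user_msg}",
        }],
    )
    return json.loads(resp.choices[0].message.content)

params = route_query("How do I get VPN access?")
# e.g. {"collection": "access_links", "search_text": "VPN access"}
# then run a vector query on that collection and one final answer call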

If you’re going down the same path, I hope this was useful.

There is a bit more to it though. You’ll want to implement some kind of evaluation, guardrails, and monitoring (I’ve used Phoenix here).

Once finished though, the result will look like this:

Example of a company agent looking through PDFs and website docs in Slack | Image by author

If you want to follow my writing, you can find me here, on my website, or on LinkedIn.

I’ll try to dive deeper into agentic memory, evals, and prompting over the summer.

    ❤



    Source link
