Evaluation-Driven Development for agentic applications using PydanticAI

by Lak Lakshmanan | December 21, 2024


An open-source, model-agnostic agentic framework that supports dependency injection

    Towards Data Science

Ideally, you'd evaluate agentic applications while you are developing them, instead of treating evaluation as an afterthought. For this to work, though, you need to be able to mock both internal and external dependencies of the agent you are developing. I am extremely excited by PydanticAI because it supports dependency injection from the ground up. It is the first framework that has allowed me to build agentic applications in an evaluation-driven manner.

Image of Krakow Cloth Hall, generated using Google Imagen by the author. This building was built in stages over the centuries, with improvements based on where the current building was falling short. Evaluation-driven development, in other words.

In this article, I'll talk about the core challenges and demonstrate developing a simple agent in an evaluation-driven way using PydanticAI.

Challenges when developing GenAI applications

Like many GenAI developers, I've been waiting for an agentic framework that supports the full development lifecycle. Each time a new framework comes along, I try it out, hoping that this will be the One — see, for example, my articles about DSPy, Langchain, LangGraph, and Autogen.

I find that there are core challenges that a software developer faces when building an LLM-based application. These challenges are typically not blockers if you are building a simple PoC with GenAI, but they will come to bite you if you are building LLM-powered applications in production.

    What challenges?

(1) Non-determinism: Unlike most software APIs, calls to an LLM with the exact same input may return different outputs each time. How do you even begin to test such an application?

(2) LLM limitations: Foundational models like GPT-4, Claude, and Gemini are limited by their training data (e.g., no access to enterprise-confidential information), capability (e.g., you cannot invoke enterprise APIs and databases), and cannot plan or reason.

(3) LLM flexibility: Even if you decide to stick with LLMs from a single provider such as Anthropic, you may find that you need a different LLM for each step — perhaps one step of your workflow needs a low-latency small language model (Haiku), another requires great code-generation capability (Sonnet), and a third step requires excellent contextual awareness (Opus).

(4) Rate of change: GenAI technologies are moving fast. Recently, many of the improvements have come about in foundational model capabilities. No longer are the foundational models just generating text based on user prompts. They are now multimodal, can generate structured outputs, and can have memory. Yet, if you try to build in an LLM-agnostic way, you often lose the low-level API access that can turn on these features.

To help address the first problem, of non-determinism, your software testing needs to incorporate an evaluation framework. You will never have software that works 100%; instead, you have to be able to design around software that is x% correct, build guardrails and human oversight to catch the exceptions, and monitor the system in real time to catch regressions. Key to this capability is evaluation-driven development (my term), an extension of test-driven development in software.
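To make this concrete, here is a minimal sketch of an evaluation gate in the spirit of a unit test. The run_agent and evaluate helpers and the 0.8 threshold are illustrative assumptions, not part of any particular library:

def eval_gate(run_agent, evaluate, eval_set, threshold=0.8):
    # Run the agent over a fixed evaluation set and score each answer
    # against its reference; fail loudly if the average regresses.
    scores = [evaluate(run_agent(question), reference)
              for question, reference in eval_set]
    avg_score = sum(scores) / len(scores)
    assert avg_score >= threshold, (
        f"Regression: average score {avg_score:.2f} fell below {threshold}")
    return avg_score

Because the gate asserts on an aggregate score rather than on exact outputs, it tolerates the non-determinism of individual LLM calls while still catching regressions.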

Evaluation-driven development. Sketch by the author.

The current workaround for all the LLM limitations in Challenge #2 is to use agentic architectures like RAG, provide the LLM access to tools, and employ patterns like Reflection, ReACT, and Chain of Thought. So, your framework will need the ability to orchestrate agents. However, evaluating agents that can call external tools is hard. You need to be able to inject proxies for those external dependencies so that you can test them individually, and evaluate as you build.

To handle Challenge #3, an agent needs to be able to invoke the capabilities of different kinds of foundational models. Your agent framework needs to be LLM-agnostic at the granularity of a single step of an agentic workflow. To address the rate-of-change consideration (Challenge #4), you want to retain the ability to make low-level access to the foundational model APIs and to strip out sections of your codebase that are no longer necessary.

Is there a framework that meets all these criteria? For the longest time, the answer was no. The closest I could get was to use Langchain, pytest's dependency injection, and deepeval with something like this (full example is here):

from unittest.mock import patch, Mock
from deepeval.metrics import GEval
from deepeval.test_case import LLMTestCase, LLMTestCaseParams

llm_as_judge = GEval(
    name="Correctness",
    criteria="Determine whether the actual output is factually correct based on the expected output.",
    evaluation_params=[LLMTestCaseParams.INPUT, LLMTestCaseParams.ACTUAL_OUTPUT],
    model='gpt-3.5-turbo'
)

@patch('lg_weather_agent.retrieve_weather_data', Mock(return_value=chicago_weather))
def eval_query_rain_today():
    input_query = "Is it raining in Chicago?"
    expected_output = "No, it is not raining in Chicago right now."
    result = lg_weather_agent.run_query(app, input_query)
    actual_output = result[-1]

    print(f"Actual: {actual_output} Expected: {expected_output}")
    test_case = LLMTestCase(
        input=input_query,
        actual_output=actual_output,
        expected_output=expected_output
    )

    llm_as_judge.measure(test_case)
    print(llm_as_judge.score)

Basically, I'd construct a Mock object (chicago_weather in the above example) for every LLM call and patch the call to the LLM (retrieve_weather_data in the above example) with the hardcoded object whenever I needed to mock that part of the agentic workflow. The dependency injection is all over the place, you need a bunch of hardcoded objects, and the calling workflow becomes extremely hard to follow. Note that if you don't have dependency injection, there is no way to test a function like this: obviously, the external service will return the current weather, and there is no way to determine what the correct answer is to a question such as whether or not it is raining right now.

So … is there an agent framework that supports dependency injection, is Pythonic, provides low-level access to LLMs, is model-agnostic, supports building it one eval at a time, and is easy to use and follow?

Almost. PydanticAI meets the first three requirements; the fourth (low-level LLM access) isn't possible yet, but the design doesn't preclude it. In the rest of this article, I'll show you how to use it to develop an agentic application in an evaluation-driven way.

1. Your first PydanticAI application

Let's start out by building a simple PydanticAI application. It will use an LLM to answer questions about mountains:

agent = llm_utils.agent()
question = "What is the tallest mountain in British Columbia?"
print(">> ", question)
answer = agent.run_sync(question)
print(answer.data)

In the code above, I'm creating an agent (I'll show you how, shortly) and then calling run_sync, passing in the user prompt and getting back the LLM's response. run_sync is a way to have the agent invoke the LLM and wait for the response. Other ways are to run the query asynchronously, or to stream its response (see the sketch after the sample output below). (Full code is here if you want to follow along.)

Run the code above, and you will get something like:

>>  What is the tallest mountain in British Columbia?
The tallest mountain in British Columbia is **Mount Robson**, at 3,954 metres (12,972 feet).
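If you do want the asynchronous or streaming behavior instead, a sketch along the following lines should work — this assumes PydanticAI's run and run_stream methods behave as documented, so verify against the current docs:

import asyncio

async def main():
    # Asynchronous invocation: same agent, awaited instead of blocking.
    answer = await agent.run(question)
    print(answer.data)

    # Streaming invocation: print text deltas as they arrive.
    async with agent.run_stream(question) as response:
        async for chunk in response.stream_text(delta=True):
            print(chunk, end="")

asyncio.run(main())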

To create the agent, create a model and then tell the agent to use that model for all its steps:

import os

import pydantic_ai
from pydantic_ai.models.gemini import GeminiModel

def default_model() -> pydantic_ai.models.Model:
    model = GeminiModel('gemini-1.5-flash', api_key=os.getenv('GOOGLE_API_KEY'))
    return model

def agent() -> pydantic_ai.Agent:
    return pydantic_ai.Agent(default_model())

The idea behind default_model() is to use a relatively inexpensive but fast model like Gemini Flash as the default. You can then change the model used in specific steps as necessary by passing in a different model to run_sync().
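A per-step override might look like the following sketch — here I am assuming run_sync accepts a model argument as just described, and the choice of gemini-1.5-pro is purely illustrative:

from pydantic_ai.models.gemini import GeminiModel

# Keep the inexpensive Flash model as the default, but hand a more
# capable model to the one step that needs it.
pro_model = GeminiModel('gemini-1.5-pro', api_key=os.getenv('GOOGLE_API_KEY'))
answer = agent.run_sync(question, model=pro_model)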

PydanticAI's model support looks sparse, but the most commonly used models — the current frontier ones from OpenAI, Groq, Gemini, Mistral, Ollama, and Anthropic — are all supported. Through Ollama, you can get access to Llama3, Starcoder2, Gemma2, and Phi3. Nothing significant seems to be missing.

    2. Pydantic with structured outputs

The example in the previous section returned free-form text. In most agentic workflows, you'll want the LLM to return structured data so that you can use it directly in programs.

Considering that this API is from Pydantic, returning structured output is quite straightforward. Just define the desired output as a dataclass (full code is here):

from dataclasses import dataclass

@dataclass
class Mountain:
    name: str
    location: str
    height: float

When you create the Agent, tell it the desired output type:

agent = Agent(llm_utils.default_model(),
              result_type=Mountain,
              system_prompt=(
                  "You are a mountaineering guide, who provides accurate information to the general public.",
                  "Provide all distances and heights in meters",
                  "Provide location as distance and direction from nearest big city",
              ))

Note also the use of the system prompt to specify units, etc.

Running this on three questions, we get:

>>  Tell me about the tallest mountain in British Columbia?
Mountain(name='Mount Robson', location='130km North of Vancouver', height=3999.0)
>> Is Mt. Hood easy to climb?
Mountain(name='Mt. Hood', location='60 km east of Portland', height=3429.0)
>> What's the tallest peak in the Enchantments?
Mountain(name='Mount Stuart', location='100 km east of Seattle', height=3000.0)

But how good is this agent? Is the height of Mt. Robson correct? Is Mt. Stuart really the tallest peak in the Enchantments? All of this information could have been hallucinated!

There is no way for you to know how good an agentic application is unless you evaluate the agent against reference answers. You cannot just "eyeball it". Unfortunately, this is where a lot of LLM frameworks fall short — they make it really hard to evaluate as you develop the LLM application.

3. Evaluate against reference answers

It is when you start to evaluate against reference answers that PydanticAI starts to show its strengths. Everything is quite Pythonic, so you can build custom evaluation metrics quite simply.

For example, this is how we will evaluate a returned Mountain object on three criteria and create a composite score (full code is here):

from typing import Tuple

def evaluate(answer: Mountain, reference_answer: Mountain) -> Tuple[float, str]:
    score = 0
    reason = []
    if reference_answer.name in answer.name:
        score += 0.5
        reason.append("Correct mountain identified")
        if reference_answer.location in answer.location:
            score += 0.25
            reason.append("Correct city identified")
        height_error = abs(reference_answer.height - answer.height)
        if height_error < 10:
            score += 0.25 * (10 - height_error) / 10.0
        reason.append(f"Height was {height_error}m off. Correct answer is {reference_answer.height}")
    else:
        reason.append(f"Wrong mountain identified. Correct answer is {reference_answer.name}")

    return score, ';'.join(reason)

Now, we can run this on a dataset of questions and reference answers:

questions = [
    "Tell me about the tallest mountain in British Columbia?",
    "Is Mt. Hood easy to climb?",
    "What's the tallest peak in the Enchantments?"
]

reference_answers = [
    Mountain("Robson", "Vancouver", 3954),
    Mountain("Hood", "Portland", 3429),
    Mountain("Dragontail", "Seattle", 2690)
]

total_score = 0
for l_question, l_reference_answer in zip(questions, reference_answers):
    print(">> ", l_question)
    l_answer = agent.run_sync(l_question)
    print(l_answer.data)
    l_score, l_reason = evaluate(l_answer.data, l_reference_answer)
    print(l_score, ":", l_reason)
    total_score += l_score

avg_score = total_score / len(questions)
print("Average score:", avg_score)

Running this, we get:

>>  Tell me about the tallest mountain in British Columbia?
Mountain(name='Mount Robson', location='130 km North-East of Vancouver', height=3999.0)
0.75 : Correct mountain identified;Correct city identified;Height was 45.0m off. Correct answer is 3954
>> Is Mt. Hood easy to climb?
Mountain(name='Mt. Hood', location='60 km east of Portland, OR', height=3429.0)
1.0 : Correct mountain identified;Correct city identified;Height was 0.0m off. Correct answer is 3429
>> What's the tallest peak in the Enchantments?
Mountain(name='Dragontail Peak', location='14 km east of Leavenworth, WA', height=3008.0)
0.5 : Correct mountain identified;Height was 318.0m off. Correct answer is 2690
Average score: 0.75

Mt. Robson's height is 45m off; Dragontail Peak's height was 318m off. How would you fix this?

That's right. You'd use a RAG architecture, or arm the agent with a tool that provides the correct height information. Let's use the latter approach and see how to do it with PydanticAI.

Note how evaluation-driven development shows us the path forward to improve our agentic application.

4a. Using a tool

PydanticAI supports several ways to provide tools to an agent. Here, I annotate a function to be called whenever the agent needs the height of a mountain (full code here):

agent = Agent(llm_utils.default_model(),
              result_type=Mountain,
              system_prompt=(
                  "You are a mountaineering guide, who provides accurate information to the general public.",
                  "Use the provided tool to look up the elevation of many mountains.",
                  "Provide all distances and heights in meters",
                  "Provide location as distance and direction from nearest big city",
              ))

@agent.tool
def get_height_of_mountain(ctx: RunContext[Tools], mountain_name: str) -> str:
    return ctx.deps.elev_wiki.snippet(mountain_name)

The function, though, does something strange. It pulls an object called elev_wiki out of the run-time context of the agent. This object is passed in when we call run_sync:

class Tools:
    elev_wiki: wikipedia_tool.WikipediaContent
    def __init__(self):
        self.elev_wiki = OnlineWikipediaContent("List of mountains by elevation")

tools = Tools()  # Tools or FakeTools

l_answer = agent.run_sync(l_question, deps=tools)  # note how we are able to inject

Because the runtime context can be passed into every agent invocation or tool call, we can use it to do dependency injection in PydanticAI. You'll see this in the next section.
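To see how the pieces fit together, here is a sketch of the full wiring — I am assuming PydanticAI's deps_type parameter, which lets the framework type-check the RunContext against what you inject:

# Declare the dependency type on the agent; each tool then receives a
# typed RunContext carrying whatever object was injected at run time.
agent = Agent(llm_utils.default_model(),
              result_type=Mountain,
              deps_type=Tools)

@agent.tool
def get_height_of_mountain(ctx: RunContext[Tools], mountain_name: str) -> str:
    # ctx.deps is the Tools (or FakeTools) instance passed to run_sync.
    return ctx.deps.elev_wiki.snippet(mountain_name)

answer = agent.run_sync("How tall is Mt. Hood?", deps=Tools())

Swapping Tools() for FakeTools() at that one call site is the entire dependency-injection story; nothing else in the agent changes.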

The wiki itself just queries Wikipedia online (code here), extracts the contents of the page, and passes the appropriate mountain information to the agent:

import wikipedia

class OnlineWikipediaContent(WikipediaContent):
    def __init__(self, topic: str):
        print(f"Will query online Wikipedia for information on {topic}")
        self.page = wikipedia.page(topic)

    def url(self) -> str:
        return self.page.url

    def html(self) -> str:
        return self.page.html()

Indeed, when we run it, we get correct heights now:

Will query online Wikipedia for information on List of mountains by elevation
>> Tell me about the tallest mountain in British Columbia?
Mountain(name='Mount Robson', location='100 km west of Jasper', height=3954.0)
0.75 : Correct mountain identified;Height was 0.0m off. Correct answer is 3954
>> Is Mt. Hood easy to climb?
Mountain(name='Mt. Hood', location='50 km ESE of Portland, OR', height=3429.0)
1.0 : Correct mountain identified;Correct city identified;Height was 0.0m off. Correct answer is 3429
>> What's the tallest peak in the Enchantments?
Mountain(name='Mount Stuart', location='Cascades, Washington, US', height=2869.0)
0 : Wrong mountain identified. Correct answer is Dragontail
Average score: 0.58

    4b. Dependency injecting a mock service

Waiting for the API call to Wikipedia each time during development or testing is a bad idea. Instead, we will want to mock the Wikipedia response so that we can develop quickly and be confident of the result we are going to get.

Doing that is very simple. We create a Fake counterpart to the Wikipedia service:

class FakeWikipediaContent(WikipediaContent):
    def __init__(self, topic: str):
        if topic == "List of mountains by elevation":
            print(f"Will use cached Wikipedia information on {topic}")
            self.url_ = "https://en.wikipedia.org/wiki/List_of_mountains_by_elevation"
            with open("mountains.html", "rb") as ifp:
                self.html_ = ifp.read().decode("utf-8")

    def url(self) -> str:
        return self.url_

    def html(self) -> str:
        return self.html_

Then, inject this fake object into the runtime context of the agent during development:

class FakeTools:
    elev_wiki: wikipedia_tool.WikipediaContent
    def __init__(self):
        self.elev_wiki = FakeWikipediaContent("List of mountains by elevation")

tools = FakeTools()  # Tools or FakeTools

l_answer = agent.run_sync(l_question, deps=tools)  # note how we are able to inject

This time when we run, the evaluation uses the cached Wikipedia content:

Will use cached Wikipedia information on List of mountains by elevation
>> Tell me about the tallest mountain in British Columbia?
Mountain(name='Mount Robson', location='100 km west of Jasper', height=3954.0)
0.75 : Correct mountain identified;Height was 0.0m off. Correct answer is 3954
>> Is Mt. Hood easy to climb?
Mountain(name='Mt. Hood', location='50 km ESE of Portland, OR', height=3429.0)
1.0 : Correct mountain identified;Correct city identified;Height was 0.0m off. Correct answer is 3429
>> What's the tallest peak in the Enchantments?
Mountain(name='Mount Stuart', location='Cascades, Washington, US', height=2869.0)
0 : Wrong mountain identified. Correct answer is Dragontail
Average score: 0.58

Look carefully at the above output — the errors are different from the zero-shot example. In Section #2, the LLM picked Vancouver as the closest city to Mt. Robson and Dragontail as the tallest peak in the Enchantments. Those answers happened to be correct. Now, it picks Jasper and Mt. Stuart. We need to do more work to fix these errors — but evaluation-driven development at least gives us a direction of travel.

Current Limitations

PydanticAI is very new. There are a couple of places where it could be improved:

• There is no low-level access to the model itself. For example, different foundational models support context caching, prompt caching, etc. The model abstraction in PydanticAI doesn't provide a way to set these on the model. Ideally, we can figure out a kwargs way of doing such settings (see the hypothetical sketch after this list).
• The need to create two versions of agent dependencies, one real and one fake, is quite common. It would be nice if we were able to annotate a tool, or were given a simple way to switch between the two types of services across the board.
• During development, you don't need logging as much. But when you go to run the agent, you will usually want to log the prompts and responses. Sometimes, you will want to log the intermediate responses. The way to do this seems to be a commercial product called Logfire. An OSS, cloud-agnostic logging framework that integrates with the PydanticAI library would be ideal.
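For the first bullet, the kind of pass-through I have in mind might look like this — to be clear, model_kwargs is purely hypothetical and not part of the current PydanticAI API:

# Hypothetical sketch only: a kwargs pass-through for provider-specific
# settings such as Gemini context caching. my_cache_name is illustrative.
model = GeminiModel('gemini-1.5-flash',
                    api_key=os.getenv('GOOGLE_API_KEY'),
                    model_kwargs={'cached_content': my_cache_name})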

It's possible that these exist already and I missed them, or perhaps they will have been implemented by the time you are reading this article. In either case, leave a comment for future readers.

Overall, I like PydanticAI — it offers a very clean and Pythonic way to build agentic applications in an evaluation-driven manner.

Suggested next steps:

1. This is one of those blog posts where you will benefit from actually running the examples, because it describes a process of development as well as a new library. This GitHub repo contains the PydanticAI example I walked through in this post: https://github.com/lakshmanok/lakblogs/tree/main/pydantic_ai_mountains Follow the instructions in the README to try it out.
2. Pydantic AI documentation: https://ai.pydantic.dev/
3. Patching a Langchain workflow with Mock objects. My "before" solution: https://github.com/lakshmanok/lakblogs/blob/main/genai_agents/eval_weather_agent.py


