When Google introduced their Agent Development Kit (ADK) at Cloud Next in April 2025, I'll admit: I wasn't immediately sold.
Then I spent a weekend with ADK, and let me tell you: this one feels different.
It's polished, ambitious, and honestly, quite fun to build with.
Here's what Google ADK is, why it caught my attention, and what building my first multi-agent system with it felt like (spoiler: it's like watching a little AI team collaborate inside your browser).
First, What’s Google ADK Anyway?
ADK, short for Agent Development Kit, is Google's fresh take on how we should be building multi-agent AI applications.
It's open-source, developer-friendly, and based on some serious internal tooling that powers Google's own AI systems.
At its core, ADK gives you:
- Agent-to-agent communication: agents can talk to each other via a clean, standardized protocol (more on that later).
- Agent hierarchies: a main agent can orchestrate several sub-agents, each specialized for different tasks.
- Built-in tools: agents can use Python functions or external APIs to actually do things, not just chat.
- A slick dev experience: including a real-time event panel to trace exactly what your agents are doing.
I've tried LangChain, CrewAI, AutoGen, and a few others. ADK stood out immediately for a few reasons:
✅ Agents collaborate "properly" using a real Agent-to-Agent (A2A) protocol. No more hacks where one agent "hallucinates" the next prompt for another agent.
✅ Built for hierarchies. You don't just have a "chain"; you can have a team of agents, each with its own tools and personality.
✅ Insanely good developer tools. Launch `adk web`, and you get a beautiful panel showing every agent message, every tool call, every decision, step by step.
✅ Streaming and multimodal are baked in. Want an agent that talks back to you in real time? Easy.
✅ Production mindset. Google clearly built ADK not just for tinkering, but for deploying real AI systems.
In short, ADK feels like software engineering for AI agents, not just prompt engineering.
To better understand how Google ADK's agent system works, I explored a basic multi-agent setup:
a main travel planner agent delegating tasks to two specialist sub-agents, one for weather, one for food.
The idea was simple:
- If the user asked about the weather, the planner would hand the question off to the weather agent.
- If the user asked about restaurants, it would delegate to the food agent.
Each sub-agent had its own tools, basic Python functions like this:
```python
def get_weather(city: str) -> str:
    """Get the current weather for the specified city."""
    # Placeholder result; a real tool would call a weather API.
    return "Sunny and 25°C in " + city
```
In ADK, these docstrings are crucial: agents actually read them to understand how and when to use the tools.
It's not just good documentation; it's part of the agent's reasoning.
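Following the same pattern, the food agent got its own tool. The article doesn't show it, so here is a hypothetical sketch of what it might look like (`get_restaurants` and its contents are my own placeholder):

```python
def get_restaurants(city: str) -> str:
    """Recommend popular restaurants in the specified city."""
    # Placeholder data; a real tool would query a restaurant API.
    return "Popular spots in " + city + ": a ramen bar and a dumpling house"
```

The action-focused docstring matters more than the body here: it's what the agent reads when deciding whether this tool fits the user's question.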
Setting up the agents was straightforward:

```python
weather_agent = Agent(
    name="weather_agent",
    description="Handles weather-related queries.",
    model="openai/gpt-4",  # via LiteLLM
    tools=[get_weather],
)

travel_planner = Agent(
    name="travel_planner",
    model="gemini-2.0",  # Google's Gemini model
    description="Plans trips by delegating to specialists.",
    instruction="""
    Use 'weather_agent' for weather questions.
    Use 'food_agent' for restaurant advice.
    """,
    sub_agents=[weather_agent, food_agent],
)
```
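To make the delegation mechanics concrete, here's a toy plain-Python sketch of what the planner is conceptually doing: matching the query against each sub-agent's declared specialty. This is my own illustration of the idea, not ADK's actual (LLM-driven) routing code:

```python
def route(query: str, agents: dict[str, list[str]]) -> str:
    """Pick the sub-agent whose keywords appear in the query.

    `agents` maps an agent name to the topics it handles; this toy
    keyword match stands in for the model-driven delegation ADK does.
    """
    lowered = query.lower()
    for name, keywords in agents.items():
        if any(word in lowered for word in keywords):
            return name
    return "travel_planner"  # no specialist fits: answer directly

registry = {
    "weather_agent": ["weather", "rain", "sunny", "temperature"],
    "food_agent": ["restaurant", "food", "eat", "dinner"],
}

print(route("Will it rain in Lisbon?", registry))       # weather_agent
print(route("Where should we eat tonight?", registry))  # food_agent
```

The real thing is much smarter (the model weighs the sub-agents' descriptions and instructions), but the shape is the same: descriptions in, a handoff decision out.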
This simple example helped surface some notable findings:
- Clear agent hierarchy matters. Explicitly telling the main agent who its sub-agents are (and what they specialize in) made delegation far more reliable.
- Tools need to be well described. Since agents read tool descriptions to decide when to use them, clear, action-focused docstrings made a real difference.
- Developer visibility is excellent. The ADK event panel showed every step (agent decisions, tool calls, sub-agent handoffs), making it easy to debug even complex conversations.
- Third-party model integration is still rough. When I swapped in a local model via Ollama (through LiteLLM), I hit a strange loop where the main agent and sub-agent kept handing tasks back and forth endlessly. It turns out local models plus LiteLLM don't yet handle agent reasoning and tool use as cleanly as Google's models do.
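One simple defensive idea against that ping-pong (my own plain-Python sketch, not an ADK feature) is a handoff budget that turns an endless loop into an explicit, debuggable error:

```python
class HandoffLimitError(RuntimeError):
    """Raised when agents bounce a task back and forth too many times."""

def record_handoff(task: str, hops: int, max_hops: int = 5) -> int:
    """Count one agent-to-agent handoff, refusing to loop forever.

    Call this in your own orchestration code each time a task changes
    hands; a silent infinite loop becomes a loud exception instead.
    """
    if hops >= max_hops:
        raise HandoffLimitError(f"Too many handoffs for task: {task!r}")
    return hops + 1
```

With a cap like this in place, the Ollama loop would have failed fast with a clear error instead of spinning until I killed it.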
The quick takeaways:
- Clear agent-subagent relationships = better reasoning
- Well-described tools = smarter decisions
- Google models = smoother experience
- Ollama/local models = expect growing pains (and funny infinite loops)
- Built-in tools (like web search) currently work best with Google's models