Let’s begin with the term “agent” itself. Right now, it’s being slapped on everything from simple scripts to sophisticated AI workflows. There’s no shared definition, which leaves plenty of room for companies to market basic automation as something far more advanced. That kind of “agentwashing” doesn’t just confuse customers; it invites disappointment. We don’t necessarily need a rigid standard, but we do need clearer expectations about what these systems are supposed to do, how autonomously they operate, and how reliably they perform.
And reliability is the next big challenge. Most of today’s agents are powered by large language models (LLMs), which generate probabilistic responses. These systems are powerful, but they’re also unpredictable. They can make things up, go off track, or fail in subtle ways, especially when they’re asked to complete multistep tasks, pulling in external tools and chaining LLM responses together. A recent example: Users of Cursor, a popular AI programming assistant, were told by an automated support agent that they couldn’t use the software on more than one device. There were widespread complaints and reports of users canceling their subscriptions. But it turned out the policy didn’t exist. The AI had invented it.
In enterprise settings, this kind of mistake could cause immense damage. We need to stop treating LLMs as standalone products and start building complete systems around them: systems that account for uncertainty, monitor outputs, manage costs, and layer in guardrails for safety and accuracy. These measures can help ensure that the output adheres to the requirements expressed by the user, obeys the company’s policies regarding access to information, respects privacy concerns, and so on. Some companies, including AI21 (which I cofounded and which has received funding from Google), are already moving in that direction, wrapping language models in more deliberate, structured architectures. Our latest release, Maestro, is designed for enterprise reliability, combining LLMs with company data, public information, and other tools to ensure dependable outputs.
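To make the idea of a guardrail layer concrete, here is a minimal sketch of what wrapping a model call might look like. Everything here is invented for illustration: `fake_llm` stands in for any model API, and `KNOWN_POLICIES` stands in for a company policy store. This is not AI21’s architecture, just the general pattern of checking a model’s output against ground truth before it reaches a user.

```python
# Illustrative sketch: a guardrail layer that refuses to relay a policy
# claim unless it appears in an authoritative policy store, and escalates
# to a human when no grounded answer can be produced.

KNOWN_POLICIES = {
    "refunds within 30 days",
    "one free seat per account",
}

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call. Like a real LLM, it can invent
    # a policy that does not exist (compare the Cursor incident above).
    if "devices" in prompt:
        return "POLICY: logins limited to one device"  # hallucinated
    return "POLICY: refunds within 30 days"

def policy_is_grounded(reply: str) -> bool:
    # Guardrail: only pass through claims present in the policy store.
    claim = reply.removeprefix("POLICY: ")
    return claim in KNOWN_POLICIES

def answer(prompt: str, retries: int = 2) -> str:
    for _ in range(retries + 1):
        reply = fake_llm(prompt)
        if policy_is_grounded(reply):
            return reply
    return "ESCALATE: no grounded answer found"  # hand off to a human

print(answer("What is your refund policy?"))   # grounded, passes through
print(answer("How many devices can I use?"))   # hallucination blocked
```

The point is not the string matching, which is trivially simple here, but the shape of the system: the model proposes, a separate layer verifies, and failure routes to escalation rather than to the customer.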
Still, even the smartest agent won’t be useful in a vacuum. For the agent model to work, different agents need to cooperate (booking your travel, checking the weather, submitting your expense report) without constant human supervision. That’s where Google’s A2A protocol comes in. It’s meant to be a universal language that lets agents share what they can do and divide up tasks. In principle, it’s a great idea.
In practice, A2A still falls short. It defines how agents talk to each other, but not what they actually mean. If one agent says it can provide “wind conditions,” another has to guess whether that’s useful for evaluating weather on a flight route. Without a shared vocabulary or context, coordination becomes brittle. We’ve seen this problem before in distributed computing. Solving it at scale is far from trivial.