Tool-Using AI Agents - Balancing Competence, Competition and Risk

Tool-using AI agents capable of interacting with well-constructed APIs already exist. In this article we examine how these agents are likely to evolve, the tension between competitive speed and governance risk, and the emerging role of standards like MCP and A2A.

The next generation of AI agents no longer consists of passive models waiting for user input. These agents are active, capable entities that can use external tools and APIs to accomplish complex tasks. Tool-using agents do not simply generate responses - they can make decisions, interact with systems, and trigger real-world actions.

With access to structured documentation and well-designed prompts, today's AI models can already:

  • Understand and apply HTTP response codes - these codes indicate whether a request has succeeded, failed, or needs further action.
  • Parse OpenAPI/Swagger specifications - these specifications describe how APIs work, including available endpoints, input parameters, and expected outputs.
  • Compose API calls using tools like curl - a command-line utility that lets agents make HTTP requests to APIs.
  • Handle pagination, authentication, and error responses intelligently - navigating large datasets, securing access, and recovering from temporary failures.

These capabilities mean that agents can already perform meaningful, real-world tasks with minimal human oversight.
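The capabilities above can be sketched in a few lines of code. The example below is illustrative only - `fetch` stands in for whatever HTTP client an agent actually uses (curl, requests, or a tool-calling wrapper), and the paginated response shape (`items`, `next`) is a hypothetical API, not any specific service:

```python
import time

def call_with_retry(fetch, url, retries=3):
    """Call an endpoint, retrying on transient failures (HTTP 429/5xx)."""
    for attempt in range(retries):
        status, body = fetch(url)
        if status == 200:
            return body
        if status in (429, 500, 502, 503) and attempt < retries - 1:
            time.sleep(0)  # placeholder; a real agent would back off exponentially
            continue
        raise RuntimeError(f"request failed with HTTP {status}")

def fetch_all_pages(fetch, base_url):
    """Follow 'next' links until the API reports no further pages."""
    items, url = [], base_url
    while url:
        page = call_with_retry(fetch, url)
        items.extend(page["items"])
        url = page.get("next")  # absent 'next' terminates the loop
    return items
```

An agent that can read an OpenAPI spec and interpret response codes is, in effect, executing logic like this - discovering the pagination scheme from documentation rather than having it hard-coded.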

However, this technical competence introduces a new tension between two forces:

  • Competitive Pressure - Companies want to move quickly, deploying powerful autonomous agents to gain market advantage.
  • Risk and Governance Pressure - The need to manage operational, reputational, and regulatory risks grows as agents gain autonomy.

Both forces are real. Both will shape the near future of AI agent development.


Governance Challenges

Traditional API gateways - including products like Amazon API Gateway, Apigee, Azure API Management and Kong - provide strong external controls:

  • Authentication and authorization
  • Request schema validation
  • Rate limiting and quota management
  • Logging and monitoring

These are vital tools, and they remain effective. They ensure that only validated, authorized calls reach back-end systems.

But they operate mainly at the network and transport layers. They answer the question, "Can this request be made securely?"

In an agent-driven world, a harder question emerges: "Should this business action happen at all?"

This question cannot be answered by validating JSON payloads or enforcing OAuth scopes alone. It requires governance that understands context, intent, and dynamic business rules.
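The distinction between the two questions can be made concrete. In this hedged sketch, all of the field names, rules, and the refund scenario are invented for illustration - the point is only that a request can pass every gateway-level check and still be an action the business should refuse:

```python
# Gateway-style check: is the request well-formed and authorized?
def gateway_allows(request, valid_tokens):
    has_auth = request.get("token") in valid_tokens
    has_schema = isinstance(request.get("amount"), (int, float)) and "account" in request
    return has_auth and has_schema

# Semantic check: should this business action happen at all?
def business_allows(request, account_state):
    # Example rules: no refunds above the original charge,
    # and none on accounts flagged for fraud review.
    if account_state["fraud_hold"]:
        return False
    return request["amount"] <= account_state["original_charge"]
```

A refund request with a valid token and a correct JSON payload sails through `gateway_allows`, yet `business_allows` can still reject it because the amount exceeds the original charge. That second check is the layer traditional gateways do not see.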

This is the gap that protocols like MCP (Model Context Protocol) and A2A (Agent2Agent) aim to fill.

Introducing MCP and A2A

MCP and A2A are emerging standards designed specifically for the governance of AI agents interacting with external systems. They move beyond traditional API management by providing a structured way for agents to:

  • Discover available capabilities
  • Understand the conditions and constraints attached to those capabilities
  • Request actions in a controlled, auditable manner

MCP, introduced by Anthropic, and A2A, introduced by Google, both aim to create a safer, more predictable environment for agent operations. Rather than relying solely on traditional API documentation or endpoint security, these protocols offer a machine-readable interface where permissible actions are clearly defined and enforced.

This allows organizations to retain control over autonomous systems, ensuring that agents operate within authorized boundaries even as they make decisions independently.


How MCP and A2A Support Governance of AI Agents

These protocols are designed to govern the way autonomous agents interact with tools and systems at a much higher level of abstraction than traditional API management. They provide structured, machine-readable descriptions of what capabilities an agent can use, under what conditions, and with what constraints.

These standards help ensure that:

  • Agents act within clearly defined boundaries based on business context.
  • Constraints and preconditions are enforced before actions are attempted.
  • Capabilities can be dynamically enabled, modified, or revoked based on real-time policy changes.
  • Every decision made by an agent is transparent and auditable at the level of intent and action, not just API calls.

By moving governance to the semantic layer, MCP and A2A help prevent misuse, reduce operational risk, and align agent actions with organizational goals and regulatory requirements. This capability-based governance is essential as agents gain greater autonomy and operate with less human supervision.
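The four properties above can be illustrated with a small capability registry. To be clear, this is not the actual MCP or A2A wire format - the field names, the refund capability, and the thresholds are all invented - but it shows the shape of capability-based governance: declared constraints checked before any action, runtime revocation, and an audit trail of intents rather than raw API calls:

```python
# Illustrative capability registry in the spirit of MCP/A2A. Each entry
# declares what an agent may do and under which preconditions; every
# decision is recorded for audit.
CAPABILITIES = {
    "issue_refund": {
        "enabled": True,
        "max_amount": 200.0,
        "requires_human_approval_above": 100.0,
    },
}

AUDIT_LOG = []

def request_action(agent_id, action, params):
    cap = CAPABILITIES.get(action)
    if cap is None or not cap["enabled"]:
        decision = "denied: capability unavailable"
    elif params["amount"] > cap["max_amount"]:
        decision = "denied: exceeds limit"
    elif params["amount"] > cap["requires_human_approval_above"]:
        decision = "escalated: human approval required"
    else:
        decision = "allowed"
    AUDIT_LOG.append({"agent": agent_id, "action": action,
                      "params": params, "decision": decision})
    return decision
```

Note that flipping `enabled` to `False` revokes the capability at runtime, without redeploying or reprompting the agent - the dynamic policy control described above.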


Reality Check - Existing Tools Will Get You Started

Today, for many internal or supervised applications, strong API documentation, clear business policies, and robust prompts provide adequate governance. Agents operating in narrow, controlled environments can safely use APIs directly when:

  • Documentation is accurate and complete
  • Business rules are simple and stable
  • Human supervision is continuous or frequent

In these environments, adding formal capability governance through MCP or A2A may not be strictly necessary. Traditional API gateways and internal access controls can provide enough protection against common mistakes.

However, as agents become:

  • More autonomous
  • More compositional
  • Capable of making multi-step decisions without human checkpoints

relying solely on documentation and prompts becomes increasingly risky. Agents will need clearer boundaries, stronger enforcement of business rules, and more dynamic control over what actions are permissible.

MCP and A2A are not essential for all agents today. But for any organization looking to scale agent autonomy across critical workflows or regulated environments, protocols like these will become an important part of future-proof governance strategies.

Executives should begin assessing when their environments will cross the threshold from "documentation is enough" to "capability governance is essential," particularly in light of new regulatory pressures such as the EU AI Act and Australia's emerging AI ethics standards.


A Governance Stack for Agent-Driven Systems

As AI agents move from executing simple tasks to making autonomous, complex decisions, governance must operate across multiple layers of system architecture. Each layer plays a specific role in securing and managing agent behavior.

Here is the full governance stack needed to manage agent interactions safely and sustainably:

  5. Semantic Capabilities - governs business actions, intents, and real-world constraints, ensuring agents perform only contextually correct and authorized actions. Examples: MCP, A2A, capability graphs.
  4. API Business Logic - governs domain-specific enforcement, workflows, and business rules, validating preconditions and enforcing policies. Examples: API server logic, backend services.
  3. API Gateway - governs API access control, request validation, and traffic; authenticates, rate-limits, and schema-validates HTTP calls. Examples: Amazon API Gateway, Apigee, Kong, Azure API Management.
  2. Transport Layer - governs message transmission integrity, securing communication and preventing tampering. Examples: HTTPS, TLS, OAuth2, mTLS.
  1. Network Layer - governs physical and data transmission: routing, IP addressing, packet security. Examples: TCP/IP, VPNs, firewalls.

Layers 1 and 2 secure the movement of data across networks.

Layer 3 ensures that only approved requests reach backend systems.

Layer 4 enforces business-specific rules, ensuring agents do not bypass critical workflow checks.

Layer 5, the most recent and vital addition, governs what agents are even allowed to attempt based on business context, current conditions, and organizational policy.

Ignoring the higher layers - especially Layers 4 and 5 - leaves organizations vulnerable to failures of judgment, process violations, and ultimately serious operational or regulatory breaches.

Building governance across all five layers will be key to safely scaling agent-based automation.
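One way to picture the five layers working together is as a chain of checks that a request must clear, in order, before the action executes. This is a deliberately minimal sketch - the lambda rules and request fields are placeholders for real infrastructure (firewalls, TLS termination, gateways, backend logic, and a capability layer), not an implementation of any of them:

```python
# A request must clear every layer before the action executes.
def layered_governance(request, checks):
    for layer_name, check in checks:
        if not check(request):
            return f"blocked at {layer_name}"
    return "executed"

checks = [
    ("network", lambda r: r["source_ip_allowed"]),              # layer 1
    ("transport", lambda r: r["tls"]),                          # layer 2
    ("gateway", lambda r: r["token_valid"]),                    # layer 3
    ("business logic", lambda r: r["amount"] <= r["balance"]),  # layer 4
    ("semantic", lambda r: r["intent_authorized"]),             # layer 5
]
```

The sketch makes the earlier point visible: a request can clear layers 1 through 4 and still be blocked at the semantic layer, which is exactly the failure mode that gateway-only governance misses.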


Strategic Outlook

Over the next few years, two competing dynamics will define agent deployment strategies:

  • Competitive urgency to deploy autonomous agents - leads to faster rollout and higher operational risk.
  • Growing demand for risk management and governance - drives adoption of semantic governance standards.

Executives must be ready to navigate both pressures. Those who move quickly without governance will expose themselves to regulatory, operational, and reputational dangers. Those who over-govern early may lose competitive ground.

The right path balances innovation speed with governance maturity - adjusting dynamically as the environment evolves.

Final Thoughts

In the era of autonomous AI agents, securing decisions will matter more than securing connections. Enterprises that build trustworthy, governable agents will lead the next chapter of AI-driven business transformation.
