CUSTOM CLOUD AI AGENT USING AWS AGENTCORE

OVERVIEW
To support use cases requiring greater flexibility and engineering control than a low-code platform can provide, we built a custom AI agent using AWS AgentCore, an Amazon Bedrock service for building, deploying, and operating conversational AI agents. AgentCore enabled a modular, extensible architecture with developer-level visibility into security, authentication, memory, and tool integration. The result was a scalable foundation for future cloud-ready agent workflows.
THE CHALLENGE
Low-code tools simplify agent creation, but they impose guardrails: minimal debugging capability, limited control over behavior, and constraints on integrations. For more sophisticated applications—custom authentication, cloud deployment, cross-session memory, tool orchestration, and standardized workflows—the team needed a framework that exposed the underlying agent architecture.

This introduced engineering challenges including:

• Setting up secure authentication so the agent could be deployed into its designated AWS environment
• Debugging deployment issues tied to cloud infrastructure
• Managing access control and user identity
• Orchestrating modular components instead of relying on a single managed environment
OUR SOLUTION

We implemented the chatbot using AWS AgentCore, a modular Bedrock service designed to support AI agents end-to-end. AgentCore is model-agnostic, scalable, and compatible with most open-source agent frameworks (e.g., Strands, LangGraph, CrewAI, LlamaIndex). Key concepts incorporated into the implementation included:

1. Core AgentCore Modules

• Runtime: Serverless, secure deployment environment that manages the cloud infrastructure and security model.
• Memory: Persistent, cross-session memory supporting long-term context retention.
• Gateway: Converts APIs and AWS Lambda functions into agent-compatible tools and integrates with external MCP servers.
• Identity: Controls access policies and security permissions for AWS resources and third-party tools.
• Observability: Provides monitoring capabilities for debugging, performance, and behavior tracking.
• Browser: Allows the agent to interact with websites inside a controlled, cloud-based browser environment.
• Code Interpreter: Lets the agent write and execute Python code in a sandbox—useful for complex workflows, KPI analysis, and data transformations.

These modules allowed us to build an agent with fine-grained control over infrastructure, authentication, memory, and tool execution.
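
To make the Runtime module concrete, the sketch below shows the general shape of a hosted agent entrypoint. It is a minimal illustration assuming the bedrock-agentcore Python SDK and a default Strands agent, not our production configuration.

# Minimal sketch of an AgentCore Runtime entrypoint (assumes the
# bedrock-agentcore Python SDK and the Strands Agents SDK are installed).
from bedrock_agentcore.runtime import BedrockAgentCoreApp
from strands import Agent

app = BedrockAgentCoreApp()   # wraps the agent for serverless Runtime hosting
agent = Agent()               # default Strands agent backed by a Bedrock model

@app.entrypoint
def invoke(payload):
    """Handle one invocation: read the prompt, run the agent, return text."""
    user_message = payload.get("prompt", "Hello")
    result = agent(user_message)
    return {"result": str(result)}

if __name__ == "__main__":
    app.run()  # local test server; Runtime hosts the same entrypoint in the cloud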

2. MCP (Model Context Protocol) Integration

AgentCore leverages MCP, an open standard for connecting AI agents to external services, trusted data sources, and developer-defined tools. MCP enables:

• A consistent way for the agent to call APIs and tools
• Structured, reliable task execution
• Developer-side control over how an agent should interact with the world
• Clear definition of what information or tools are trustworthy

MCP acts like a system-level “playbook”: the agent calls tools through well-defined, structured interfaces rather than improvising arbitrary integrations or responses.
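
For illustration, the sketch below shows how a single tool can be exposed over MCP using FastMCP, the framework used for the server build described in the next section. The tool name and logic are hypothetical, and transport options vary by FastMCP version.

# Minimal MCP server sketch using the standalone FastMCP package.
# The tool below is illustrative, not one of our production tools.
from fastmcp import FastMCP

mcp = FastMCP("kpi-tools")

@mcp.tool()
def yoy_growth(current: float, prior: float) -> float:
    """Return year-over-year growth as a percentage."""
    return (current - prior) / prior * 100

if __name__ == "__main__":
    # Serve over streamable HTTP so a remote agent client can discover and call the tool.
    mcp.run(transport="streamable-http")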

3. Development and Setup

• Built an MCP server using FastMCP, a lightweight Python framework with tool-discovery functions.
• Deployed the MCP server to AgentCore Runtime with authentication configured via Amazon Cognito.
• Assigned an Agent ARN (unique identifier) and generated bearer tokens to securely access the MCP server.
• Created an Agent Client using Bedrock and the Strands framework, enabling natural-language interaction with in-session memory and modular toolsets (see the sketch after this list).
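
The client side of that setup can be sketched as follows, assuming the boto3, mcp, and strands-agents packages; the Cognito client ID, user credentials, MCP endpoint URL, and model ID are placeholders rather than real values.

# Sketch: obtain a Cognito bearer token, connect to the AgentCore-hosted MCP
# server, and hand the discovered tools to a Strands agent running on Bedrock.
import boto3
from mcp.client.streamable_http import streamablehttp_client
from strands import Agent
from strands.models import BedrockModel
from strands.tools.mcp import MCPClient

# 1. Authenticate against the Cognito user pool that protects the MCP server.
cognito = boto3.client("cognito-idp", region_name="us-east-1")
auth = cognito.initiate_auth(
    ClientId="<cognito-app-client-id>",           # placeholder
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={"USERNAME": "<user>", "PASSWORD": "<password>"},
)
token = auth["AuthenticationResult"]["AccessToken"]

# 2. Open an MCP connection to the deployed server, passing the bearer token.
mcp_client = MCPClient(
    lambda: streamablehttp_client(
        "<agentcore-mcp-endpoint-url>",           # placeholder endpoint
        headers={"Authorization": f"Bearer {token}"},
    )
)

# 3. Discover the server's tools and give them to a Bedrock-backed Strands agent.
with mcp_client:
    tools = mcp_client.list_tools_sync()
    agent = Agent(model=BedrockModel(model_id="<bedrock-model-id>"), tools=tools)
    agent("Summarize last quarter's KPI trends.")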

KEY FEATURES
• Fully modular agent architecture assembled from AgentCore modules.
• Custom authentication and security configuration for cloud deployment.
• MCP-driven tool and API integration for structured, repeatable tasks.
• Compatibility with multiple open-source agent frameworks.
• Support for complex analytical workflows using Code Interpreter and browser-based interactions (illustrated below).
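
As an example of the analytical work Code Interpreter enables, the agent can generate and run pandas code like the sketch below to pivot raw figures into a multi-year KPI table; the column names and numbers are illustrative, not client data.

# Hypothetical analysis the agent might execute inside the Code Interpreter
# sandbox: pivot raw revenue records into a multi-year KPI table with CAGR.
import pandas as pd

records = pd.DataFrame({
    "year": [2022, 2022, 2023, 2023, 2024, 2024],
    "segment": ["Retail", "Wholesale"] * 3,
    "revenue": [1.20, 0.80, 1.35, 0.95, 1.50, 1.10],  # USD millions (illustrative)
})

# One row per segment, one column per year, plus a two-year CAGR column.
kpi = records.pivot(index="segment", columns="year", values="revenue")
kpi["CAGR 2022-2024"] = (kpi[2024] / kpi[2022]) ** 0.5 - 1
print(kpi.round(3))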
GLOBAL IMPACT/RESULTS
• A far more flexible architecture than low-code tools, enabling deeper customization and advanced capabilities.
• Scalable deployment through serverless infrastructure.
• Improved observability, debugging, and developer control over agent behavior.
• A foundation for long-term AI agent capabilities such as generating multi-year KPI tables or orchestrating multi-step analytical tasks.
• A clear blueprint for building enterprise-grade AI assistants capable of evolving with business needs.
TECHNOLOGIES & SERVICES

AWS AgentCore (Bedrock) — modular agent development and cloud deployment.
FastMCP — Python framework for building MCP servers.
Cognito — authentication and identity management.
Strands — agent framework used for runtime interactions.
MCP — tool and service integration protocol.

CONCLUSION

The AgentCore project demonstrated the power and flexibility of modular, developer-controlled AI agents. While higher in engineering effort, AgentCore enables capabilities far beyond those possible in low-code environments—unlocking custom integrations, secure cloud deployment, standardized workflows, and scalable infrastructure. This positions the organization to build enterprise-grade AI assistants tailored to complex analytical, operational, and automation use cases.

Get In Touch

Want to learn more about our past work or explore how we can support your current initiatives?

Reach out today and let Fiduciary Tech be your trusted partner.

Headquarters

1100 106th Avenue NE, Suite 101F
Bellevue, WA 98004
425-998-8505

info@fiduciarytech.com

Seoul Office

Geunshin Building 506-1, 20 Samgae-ro, Mapo-gu, Seoul, 04173, Republic of Korea
02-712-2227

info@fiduciarytech.com


© 2026 by Fiduciary Technology Solutions 
