Introduction
Proof-of-concept agents are easy to demo. Production agents must be:
- Containerized
- CI/CD-ready
- Scalable
- Monitored
- Memory-persistent
This article explains how to deploy multi-agent LangGraph and CrewAI systems using Docker, GitHub Actions, and real-world infrastructure practices.
Goals of Production Deployment
Deployment-ready agentic systems must support:
- Continuous integration and code validation
- Secure secret management
- Agent restart and checkpoint recovery
- Logging and performance tracing
- Stateless vs stateful scaling patterns
Project Directory Structure
Start by modularizing your repo:
agentic-ai-app/
├── agents/            # Agent role logic
├── tools/             # External tool interfaces
├── workflows/         # LangGraph or CrewAI pipelines
├── memory/            # Redis, Chroma, LangMem backends
├── infra/             # Logging, retries, HITL, auth
├── app.py             # Main entry point
├── requirements.txt
├── .env.template
├── Dockerfile
└── .github/
    └── workflows/
        └── deploy.yml
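To make the entry point concrete, here is a minimal sketch of what app.py might look like. The module and function names (workflows.pipeline, build_workflow, infra.logging, configure_logging) are illustrative placeholders for your own code, not LangGraph or CrewAI APIs.
import os

from workflows.pipeline import build_workflow    # hypothetical module in workflows/
from infra.logging import configure_logging      # hypothetical module in infra/

def main() -> None:
    configure_logging()
    workflow = build_workflow()   # assembles the LangGraph or CrewAI pipeline
    result = workflow.invoke({"task": os.environ.get("AGENT_TASK", "demo")})
    print(result)

if __name__ == "__main__":
    main()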
Dockerizing the Agent System
Sample Dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --upgrade pip && pip install --no-cache-dir -r requirements.txt
COPY . .
ENV PYTHONUNBUFFERED=1
CMD ["python", "app.py"]
Build and run locally:
docker build -t agentic-app .
docker run --env-file .env agentic-app
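To keep secrets and local artifacts out of the image, it is also worth adding a .dockerignore; the entries below are typical suggestions, so adjust them to your repository.
.dockerignore
.env
.git
__pycache__/
*.pyc
venv/
.github/
tests/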
CI/CD with GitHub Actions
Create a simple deployment workflow:
.github/workflows/deploy.yml
name: Deploy Agentic AI App
on:
  push:
    branches: [ main ]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.11"
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
      - name: Lint
        run: |
          flake8 . --exclude=venv
      - name: Run Tests
        run: |
          pytest tests/
  build-docker:
    needs: build-and-test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build Docker Image
        run: |
          docker build -t agentic-app .
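The build-docker job above only builds the image. If you also want CI to publish it, steps along the following lines can be appended to that job. This sketch assumes GitHub Container Registry and the standard docker/login-action; it also assumes the job's GITHUB_TOKEN has packages: write permission. Swap in your own registry and credentials as needed.
      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Push Docker Image
        run: |
          # GHCR image names must be lowercase
          docker tag agentic-app ghcr.io/${{ github.repository }}:latest
          docker push ghcr.io/${{ github.repository }}:latest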
Container Deployment Targets
| Platform | Recommendation |
|---|---|
| Docker Compose | Local dev and testing |
| AWS ECS / Fargate | Auto-scalable, integrates with CloudWatch |
| Azure Container Apps | Good for memory-backed agents |
| Google Cloud Run | Serverless deployment for APIs |
| Railway / Render | Great for preview or beta rollouts |
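For the Docker Compose row above, a minimal docker-compose.yml for local development might look like the following; the redis service is only needed if your memory/ backend expects one.
docker-compose.yml
services:
  agent:
    build: .
    env_file: .env
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
Inside the Compose network, point REDIS_URL at redis://redis:6379 rather than localhost.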
Secret and API Key Management
Never commit API keys. Use:
- .env files for local runs
- GitHub Secrets for CI/CD
- AWS/GCP/Azure vault services for production
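In the GitHub Actions workflow, stored secrets are exposed to a step through its env block. For example, the secret names below mirror the .env.template keys and must first be created under the repository's Settings → Secrets:
      - name: Run Tests
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          SERPAPI_KEY: ${{ secrets.SERPAPI_KEY }}
        run: |
          pytest tests/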
Local .env.template
OPENAI_API_KEY=sk-...
SERPAPI_KEY=...
REDIS_URL=redis://localhost:6379
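At runtime, the agent code can read these values with python-dotenv for local runs and plain environment variables everywhere else. A small sketch, assuming python-dotenv is listed in requirements.txt:
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env locally; harmless when variables come from the platform

OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]   # fail fast if the key is missing
REDIS_URL = os.environ.get("REDIS_URL", "redis://localhost:6379")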
Agent Resilience at Runtime
Once deployed, agents should support:
- Retry on tool/LLM failure
- Logging to file and remote store
- Agent checkpointing or caching
- Health and readiness endpoints for APIs
Add /healthz and /readiness to your agent service:
from fastapi import FastAPI

app = FastAPI()

@app.get("/healthz")
def health_check():
    return {"status": "ok"}

@app.get("/readiness")
def readiness_check():
    # extend this to verify model, tool, and memory connectivity before reporting ready
    return {"status": "ready"}
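For the retry requirement above, a dependency-free wrapper around flaky tool or LLM calls can look like this sketch; the attempt count and backoff values are illustrative defaults.
import logging
import time

def call_with_retries(fn, *args, max_attempts=3, base_delay=1.0, **kwargs):
    """Call fn, retrying with exponential backoff on any exception."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn(*args, **kwargs)
        except Exception as exc:
            logging.warning("attempt %d/%d failed: %s", attempt, max_attempts, exc)
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

# usage: result = call_with_retries(llm.invoke, prompt)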
Deployment Scenarios
| Use Case | Strategy |
|---|---|
| Local developer testing | Docker Compose with mocked tools |
| Continuous testing | GitHub Actions with Pytest |
| Memory-aware batch agents | ECS or Azure Container Apps with Redis |
| Customer-facing agents | Cloud Run or Lambda fronted by API Gateway |
| Offline failover agents | Bake logic into Docker with static dependencies |
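As one concrete example of the customer-facing path in the table, a Cloud Run rollout can be driven from the CLI roughly as follows; the project ID, region, and service name are placeholders, and API keys should come from Secret Manager rather than being baked into the image.
gcloud builds submit --tag gcr.io/PROJECT_ID/agentic-app
gcloud run deploy agentic-app \
  --image gcr.io/PROJECT_ID/agentic-app \
  --region us-central1 \
  --allow-unauthenticated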
External Resources
- LangChain Deployment Guide: https://docs.langchain.com/docs/guides/deployment
- CrewAI Deployment Advice: https://docs.crewai.com/deployment
- LangGraph CLI: https://langchain-ai.github.io/langgraph/guides/cli/
- GitHub Actions Docs: https://docs.github.com/actions
- Docker Best Practices: https://docs.docker.com/develop/dev-best-practices/
Conclusion
Production-ready agents are more than ReAct prompts: they are codebases, containers, workflows, and runtime-managed systems.
By using Docker, CI/CD pipelines, and cloud-native rollouts, you can ensure your agents are maintainable, scalable, and secure.