# Infrastructure
Audra Flow provides a consistent infrastructure story from a developer's laptop all the way to production on AWS. Local development uses Docker Compose for parity with cloud environments, while production infrastructure is fully codified with Terraform.
## Local Development
The local development stack is managed with Docker Compose and a set of Makefile shortcuts. You can run the entire platform or just the backing services depending on your workflow.
### Full Stack (Docker Compose)
Starts all services with hot-reload enabled so that code changes are reflected immediately:
```bash
make dev
```

Once running, the following services are available:
| Service | URL |
|---|---|
| Frontend (Vite) | http://localhost:3000 |
| Backend API | http://localhost:3001 |
| AI Service | http://localhost:8000 |
### Dependencies Only
If you prefer to run the application processes directly on your machine (for faster iteration or debugger support), start only the backing services:
```bash
make deps
```

This brings up PostgreSQL, Redis, and supporting services. You then start the web app and AI service locally using their respective dev commands.
### Common Commands
| Command | Description |
|---|---|
| `make dev` | Start full stack with hot-reload |
| `make deps` | Start only database and Redis |
| `make up` | Start production-optimised build locally |
| `make logs` | Tail logs from all running services |
| `make down` | Stop all services |
| `make clean` | Stop services and remove all volumes |
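A Makefile implementing these targets might look like the following sketch. The target names come from the table above; the recipe bodies and compose file names are assumptions based on the profiles described in this section:

```make
# Hypothetical Makefile sketch — recipe bodies are illustrative.
dev:        ## Full stack with hot-reload
	docker compose -f docker-compose.dev.yml up --build

deps:       ## Backing services only (PostgreSQL, Redis)
	docker compose -f docker-compose.deps.yml up -d

up:         ## Production-optimised build, run locally
	docker compose -f docker-compose.yml up --build -d

logs:       ## Tail logs from all running services
	docker compose logs -f

down:       ## Stop all services
	docker compose down

clean:      ## Stop services and remove all volumes
	docker compose down -v
```

Note that `clean` passes `-v` to `docker compose down`, which removes the named volumes and therefore wipes local database state.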
### Docker Configurations
Three Docker Compose profiles cover every local use case:
| Profile | File | Use Case |
|---|---|---|
| Development | docker-compose.dev.yml | Full stack with hot-reload, source-mounted volumes, and debug logging. |
| Production | docker-compose.yml | Optimised, multi-stage builds that mirror the cloud environment for local testing. |
| Dependencies Only | docker-compose.deps.yml | PostgreSQL, Redis, and ancillary services only — ideal when running application code natively. |
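For orientation, a dependencies-only compose file might look like the sketch below. The image tags, ports, credentials, and volume names are assumptions, not taken from the repository; a pgvector-enabled PostgreSQL image is assumed because the database connection string requires the extension:

```yaml
# Hypothetical sketch of docker-compose.deps.yml — values are illustrative.
services:
  postgres:
    image: pgvector/pgvector:pg16      # pgvector extension assumed to be required
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: localdev      # local-only credential
    volumes:
      - pgdata:/var/lib/postgresql/data   # named volume survives restarts

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redisdata:/data

volumes:
  pgdata:
  redisdata:
```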
Data is persisted in named Docker volumes (PostgreSQL data, Redis cache, uploaded files) so that a container restart does not wipe local state.
## AWS Cloud Deployment
Production workloads run on AWS using a serverless container architecture. The entire stack is provisioned through Terraform, enabling repeatable, auditable deployments.
### Core AWS Services
| Service | Purpose |
|---|---|
| ECS Fargate | Serverless container orchestration for the web app and AI service. |
| RDS PostgreSQL | Managed relational database with Multi-AZ failover and automated backups. |
| ElastiCache (Redis) | Session storage, caching, and rate-limit counters. |
| S3 | Document and asset storage with server-side encryption. |
| Application Load Balancer | TLS termination, path-based routing, and health-check management. |
| Secrets Manager | Secure storage for API keys, database credentials, and JWT secrets. |
| CloudWatch | Centralised logging, metrics, and alerting. |
| ECR | Private container image registry. |
### Terraform Modules
Infrastructure is organised into composable Terraform modules, each responsible for a single concern:
- Networking — VPC, public and private subnets, Internet Gateway, and NAT Gateways.
- Security — Security Groups and IAM roles scoped to the principle of least privilege.
- ECR — Container image repositories for each service.
- RDS — PostgreSQL instance with encryption, automated backups, and optional Multi-AZ.
- ElastiCache — Redis cluster with in-transit (TLS) encryption.
- ECS — Fargate task definitions, services, auto-scaling rules, and load-balancer target groups.
- S3 — Document storage buckets with versioning and lifecycle policies.
- Secrets — AWS Secrets Manager resources for runtime credentials.
- CloudWatch — Log groups, metric filters, and alarms.
Environment-specific values are supplied via `.tfvars` files (e.g., `dev.tfvars`, `prod.tfvars`), keeping the module code reusable across environments.
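Wiring these modules together in a root configuration might look like the following sketch. The module paths, variable names, and output names are assumptions for illustration:

```hcl
# Hypothetical root module composition — names are illustrative.
module "networking" {
  source   = "./modules/networking"
  vpc_cidr = var.vpc_cidr
}

module "rds" {
  source             = "./modules/rds"
  private_subnet_ids = module.networking.private_subnet_ids
  multi_az           = var.multi_az   # e.g. true in prod.tfvars, false in dev.tfvars
}

module "ecs" {
  source             = "./modules/ecs"
  private_subnet_ids = module.networking.private_subnet_ids
  database_url_arn   = module.secrets.database_url_arn
}
```

An environment is then applied with its own variable file, e.g. `terraform apply -var-file=prod.tfvars`.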
## Environment Variables
Audra Flow uses a consistent set of environment variables across local and cloud environments:
| Variable | Description | Required |
|---|---|---|
| `OPENAI_API_KEY` | API key for OpenAI (primary LLM provider). | Yes |
| `DATABASE_URL` | PostgreSQL connection string (with pgvector). | Yes |
| `REDIS_URL` | Redis connection string for caching and sessions. | Yes |
| `JWT_SECRET` | Secret used to sign authentication tokens. | Yes |
| `DEEPSEEK_API_KEY` | API key for DeepSeek (fallback LLM provider). | No |
| `CORS_ORIGINS` | Comma-separated list of allowed origins. | No |
In production, all secrets are stored in AWS Secrets Manager and injected into ECS task definitions at runtime. Locally, a `.env` file in the project root is automatically loaded by Docker Compose.
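Because the required variables are the same locally and in the cloud, a startup check can fail fast when one is missing. A minimal sketch (the function name and placement are assumptions, not part of the actual codebase):

```python
import os

# Required variables from the table above; the optional ones are omitted.
REQUIRED_VARS = ["OPENAI_API_KEY", "DATABASE_URL", "REDIS_URL", "JWT_SECRET"]


def missing_env_vars(env=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]


if __name__ == "__main__":
    missing = missing_env_vars()
    if missing:
        raise SystemExit(f"Missing required environment variables: {', '.join(missing)}")
```

Running this at service startup surfaces a misconfigured environment immediately instead of at the first failed database or LLM call.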
## Monitoring
Observability is built into the deployment from day one:
- CloudWatch Logs — all container stdout and stderr is forwarded to CloudWatch log groups, with configurable retention and metric filters for error keywords.
- CloudWatch Metrics — ALB latency, 5xx error counts, ECS CPU and memory utilisation, and RDS connection counts are tracked automatically.
- Health-Check Endpoints — each service exposes a `/health` (or `/api/health`) endpoint polled by the ALB target group. Unhealthy tasks are automatically replaced by ECS.
- Grafana (optional) — for teams that prefer a unified dashboard experience, CloudWatch data sources can be connected to a self-hosted or managed Grafana instance.
- Automated Alerts — CloudWatch alarms notify the operations team when error rates, latency, or resource utilisation exceed defined thresholds.
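The health-check contract is simple: the ALB marks a task healthy while the endpoint returns HTTP 200. A sketch of what such a handler could return, framework-agnostic and with hypothetical field names (a real implementation would likely also verify its PostgreSQL and Redis connections before reporting healthy):

```python
import time

# Recorded at process start so the payload can report uptime.
_START = time.monotonic()


def health_payload():
    """Body a /health endpoint could return to the ALB target group."""
    return {
        "status": "ok",
        "uptime_seconds": round(time.monotonic() - _START, 1),
    }
```

Keeping the handler cheap matters: the ALB polls it every few seconds, so it should not run expensive queries on each probe.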