Audra Flow

Infrastructure

Audra Flow provides a consistent infrastructure story from a developer's laptop all the way to production on AWS. Local development uses Docker Compose for parity with cloud environments, while production infrastructure is fully codified with Terraform.

Local Development

The local development stack is managed with Docker Compose and a set of Makefile shortcuts. You can run the entire platform or just the backing services depending on your workflow.

Full Stack (Docker Compose)

Starts all services with hot-reload enabled so that code changes are reflected immediately:

```sh
make dev
```

Once running, the following services are available:

| Service | URL |
| --- | --- |
| Frontend (Vite) | http://localhost:3000 |
| Backend API | http://localhost:3001 |
| AI Service | http://localhost:8000 |

Dependencies Only

If you prefer to run the application processes directly on your machine (for faster iteration or debugger support), start only the backing services:

```sh
make deps
```

This brings up PostgreSQL, Redis, and supporting services. You then start the web app and AI service locally using their respective dev commands.
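The exact startup commands depend on each service's own tooling, which this document does not specify. As an illustrative sketch only (assuming a Node-based web app and a Python AI service; directory names and dev scripts are hypothetical), a native workflow might look like:

```sh
# Backing services (PostgreSQL, Redis, ...) run in Docker
make deps

# In one terminal: the web app (script name is illustrative; check the
# service's own package.json / README for the real dev command)
cd web && npm run dev

# In another terminal: the AI service (likewise illustrative; port 8000
# matches the AI Service URL listed above)
cd ai-service && uvicorn main:app --reload --port 8000
```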

Common Commands

| Command | Description |
| --- | --- |
| `make dev` | Start full stack with hot-reload |
| `make deps` | Start only database and Redis |
| `make up` | Start production-optimised build locally |
| `make logs` | Tail logs from all running services |
| `make down` | Stop all services |
| `make clean` | Stop services and remove all volumes |

Docker Configurations

Three Docker Compose profiles cover every local use case:

| Profile | File | Use Case |
| --- | --- | --- |
| Development | `docker-compose.dev.yml` | Full stack with hot-reload, source-mounted volumes, and debug logging. |
| Production | `docker-compose.yml` | Optimised, multi-stage builds that mirror the cloud environment for local testing. |
| Dependencies Only | `docker-compose.deps.yml` | PostgreSQL, Redis, and ancillary services only; ideal when running application code natively. |
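The Makefile targets presumably wrap `docker compose` invocations along these lines (the exact flags are an assumption, not taken from the repository's Makefile):

```sh
# Development profile: full stack with hot-reload
docker compose -f docker-compose.dev.yml up --build

# Dependencies only, detached
docker compose -f docker-compose.deps.yml up -d

# Production-optimised build for local testing (roughly `make up`)
docker compose -f docker-compose.yml up --build -d
```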

Data is persisted in named Docker volumes (PostgreSQL data, Redis cache, uploaded files) so that a container restart does not wipe local state.
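As an illustrative fragment (service and volume names are assumptions), named volumes of this kind are typically declared like so in the Compose file:

```yaml
# Hypothetical excerpt from docker-compose.dev.yml
services:
  postgres:
    image: postgres:16
    volumes:
      - postgres_data:/var/lib/postgresql/data  # survives container restarts
  redis:
    image: redis:7
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:
```

Because the data lives in the named volumes rather than the container filesystem, `make down` followed by `make dev` preserves state, while `make clean` removes the volumes and resets it.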

AWS Cloud Deployment

Production workloads run on AWS using a serverless container architecture. The entire stack is provisioned through Terraform, enabling repeatable, auditable deployments.

Core AWS Services

| Service | Purpose |
| --- | --- |
| ECS Fargate | Serverless container orchestration for the web app and AI service. |
| RDS PostgreSQL | Managed relational database with Multi-AZ failover and automated backups. |
| ElastiCache (Redis) | Session storage, caching, and rate-limit counters. |
| S3 | Document and asset storage with server-side encryption. |
| Application Load Balancer | TLS termination, path-based routing, and health-check management. |
| Secrets Manager | Secure storage for API keys, database credentials, and JWT secrets. |
| CloudWatch | Centralised logging, metrics, and alerting. |
| ECR | Private container image registry. |

Terraform Modules

Infrastructure is organised into composable Terraform modules, each responsible for a single concern:

  • Networking — VPC, public and private subnets, Internet Gateway, and NAT Gateways.
  • Security — Security Groups and IAM roles scoped to the principle of least privilege.
  • ECR — Container image repositories for each service.
  • RDS — PostgreSQL instance with encryption, automated backups, and optional Multi-AZ.
  • ElastiCache — Redis cluster with in-transit encryption (TLS).
  • ECS — Fargate task definitions, services, auto-scaling rules, and load-balancer target groups.
  • S3 — Document storage buckets with versioning and lifecycle policies.
  • Secrets — AWS Secrets Manager resources for runtime credentials.
  • CloudWatch — Log groups, metric filters, and alarms.
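A root configuration typically composes modules like these by wiring one module's outputs into another's inputs. The snippet below is a sketch under assumed module paths, variable names, and outputs, not the project's actual layout:

```hcl
# Hypothetical root module wiring -- paths and inputs are illustrative
module "networking" {
  source   = "./modules/networking"
  vpc_cidr = var.vpc_cidr
}

module "rds" {
  source     = "./modules/rds"
  subnet_ids = module.networking.private_subnet_ids
  multi_az   = var.rds_multi_az
}

module "ecs" {
  source             = "./modules/ecs"
  private_subnet_ids = module.networking.private_subnet_ids
  db_endpoint        = module.rds.endpoint
}
```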

Environment-specific values are supplied via .tfvars files (e.g., dev.tfvars, prod.tfvars), keeping the module code reusable across environments.
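As an example, a dev.tfvars might look like the following (the variable names are assumptions based on the modules listed above, not the project's actual inputs):

```hcl
# Hypothetical dev.tfvars -- names and values are illustrative
environment       = "dev"
aws_region        = "eu-west-1"
vpc_cidr          = "10.0.0.0/16"
rds_multi_az      = false  # Multi-AZ reserved for production
ecs_desired_count = 1
```

Applying an environment is then a matter of `terraform apply -var-file=dev.tfvars`, with the same module code shared by every environment.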

Environment Variables

Audra Flow uses a consistent set of environment variables across local and cloud environments:

VariableDescriptionRequired
OPENAI_API_KEYAPI key for OpenAI (primary LLM provider).Yes
DATABASE_URLPostgreSQL connection string (with pgvector).Yes
REDIS_URLRedis connection string for caching and sessions.Yes
JWT_SECRETSecret used to sign authentication tokens.Yes
DEEPSEEK_API_KEYAPI key for DeepSeek (fallback LLM provider).No
CORS_ORIGINSComma-separated list of allowed origins.No

In production, all secrets are stored in AWS Secrets Manager and injected into ECS task definitions at runtime. Locally, a .env file in the project root is automatically loaded by Docker Compose.
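A local .env might therefore look like this. All values are placeholders (host names, database name, and ports are assumptions matching the local stack described above); the optional variables from the table are shown commented out:

```sh
# .env -- placeholder values only; never commit real secrets
OPENAI_API_KEY=sk-your-key-here
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/audraflow
REDIS_URL=redis://localhost:6379/0
JWT_SECRET=change-me-locally

# Optional:
# DEEPSEEK_API_KEY=your-fallback-key
# CORS_ORIGINS=http://localhost:3000
```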

Monitoring

Observability is built into the deployment from day one:

  • CloudWatch Logs — all container stdout and stderr are forwarded to CloudWatch log groups, with configurable retention and metric filters for error keywords.
  • CloudWatch Metrics — ALB latency, 5xx error counts, ECS CPU and memory utilisation, and RDS connection counts are tracked automatically.
  • Health-Check Endpoints — each service exposes a /health (or /api/health) endpoint polled by the ALB target group. Unhealthy tasks are automatically replaced by ECS.
  • Grafana (optional) — for teams that prefer a unified dashboard experience, CloudWatch data sources can be connected to a self-hosted or managed Grafana instance.
  • Automated Alerts — CloudWatch alarms notify the operations team when error rates, latency, or resource utilisation exceed defined thresholds.
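As an example of the alarm category (the namespace and metric name are standard CloudWatch ALB metrics, but the thresholds, resource names, and variables are assumptions), a Terraform alarm on ALB 5xx responses might look like:

```hcl
# Hypothetical CloudWatch alarm on ALB 5xx responses
resource "aws_cloudwatch_metric_alarm" "alb_5xx" {
  alarm_name          = "audra-flow-alb-5xx"
  namespace           = "AWS/ApplicationELB"
  metric_name         = "HTTPCode_ELB_5XX_Count"
  statistic           = "Sum"
  period              = 300
  evaluation_periods  = 1
  threshold           = 10
  comparison_operator = "GreaterThanThreshold"

  dimensions = {
    LoadBalancer = var.alb_arn_suffix  # illustrative variable
  }

  alarm_actions = [var.ops_sns_topic_arn]  # illustrative SNS topic for the ops team
}
```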