All compute, databases, and services live in a single VPC with public or minimally filtered access. Security groups are the only layer of network control. Common in demos, MVPs, and hackathon-to-company transitions where shipping speed dominates every other concern.
- Single VPC, one or two subnets
- RDS/databases in public subnets or behind 0.0.0.0/0 SG rules
- SSH via public IP, no bastion
- EC2/containers with public IPs
- Shared AWS account — prod = dev
- Lateral movement is trivial if any host compromised
- Database directly reachable from internet
- No audit trail for network access
- Overly permissive IAM (wildcards common)
- No DDoS protection
- None meaningfully attainable
- SOC 2 prep requires significant rearchitecture
- Complete — full environment compromise likely from any entry point
- All customer data at risk from single breach
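The 0.0.0.0/0 pattern above can be made concrete. A hypothetical Terraform sketch (resource names and the Postgres port are illustrative) showing the defining Level 0 mistake, followed by the minimal fix that begins the climb to the next level:

```hcl
# Anti-pattern: a database security group reachable from the entire internet.
resource "aws_security_group_rule" "db_open_to_world" {
  type              = "ingress"
  from_port         = 5432
  to_port           = 5432
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"] # the core mistake
  security_group_id = aws_security_group.db.id
}

# Minimal fix: restrict database ingress to the app tier's security group.
resource "aws_security_group_rule" "db_from_app_only" {
  type                     = "ingress"
  from_port                = 5432
  to_port                  = 5432
  protocol                 = "tcp"
  source_security_group_id = aws_security_group.app.id
  security_group_id        = aws_security_group.db.id
}
```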
Databases and internal services move to private subnets behind NAT. Internet-facing resources (load balancer, CDN) sit in public subnets. Security groups tightened to explicit port/source rules. Often the first "real" architecture after a scare or compliance push.
- Public subnets for LB/NAT only
- Private subnets for app tier and databases
- NAT Gateway for egress (single-AZ)
- ALB/NLB as entry point
- Still likely a single AWS account
- No WAF — app-layer attacks unmitigated
- Single AZ — single NAT = SPOF for egress
- No VPN — admin access still via jump boxes or public exposure
- Minimal logging beyond CloudTrail basics
- No IDS/IPS on traffic
- SOC 2 Type 1 (with supplemental controls)
- HIPAA with BAA — borderline, needs more controls
- High — app-tier compromise still reaches DB
- DB no longer directly internet-exposed
- No east-west traffic controls within private subnet
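The public/private split described above might be sketched in Terraform roughly as follows (CIDRs and resource names are illustrative, not prescriptive):

```hcl
# Public subnet: holds only the LB and NAT Gateway.
resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.0.0/24"
  map_public_ip_on_launch = true
}

# Private subnet: app tier and databases, no public IPs.
resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_eip" "nat" {
  domain = "vpc"
}

# Single NAT Gateway: the egress SPOF called out above.
resource "aws_nat_gateway" "egress" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id
}

# Private tier reaches the internet only via NAT.
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.egress.id
  }
}

resource "aws_route_table_association" "private" {
  subnet_id      = aws_subnet.private.id
  route_table_id = aws_route_table.private.id
}
```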
Security controls added at multiple layers. WAF protects the application perimeter. Engineers access production only via VPN or bastion host. AWS accounts split at minimum into prod/staging/dev. Logging and threat detection active. SOC 2 achievable at this level.
- WAF in front of ALB (OWASP ruleset)
- Separate prod / non-prod AWS accounts
- VPN or bastion for admin access
- VPC Flow Logs → CloudWatch/SIEM
- GuardDuty + Security Hub enabled
- KMS encryption for data at rest
- Still single-region — regional outage = downtime
- No network-layer micro-segmentation
- Manual incident response processes
- No formal data classification in network design
- Bastion is still a privileged SPOF
- SOC 2 Type 1 and Type 2
- ISO 27001 with process work
- HIPAA (with additional controls)
- Medium-high — app compromise still significant
- Non-prod isolated from customer data
- Attack detection now possible
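Three of the controls listed above (WAF on the ALB, VPC Flow Logs, GuardDuty) can be sketched in Terraform; the referenced ALB, web ACL, log group, and IAM role are assumed to exist elsewhere in the configuration:

```hcl
# Attach a WAF web ACL (e.g. one using AWS OWASP managed rules) to the ALB.
resource "aws_wafv2_web_acl_association" "alb" {
  resource_arn = aws_lb.app.arn
  web_acl_arn  = aws_wafv2_web_acl.owasp.arn
}

# Ship VPC Flow Logs to CloudWatch for SIEM ingestion.
resource "aws_flow_log" "vpc" {
  vpc_id               = aws_vpc.main.id
  traffic_type         = "ALL"
  log_destination_type = "cloud-watch-logs"
  log_destination      = aws_cloudwatch_log_group.flow.arn
  iam_role_arn         = aws_iam_role.flow_logs.arn
}

# Enable GuardDuty threat detection in this account/region.
resource "aws_guardduty_detector" "main" {
  enable = true
}
```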
True high availability through active-active deployment across multiple Availability Zones. No single AZ failure causes downtime. Databases replicate synchronously. Auto-scaling covers load spikes. Infrastructure defined entirely in code (Terraform/CDK). Mature secrets management.
- 3 AZs, active-active across all tiers
- RDS Multi-AZ with read replicas
- Auto-scaling groups with health checks
- AWS PrivateLink for service communication
- Secrets Manager / Vault for credential mgmt
- Full IaC (Terraform / CDK / Pulumi)
- AWS Organizations with SCPs
- Single region — regional AWS outage = outage
- No east-west micro-segmentation (flat private network)
- Bastion still common — no zero-trust
- No formal data residency controls
- Perimeter-based security model still dominant
- SOC 2 Type 2 (mature)
- PCI-DSS Level 1 (with segmentation work)
- ISO 27001 certified
- HIPAA / HITRUST
- Medium — AZ failure well-contained
- App compromise still can reach data tier
- Good detection and response capability
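The Multi-AZ data tier and auto-scaled app tier might look roughly like this in Terraform (instance sizes, identifiers, and the private-subnet list are illustrative):

```hcl
# Multi-AZ primary: synchronous standby in a second AZ.
resource "aws_db_instance" "primary" {
  identifier     = "app-primary"
  engine         = "postgres"
  instance_class = "db.r6g.large"
  multi_az       = true
  # storage, credentials, and subnet group elided
}

# Asynchronous read replica for read scaling.
resource "aws_db_instance" "read_replica" {
  identifier          = "app-replica-1"
  replicate_source_db = aws_db_instance.primary.identifier
  instance_class      = "db.r6g.large"
}

# App tier spread across the private subnets (one per AZ).
resource "aws_autoscaling_group" "app" {
  min_size            = 3
  max_size            = 12
  vpc_zone_identifier = aws_subnet.private[*].id
  health_check_type   = "ELB"
  target_group_arns   = [aws_lb_target_group.app.arn]

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
}
```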
Production traffic is served from a primary region with a warm standby in a secondary region. Transit Gateway interconnects VPCs across accounts and regions. Cross-region database replication enables an RTO under 15 minutes. Data residency policies are enforced by region. Disaster recovery is tested regularly.
- Primary region (active) + DR region (warm standby)
- Transit Gateway hub-and-spoke across accounts
- Cross-region RDS read replica promotion path
- Route 53 health-check-driven failover
- AWS Config + Security Hub cross-account
- Data residency tagging and enforcement
- Quarterly DR failover testing
- Not truly active-active — failover has RTO
- Transit Gateway is a centralized chokepoint
- East-west traffic still not micro-segmented
- Identity perimeter model, not zero-trust
- Third-party SaaS integrations often not assessed
- FedRAMP Moderate
- PCI-DSS Level 1
- GDPR data residency controls
- DORA (EU digital operational resilience)
- Low-medium — regional failures auto-recovered
- Security incidents still can spread within region
- Network-level east-west still under-controlled
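The health-check-driven failover can be sketched as a Route 53 PRIMARY/SECONDARY record pair (domain, path, and load balancer names are hypothetical):

```hcl
resource "aws_route53_health_check" "primary" {
  fqdn              = "app.example.com"
  type              = "HTTPS"
  port              = 443
  resource_path     = "/healthz"
  failure_threshold = 3
  request_interval  = 30
}

# Primary region serves traffic while healthy.
resource "aws_route53_record" "primary" {
  zone_id         = aws_route53_zone.main.zone_id
  name            = "app.example.com"
  type            = "A"
  set_identifier  = "primary"
  health_check_id = aws_route53_health_check.primary.id

  failover_routing_policy {
    type = "PRIMARY"
  }

  alias {
    name                   = aws_lb.primary.dns_name
    zone_id                = aws_lb.primary.zone_id
    evaluate_target_health = true
  }
}

# Warm standby in the DR region takes over on health-check failure.
resource "aws_route53_record" "dr" {
  zone_id        = aws_route53_zone.main.zone_id
  name           = "app.example.com"
  type           = "A"
  set_identifier = "dr"

  failover_routing_policy {
    type = "SECONDARY"
  }

  alias {
    name                   = aws_lb.dr.dns_name
    zone_id                = aws_lb.dr.zone_id
    evaluate_target_health = true
  }
}
```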
The network perimeter is abolished. Every service-to-service connection is authenticated and encrypted with mTLS regardless of network location. Workload identity (SPIFFE/SPIRE) replaces IP-based trust. SASE or SSE replaces VPN for human access. Network policies are enforced at the workload level.
- Service mesh (Istio/Linkerd/Consul) with mTLS
- SPIFFE/SPIRE for workload identity
- SASE or SSE platform (Zscaler, Cloudflare Access)
- Network policies as code (OPA / network policy CRDs)
- No VPN for human access — identity-aware proxy only
- eBPF-based network observability (Hubble/Cilium)
- Continuous authorization on every request
- Operationally complex — high eng expertise required
- Service mesh overhead (latency, resource cost)
- Legacy services hard to mesh without refactoring
- Not yet active-active globally for all services
- FedRAMP High / IL4
- NIST 800-207 (Zero Trust Architecture)
- CMMC Level 3
- All previous frameworks at maximum maturity
- Low — lateral movement requires identity compromise
- Each service isolated at network policy level
- Compromised credential = minimal lateral reach
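"Network policies as code" can be illustrated with a default-deny stance plus one explicit allow, here expressed through Terraform's Kubernetes provider (namespace, labels, and port are hypothetical; the same intent could be written as raw NetworkPolicy CRDs):

```hcl
# Default-deny: nothing in the namespace talks to anything
# unless a policy explicitly allows it.
resource "kubernetes_network_policy" "default_deny" {
  metadata {
    name      = "default-deny"
    namespace = "payments"
  }
  spec {
    pod_selector {}
    policy_types = ["Ingress", "Egress"]
  }
}

# Explicit allow: only the checkout service may reach payments,
# and only on its mTLS port.
resource "kubernetes_network_policy" "checkout_to_payments" {
  metadata {
    name      = "allow-checkout"
    namespace = "payments"
  }
  spec {
    pod_selector {
      match_labels = {
        app = "payments"
      }
    }
    ingress {
      from {
        pod_selector {
          match_labels = {
            app = "checkout"
          }
        }
      }
      ports {
        port     = "8443"
        protocol = "TCP"
      }
    }
    policy_types = ["Ingress"]
  }
}
```

With the default-deny in place, lateral movement requires both a compromised workload identity and a policy that permits the path, which is the containment property described above.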
Traffic routes to the nearest Point of Presence via anycast BGP. Active-active deployments across multiple clouds and regions serve requests with sub-20ms latency globally. Custom networking hardware, BGP peering with major ISPs, and global traffic management through proprietary or Cloudflare/Fastly-class infrastructure.
- Global anycast via BGP (own ASN or Cloudflare/Fastly)
- Active-active across 3+ cloud regions
- Multi-cloud (AWS + GCP or Azure)
- Custom CDN or major CDN deeply integrated
- Direct peering at major IXPs (DE-CIX, Equinix)
- Global load balancing with geoproximity routing
- All zero-trust controls from the previous level retained and extended
- Extraordinary operational complexity
- Requires dedicated network engineering team
- Multi-cloud data sync and consistency hard
- Regulatory complexity across jurisdictions
- Cost scales very aggressively
- All previous frameworks
- Sovereign cloud / data localization per country
- Custom compliance postures per region
- Minimal — fault domains are geographically isolated
- No single region or cloud failure affects global traffic
- Security incidents contained within cloud boundary
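True anycast at this level means announcing your own prefixes over BGP from every PoP, which has no simple config sketch. The managed approximation, DNS-level latency routing across active-active regions, can be hedged in Terraform like this (zone, domain, and load balancer names are illustrative):

```hcl
# Clients resolve to whichever region answers with the lowest latency.
resource "aws_route53_record" "api_us" {
  zone_id        = aws_route53_zone.main.zone_id
  name           = "api.example.com"
  type           = "A"
  set_identifier = "us-east-1"

  latency_routing_policy {
    region = "us-east-1"
  }

  alias {
    name                   = aws_lb.us_east_1.dns_name
    zone_id                = aws_lb.us_east_1.zone_id
    evaluate_target_health = true
  }
}

resource "aws_route53_record" "api_eu" {
  zone_id        = aws_route53_zone.main.zone_id
  name           = "api.example.com"
  type           = "A"
  set_identifier = "eu-central-1"

  latency_routing_policy {
    region = "eu-central-1"
  }

  alias {
    name                   = aws_lb.eu_central_1.dns_name
    zone_id                = aws_lb.eu_central_1.zone_id
    evaluate_target_health = true
  }
}
```

Because `evaluate_target_health` is set, an unhealthy region drops out of resolution automatically, matching the "no single region failure affects global traffic" property, though DNS TTLs make this slower than real anycast withdrawal.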