// Cloud Security Assessment Framework

Cloud Network Maturity Catalog

A maturity-based reference for evaluating the network architecture sophistication of SaaS companies. Use these patterns to assess vendor security posture, blast radius, and operational readiness during security reviews.

L1 — Flat VPC · Seed / Pre-Product
L2 — Segmented VPC · Early Startup
L3 — Layered Defense · Series A
L4 — HA Multi-AZ · Series B
L5 — Multi-Region · Series C
L6 — Zero-Trust · Enterprise / Late Stage
L7 — Global Fabric · Hyperscale
Level 1 Seed / Pre-Product
Flat VPC
Single VPC · Everything public · No segmentation

All compute, databases, and services live in a single VPC with public or minimally filtered access. Security groups are the only control plane. Common in demos, MVPs, and hackathon-to-company transitions where shipping speed dominates all other concerns.

What you'll see
  • Single VPC, one or two subnets
  • RDS/databases in public subnet or 0.0.0.0/0 SGs
  • SSH via public IP, no bastion
  • EC2/containers with public IPs
  • Shared AWS account — prod = dev
Security risks
  • Lateral movement is trivial if any host compromised
  • Database directly reachable from internet
  • No audit trail for network access
  • Overly permissive IAM (wildcards common)
  • No DDoS protection
Compliances achievable
  • None meaningfully attainable
  • SOC 2 prep requires significant rearchitecture
Blast radius
  • Complete — full environment compromise likely from any entry point
  • All customer data at risk from single breach
// Assessment signals — what to ask / look for
DB in public subnet? SGs with 0.0.0.0/0? Single AWS account? No VPN/bastion? Shared prod/dev? No CloudTrail?
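The first two questions above can be checked mechanically. A minimal sketch, assuming security-group ingress rules flattened into dicts that loosely follow the shape of boto3's `describe_security_groups` output (the per-rule `GroupId` key is added here for illustration; in the real API it lives on the parent group):

```python
# Flag security-group ingress rules that open sensitive ports to the world —
# the canonical L1 signal (SSH, MySQL, PostgreSQL reachable from 0.0.0.0/0).
SENSITIVE_PORTS = {22, 3306, 5432}

def world_open_findings(rules):
    """Return sorted (group_id, port) pairs where a sensitive port is open to 0.0.0.0/0."""
    findings = []
    for rule in rules:
        cidrs = {r["CidrIp"] for r in rule.get("IpRanges", [])}
        if "0.0.0.0/0" not in cidrs:
            continue
        lo = rule.get("FromPort", 0)        # missing bounds = all ports
        hi = rule.get("ToPort", 65535)
        for port in sorted(SENSITIVE_PORTS):
            if lo <= port <= hi:
                findings.append((rule["GroupId"], port))
    return findings
```

Any non-empty result is strong evidence the vendor sits at L1 regardless of what the questionnaire says.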
Availability
1/10
Security
1/10
Observability
1/10
Ops Complexity
1/10
Diagram — single VPC (10.0.0.0/16) with one public subnet (10.0.0.0/24): app server, database, and worker/cron all carry public IPs alongside an S3 bucket; one security group opens ports 22, 5432, and 3306 to 0.0.0.0/0. No network isolation — everything reachable.
Blast Radius
CRITICAL
→ Triggers move to L2
First enterprise prospect sends a security questionnaire. Any SOC 2 prep engagement. First security incident. Hiring a security-minded engineer.
Level 2 Early Startup · Seed-A
Basic Subnet Segmentation
Public + Private subnets · NAT Gateway · Security group cleanup

Databases and internal services move to private subnets behind NAT. Internet-facing resources (load balancer, CDN) sit in public subnets. Security groups tightened to explicit port/source rules. Often the first "real" architecture after a scare or compliance push.

What you'll see
  • Public subnets for LB/NAT only
  • Private subnets for app tier and databases
  • NAT Gateway for egress (single-AZ)
  • ALB/NLB as entry point
  • Still likely a single AWS account
Remaining risks
  • No WAF — app-layer attacks unmitigated
  • Single AZ — single NAT = SPOF for egress
  • No VPN — admin access still via jump boxes or public exposure
  • Minimal logging beyond CloudTrail basics
  • No IDS/IPS on traffic
Compliances achievable
  • SOC 2 Type 1 (with supplemental controls)
  • HIPAA with BAA — borderline, needs more controls
Blast radius
  • High — app-tier compromise still reaches DB
  • DB no longer directly internet-exposed
  • No east-west traffic controls within private subnet
// Assessment signals
Public/private subnet split? DB in private subnet? Multi-AZ NAT? VPC Flow Logs enabled? GuardDuty on? CloudTrail all-regions?
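The "DB in private subnet?" question reduces to a route-table check: a subnet is private if its route table has no route to an internet gateway. A minimal sketch, assuming route dicts that loosely mirror boto3's `describe_route_tables` output:

```python
def is_private_subnet(routes):
    """True if no route in the subnet's route table targets an internet gateway (igw-*).

    Private subnets at L2 typically carry a 'local' route plus a default
    route through a NAT gateway (NatGatewayId), never a GatewayId of igw-*.
    """
    return not any(r.get("GatewayId", "").startswith("igw-") for r in routes)
```

Note this tests reachability, not intent: a "private" subnet whose route table later gains an igw- default route silently becomes public.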
Availability
3/10
Security
3/10
Observability
2/10
Ops Complexity
2/10
Diagram — VPC 10.0.0.0/16: public subnet 10.0.1.0/24 holds the ALB and a single-AZ NAT gateway; private subnet 10.0.2.0/24 holds the app tier (EC2/ECS), RDS (private only), and ElastiCache. DB isolated · app layer still unprotected.
Blast Radius
HIGH
→ Triggers move to L3
SOC 2 Type 2 audit. First CISO hire. Enterprise customer with strict vendor requirements. PCI-DSS scoping begins.
Level 3 Series A · SOC 2 Ready
Layered Defense
WAF · Bastion/VPN · Separate accounts · Flow logs · GuardDuty

Security controls added at multiple layers. WAF protects the application perimeter. Engineers access production only via VPN or bastion host. AWS accounts split at minimum into prod/staging/dev. Logging and threat detection active. SOC 2 achievable at this level.

What you'll see
  • WAF in front of ALB (OWASP ruleset)
  • Separate prod / non-prod AWS accounts
  • VPN or bastion for admin access
  • VPC Flow Logs → CloudWatch/SIEM
  • GuardDuty + Security Hub enabled
  • KMS encryption for data at rest
Remaining gaps
  • Still single-region — regional outage = downtime
  • No network-layer micro-segmentation
  • Manual incident response processes
  • No formal data classification in network design
  • Bastion is still a privileged SPOF
Compliances achievable
  • SOC 2 Type 1 and Type 2
  • ISO 27001 with process work
  • HIPAA (with additional controls)
Blast radius
  • Medium-high — app compromise still significant
  • Non-prod isolated from customer data
  • Attack detection now possible
// Assessment signals
WAF configured? Multi-account org? VPN for admin? GuardDuty active? SIEM ingesting logs? KMS key rotation? Patch mgmt process?
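The signal questions above can be tallied into a rough pass/fail score for the review write-up. A sketch under the assumption that each question maps to a boolean in an `evidence` dict; the keys are invented here, so align them with whatever evidence your review actually collects:

```python
# One illustrative key per L3 signal question above (names are made up).
L3_SIGNALS = [
    "waf_configured", "multi_account_org", "vpn_for_admin",
    "guardduty_active", "siem_ingesting_logs", "kms_key_rotation",
    "patch_mgmt_process",
]

def l3_score(evidence):
    """Fraction of L3 signals confirmed true; missing keys count as unconfirmed."""
    hits = sum(1 for k in L3_SIGNALS if evidence.get(k))
    return hits / len(L3_SIGNALS)
```

A score near 1.0 supports placing the vendor at L3; a low score with a public/private split still in place suggests L2.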
Availability
4/10
Security
5/10
Observability
5/10
Ops Complexity
4/10
Diagram — AWS WAF + CloudFront/Shield (OWASP Top 10, rate limiting, DDoS) fronts a prod VPC in a separate account: public ALB, NAT gateway, and a VPN-only bastion; a private app subnet (app/ECS, workers) and a private data subnet (RDS, ElastiCache, S3 via VPC endpoint). GuardDuty, Security Hub, CloudTrail, and Flow Logs feed a SIEM. SOC 2 achievable · single region still a risk.
Blast Radius
MEDIUM-HIGH
→ Triggers move to L4
99.9% SLA commitment to customers. First enterprise contract with uptime clauses. Engineering org > 20 people. PCI-DSS or FedRAMP scoping.
Level 4 Series B · HA Production
Multi-AZ High Availability
Active-Active AZs · Private Link · Service isolation · IaC-driven

True high availability through active-active deployment across multiple Availability Zones. No single AZ failure causes downtime. Databases replicate synchronously. Auto-scaling covers load spikes. Infrastructure defined entirely in code (Terraform/CDK). Mature secrets management.

What you'll see
  • 3 AZs, active-active across all tiers
  • RDS Multi-AZ with read replicas
  • Auto-scaling groups with health checks
  • AWS PrivateLink for service communication
  • Secrets Manager / Vault for credential mgmt
  • Full IaC (Terraform / CDK / Pulumi)
  • AWS Organizations with SCPs
Remaining gaps
  • Single region — regional AWS outage = outage
  • No east-west micro-segmentation (flat private network)
  • Bastion still common — no zero-trust
  • No formal data residency controls
  • Perimeter-based security model still dominant
Compliances achievable
  • SOC 2 Type 2 (mature)
  • PCI-DSS Level 1 (with segmentation work)
  • ISO 27001 certified
  • HIPAA / HITRUST
Blast radius
  • Medium — AZ failure well-contained
  • App compromise still can reach data tier
  • Good detection and response capability
// Assessment signals
Multi-AZ confirmed? RDS Multi-AZ? IaC for all infra? Secrets Manager? PrivateLink used? AWS SCPs in place? Runbook for DR? RTO/RPO defined?
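"Multi-AZ confirmed?" is worth verifying per tier, not just per account — a three-AZ app fleet over a single-AZ cache is not L4. A minimal sketch over instance placement records (the `tier`/`az` keys are assumptions for illustration):

```python
from collections import defaultdict

def az_spread(instances):
    """Distinct Availability Zones per tier, e.g. {'app': 3, 'db': 2}.

    At L4 every tier should report >= 2, and ideally 3, matching the
    active-active-across-3-AZs pattern described above.
    """
    azs = defaultdict(set)
    for inst in instances:
        azs[inst["tier"]].add(inst["az"])
    return {tier: len(zones) for tier, zones in azs.items()}
```

Any tier reporting 1 is a single-AZ SPOF hiding inside an otherwise multi-AZ story.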
Availability
7/10
Security
6/10
Observability
6/10
Ops Complexity
5/10
Diagram — WAF → multi-AZ ALB in us-east-1 spanning AZ-a/b/c: app instances, cache nodes, and a NAT gateway in each AZ; RDS primary (writer) with a synchronous standby and a read replica. 99.9%+ SLA achievable · single-region risk remains.
Blast Radius
MEDIUM
→ Triggers move to L5
99.99% SLA required. First international expansion. Regulated industry customers (finance, healthcare) requiring DR proof. ARR > $10M with uptime-sensitive workloads.
Level 5 Series C · Multi-Region
Multi-Region Active-Passive
Transit Gateway · Cross-region replication · DR failover · Data residency

Production traffic serves from a primary region with a warm standby in a secondary region. Transit Gateway interconnects VPCs across accounts and regions. Cross-region database replication and a rehearsed promotion path keep RTO under 15 minutes. Data residency policies are enforced per region. Disaster recovery is tested regularly.

What you'll see
  • Primary region (active) + DR region (warm standby)
  • Transit Gateway hub-and-spoke across accounts
  • Cross-region RDS read replica promotion path
  • Route 53 health-check-driven failover
  • AWS Config + Security Hub cross-account
  • Data residency tagging and enforcement
  • Quarterly DR failover testing
Remaining gaps
  • Not truly active-active — failover has RTO
  • Transit Gateway is a centralized chokepoint
  • East-west traffic still not micro-segmented
  • Identity perimeter model, not zero-trust
  • Third-party SaaS integrations often not assessed
Compliances achievable
  • FedRAMP Moderate
  • PCI-DSS Level 1
  • GDPR data residency controls
  • DORA (EU Digital Operational Resilience Act)
Blast radius
  • Low-medium — regional failures auto-recovered
  • Security incidents still can spread within region
  • Network-level east-west still under-controlled
// Assessment signals
Multi-region VPCs? Transit Gateway? RTO/RPO tested? DR runbooks current? Cross-region replication? Route 53 health checks? Data residency enforced?
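"RTO/RPO tested?" should mean measured numbers from a drill, not a runbook assertion. A sketch that checks drill measurements against this level's stated targets (RTO < 15 min, RPO < 5 min); timestamps are epoch seconds, and the three parameters are assumed to come from the drill report:

```python
RTO_TARGET_S = 15 * 60   # catalog target: recovery time under 15 minutes
RPO_TARGET_S = 5 * 60    # catalog target: data loss window under 5 minutes

def dr_drill_passes(outage_start, traffic_restored, last_replicated_write):
    """True if a DR drill met both targets.

    RTO = time from outage to traffic served by the DR region.
    RPO = age of the newest write confirmed replicated before the outage
          (writes after that point are lost on failover).
    """
    rto = traffic_restored - outage_start
    rpo = outage_start - last_replicated_write
    return rto < RTO_TARGET_S and rpo < RPO_TARGET_S
```

Asking for the raw three timestamps from the last quarterly drill is a fast way to separate tested DR from aspirational DR.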
Availability
8/10
Security
7/10
Observability
7/10
Ops Complexity
6/10
Diagram — Route 53 health-check failover between us-east-1 (primary, active: multi-AZ app, RDS primary, private services, cache, queues) and eu-west-1 (warm standby: minimal multi-AZ app, RDS replica via async replication). A Transit Gateway hub-and-spoke connects the VPCs; a security account runs Config, Security Hub, CloudTrail, and the SIEM. RTO < 15min · RPO < 5min · 99.95% SLA.
Blast Radius
LOW-MEDIUM
→ Triggers move to L6
Zero-trust mandate from enterprise customers. Supply chain security concerns. Insider threat program. FedRAMP High or IL4/5 targeting. Service mesh for microservices at scale.
Level 6 Late Stage · Enterprise
Zero-Trust Network
Service mesh · mTLS everywhere · Identity-based segmentation · SASE

Network perimeter is abolished. Every service-to-service connection is authenticated and encrypted with mTLS regardless of network location. Workload identity (SPIFFE/SPIRE) replaces IP-based trust. SASE or SSE replaces VPN for human access. Network policies enforced at the workload level.

What you'll see
  • Service mesh (Istio/Linkerd/Consul) with mTLS
  • SPIFFE/SPIRE for workload identity
  • SASE or SSE platform (Zscaler, Cloudflare Access)
  • Network policies as code (OPA / network policy CRDs)
  • No VPN for human access — identity-proxy only
  • eBPF-based network observability (Hubble/Cilium)
  • Continuous authorization on every request
Remaining gaps
  • Operationally complex — high eng expertise required
  • Service mesh overhead (latency, resource cost)
  • Legacy services hard to mesh without refactoring
  • Not yet active-active globally for all services
Compliances achievable
  • FedRAMP High / IL4
  • NIST 800-207 (Zero Trust Architecture)
  • CMMC Level 3
  • All previous frameworks at maximum maturity
Blast radius
  • Low — lateral movement requires identity compromise
  • Each service isolated at network policy level
  • Compromised credential = minimal lateral reach
// Assessment signals
Service mesh deployed? mTLS for all svc-svc? No VPN (SASE/SSE)? SPIFFE workload identity? Network policy as code? eBPF observability? Continuous authz?
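The identity-based segmentation described above boils down to default-deny allow-lists keyed on workload identity rather than IP. A minimal sketch using the example policies from this level's diagram (the SPIFFE IDs and policy map are illustrative, not any particular mesh's config format):

```python
# Default-deny policy map: callee -> set of SPIFFE IDs allowed to call it.
# Mirrors the diagram: data accepts auth+user only; payment accepts api only.
POLICIES = {
    "spiffe://svc/data":    {"spiffe://svc/auth", "spiffe://svc/user"},
    "spiffe://svc/payment": {"spiffe://svc/api"},
}

def authorized(caller, callee):
    """True only if the caller's workload identity is on the callee's allow-list.

    Unknown callees have an empty allow-list, so the default is deny —
    the property that shrinks blast radius at L6.
    """
    return caller in POLICIES.get(callee, set())
```

In a real mesh this decision runs in the sidecar (or OPA) on every request, with the caller identity proven by the mTLS certificate rather than passed as a string.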
Availability
9/10
Security
9/10
Observability
9/10
Ops Complexity
9/10
Diagram — SASE/SSE (Cloudflare Access/Zscaler) for human access; IdP plus SPIFFE/SPIRE for workload identity; OPA as policy engine. A service mesh enforces mTLS on all paths: API gateway (spiffe://svc/api), auth service (spiffe://svc/auth), user service (spiffe://svc/user), data service (policy: auth, user only), payment service (policy: api only). Cilium/eBPF provides full L7 network observability — every flow logged, anomaly detection via Hubble — with an Envoy sidecar on every workload and cert rotation every 24h. No implicit trust · identity = perimeter.
Blast Radius
LOW
→ Triggers move to L7
Global user base with latency SLAs. ARR > $100M. Multi-cloud strategy. Need for anycast routing and global traffic steering beyond single-cloud capability.
Level 7 Hyperscale · Global Platform
Global Anycast Fabric
Multi-cloud · Global PoPs · Anycast · Active-active · Custom network infra

Traffic routes to the nearest point of presence via anycast BGP. Active-active deployments across multiple clouds and regions serve requests with sub-20ms latency globally. Custom networking hardware, BGP peering with major ISPs, and global traffic management run on proprietary or Cloudflare/Fastly-class infrastructure.

What you'll see
  • Global anycast via BGP (own ASN or Cloudflare/Fastly)
  • Active-active across 3+ cloud regions
  • Multi-cloud (AWS + GCP or Azure)
  • Custom CDN or major CDN deeply integrated
  • Direct peering at major IXPs (DE-CIX, Equinix)
  • Global load balancing with geoproximity routing
  • All L6 controls retained and extended
Remaining gaps
  • Extraordinary operational complexity
  • Requires dedicated network engineering team
  • Multi-cloud data sync and consistency hard
  • Regulatory complexity across jurisdictions
  • Cost scales very aggressively
Compliances achievable
  • All previous frameworks
  • Sovereign cloud / data localization per country
  • Custom compliance postures per region
Blast radius
  • Minimal — fault domains are geographically isolated
  • No single region or cloud failure affects global traffic
  • Security incidents contained within cloud boundary
// Assessment signals
Own ASN? IXP peering? Multi-cloud infra? Active-active globally? Custom CDN layer? Dedicated NetEng team? Global latency SLAs?
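The global steering described above — latency-based routing with health-check failover — can be sketched in miniature. PoP names and the latency/health inputs are invented for illustration; real systems get these from BGP, health probes, and RUM data:

```python
def pick_pop(latencies_ms, healthy):
    """Route to the lowest-latency PoP that is passing health checks.

    latencies_ms: {pop_name: measured p50 ms for this client}
    healthy: set of PoP names currently passing health checks
    """
    candidates = {pop: ms for pop, ms in latencies_ms.items() if pop in healthy}
    if not candidates:
        raise RuntimeError("no healthy PoP available")
    return min(candidates, key=candidates.get)
```

When the nearest PoP fails its health check, traffic shifts to the next-best PoP automatically — the same behavior, writ small, that makes "no single region or cloud failure affects global traffic" true at this level.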
Availability
10/10
Security
9/10
Observability
9/10
Ops Complexity
10/10
Diagram — anycast BGP with own ASN and IXP peering steers traffic to active-active PoPs: US-EAST (AWS us-east-1), US-WEST (AWS us-west-2, GCP us-central), EU (AWS eu-west-1, Azure westeu), APAC (AWS ap-southeast-1, GCP asia-northeast), LATAM (AWS sa-east-1, Cloudflare PoP). A global control plane handles GSLB, geoproximity routing, and latency-based failover; L6 zero-trust mTLS applies across all PoPs and clouds, with global observability, a unified SIEM, and per-region SLO tracking. 99.99%+ SLA · <20ms global p50 · no single-cloud dependency.
Blast Radius
MINIMAL
→ Top of maturity ladder
Companies at this level: Cloudflare, Stripe, Datadog, Fastly. Complexity is now the primary operational risk — simplification becomes a goal.