CLASSIFICATION POSTURE · BOUNDARY-CONTROLLED
DEFENCE · AWS & GOVCLOUD
AWS, designed for the boundary your environment actually has.
We build and operate AI-enabled software on commercial AWS and AWS GovCloud-equivalent regions for defence, government, and partner-nation customers. Bedrock, SageMaker, and custom inference paths inside VPCs that hold the boundary your accreditation requires, with the audit trail to back it up.
AWS is broad. The decisions that matter are narrow. Region selection, account topology, network boundary, IAM posture, key management, and the model-serving path: six choices that determine whether the system holds the line under audit.
PRACTICE OVERVIEW
AWS for the customer whose boundary is real.
AWS is the most-deployed cloud in the world and the second-most-common landing place for the defence and government customers we work with. The decisions that determine whether an AWS-hosted AI workload meets a customer's accreditation posture are not numerous, but they are unforgiving. RankSaga has shipped AWS deployments for customers who treat the boundary as a real engineering surface, not a procurement artefact.
Our AWS practice covers commercial AWS regions for less-sensitive workloads, GovCloud-equivalent regions where the customer's accreditation pathway requires them, and the partner-nation sovereign AWS arrangements that exist for non-US defence and government customers. The architectural posture is consistent across all three: a boundary-controlled VPC, a customer-managed key fabric, an IAM model that meets the customer's accreditation requirements, and an audit surface (CloudTrail plus customer-controlled application-layer logging) that holds up under review.
The AI surface is, in most engagements, a mix of Bedrock for hosted foundation models, SageMaker for customer-fine-tuned models, and custom inference paths inside the VPC for workloads that require single-tenant model serving. Model selection is workload-driven. Where the customer requires that model artefacts and inference paths never leave a tenant they control, we build accordingly.
We work as the engineering team that designs the AWS posture, builds the application surface on top of it, and operates the deployment continuously. Forward-deployed, end-to-end. The same operating model we run on sovereign Azure and on air-gapped infrastructure, applied to AWS specifics.
WHAT WE DO
AWS for defence, by the surface we touch.
01 / Capability
Boundary-Controlled VPC Architecture
Account topology, VPC segmentation, private endpoints, and explicit egress controls designed for the customer's accreditation posture. This includes configurations that align with the customer's data-handling and classification controls.
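As an illustration of what "explicit egress controls" can look like in practice, the sketch below builds an S3 gateway-endpoint policy that only admits traffic to a single bucket, and only from principals inside the customer's AWS Organization. The bucket name and Organization ID are placeholders, and a real boundary would carry more statements than this.

```python
import json


def s3_endpoint_policy(bucket: str, org_id: str) -> dict:
    """Restrictive S3 gateway-endpoint policy: one bucket, and only for
    principals that belong to the customer's AWS Organization."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowOrgAccessToMissionBucket",
                "Effect": "Allow",
                "Principal": "*",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": [f"arn:aws:s3:::{bucket}/*"],
                "Condition": {
                    # Requests from outside the customer's Organization
                    # never match this Allow, so the endpoint denies them.
                    "StringEquals": {"aws:PrincipalOrgID": org_id}
                },
            }
        ],
    }


# Placeholder bucket name and Organization ID for illustration.
policy = s3_endpoint_policy("mission-data", "o-exampleorg")
print(json.dumps(policy, indent=2))
```

Attached to the VPC endpoint, a policy of this shape means that even a misconfigured instance inside the boundary cannot push data to an arbitrary external bucket.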
02 / Capability
Bedrock & Hosted Models
Integration of Bedrock-hosted foundation models into mission applications, with customer-managed keys, customer-controlled logging, and model selection that fits the workload's residency and auditability requirements.
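A minimal sketch of what the Bedrock integration surface looks like from the application side: assembling an InvokeModel request. The model ID shown is illustrative, the body schema follows the Anthropic Messages format (other model families use different body schemas), and in a real deployment the request would go to a bedrock-runtime client bound to a VPC interface endpoint, with the call logged into customer-controlled aggregation.

```python
import json


def build_invoke_request(model_id: str, prompt: str, max_tokens: int = 512) -> dict:
    """Assemble kwargs for a bedrock-runtime InvokeModel call.

    The body below uses the Anthropic Messages schema; a different
    model family on Bedrock would need a different body layout.
    """
    return {
        "modelId": model_id,
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": max_tokens,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }


# Illustrative model ID; the actual selection is workload-driven.
request = build_invoke_request(
    "anthropic.claude-3-sonnet-20240229-v1:0",
    "Summarise the attached tasking order.",
)
```

The point of keeping this assembly in one place is auditability: every inference call flows through a single path that can be logged, attributed, and reviewed.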
03 / Capability
SageMaker & Custom Inference
SageMaker for customer-fine-tuned models. Custom inference paths for open-weight models running inside customer VPCs where workload sensitivity requires single-tenant model serving.
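Single-tenant serving in SageMaker is largely a CreateModel decision. Below is a hedged sketch of the kwargs involved, with hypothetical names throughout: the VpcConfig pins inference containers into customer subnets, and EnableNetworkIsolation removes the container's outbound network access entirely.

```python
def single_tenant_model_kwargs(model_name: str, image_uri: str,
                               model_data_url: str, role_arn: str,
                               subnet_ids: list, sg_ids: list) -> dict:
    """Sketch of SageMaker CreateModel kwargs for single-tenant serving:
    containers run inside customer subnets with no outbound network."""
    return {
        "ModelName": model_name,
        "ExecutionRoleArn": role_arn,
        "PrimaryContainer": {
            "Image": image_uri,
            "ModelDataUrl": model_data_url,  # model artefacts stay in the customer's S3
        },
        # Inference containers attach only to customer-controlled subnets
        # and security groups inside the boundary-controlled VPC.
        "VpcConfig": {"Subnets": subnet_ids, "SecurityGroupIds": sg_ids},
        # The container itself gets no outbound network access at all.
        "EnableNetworkIsolation": True,
    }


# All identifiers below are placeholders for illustration.
kwargs = single_tenant_model_kwargs(
    model_name="mission-model",
    image_uri="123456789012.dkr.ecr.ap-southeast-2.amazonaws.com/inference:latest",
    model_data_url="s3://mission-data/models/model.tar.gz",
    role_arn="arn:aws:iam::123456789012:role/sagemaker-exec",
    subnet_ids=["subnet-aaa", "subnet-bbb"],
    sg_ids=["sg-ccc"],
)
```

Network isolation plus customer subnets is what turns "hosted inference" into something an assessor can reason about: the serving path is enumerable, and the container cannot phone home.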
04 / Capability
IAM & Key Management
Permission boundaries, role-based access aligned to the customer's authorisation model, KMS key fabric with customer-managed encryption across the boundary, and credential handling that meets the customer's secret-management posture.
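A permission boundary of the kind described above can be sketched as a policy document: an allowlist of services the workload may touch, plus explicit denies on the actions that would let a compromised role tamper with keys or the audit trail. The service list here is illustrative, not a recommendation.

```python
def permission_boundary(allowed_services: list) -> dict:
    """Illustrative permission boundary: allow a service allowlist,
    and hard-deny key destruction and audit-trail tampering."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowListedServices",
                "Effect": "Allow",
                "Action": [f"{svc}:*" for svc in allowed_services],
                "Resource": "*",
            },
            {
                # Explicit denies win over any Allow a role attaches later,
                # so these actions stay blocked across the whole boundary.
                "Sid": "DenyKeyAndTrailTampering",
                "Effect": "Deny",
                "Action": [
                    "kms:ScheduleKeyDeletion",
                    "kms:DisableKey",
                    "cloudtrail:StopLogging",
                    "cloudtrail:DeleteTrail",
                ],
                "Resource": "*",
            },
        ],
    }


# Illustrative allowlist; the real list comes from the workload design.
boundary = permission_boundary(["s3", "kms", "bedrock", "sagemaker", "logs"])
```

The design choice worth noting: the deny statements protect exactly the two surfaces the rest of this page leans on, the customer-managed key fabric and the audit trail.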
05 / Capability
Audit, CloudTrail, and Logging
CloudTrail configured for accreditation-grade audit, application-layer logging into customer-controlled aggregation, and SIEM integration aligned with the customer's security operations posture.
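"Accreditation-grade" CloudTrail reduces to a handful of checkable settings. The sketch below shows the kind of posture check we mean, run against a trail description of the shape DescribeTrails returns; the three checks shown are illustrative, not the full baseline.

```python
def trail_findings(trail: dict) -> list:
    """Return findings for trail settings that fall short of an
    accreditation-grade baseline. Illustrative, not exhaustive."""
    findings = []
    if not trail.get("IsMultiRegionTrail"):
        findings.append("trail must capture events from all regions")
    if not trail.get("LogFileValidationEnabled"):
        findings.append("log file integrity validation must be enabled")
    if not trail.get("KmsKeyId"):
        findings.append("trail logs must be encrypted with a customer-managed key")
    return findings


# A compliant trail produces no findings; an empty config produces three.
compliant = {
    "IsMultiRegionTrail": True,
    "LogFileValidationEnabled": True,
    "KmsKeyId": "arn:aws:kms:ap-southeast-2:123456789012:key/example",  # placeholder ARN
}
```

Checks like these belong in continuous assurance, not a one-off review: a trail that drifts out of posture should surface as a finding before an assessor finds it.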
06 / Capability
Operations Inside the VPC
Embedded engineers operating the deployment alongside the customer's platform team: patch lifecycle, model lifecycle, capacity, incident response, and the continuous hardening that keeps a defence AWS deployment running cleanly.
OPERATING MODEL
Boundary first. Build inside it. Operate continuously.
AWS engagements follow the same forward-deployed model we run elsewhere, adjusted for the specifics of AWS region selection, account topology, and the IAM and audit surfaces unique to AWS deployments.
01 / Step
Region & Boundary Posture
Region selection (commercial, GovCloud-equivalent, or partner-nation sovereign) and boundary architecture are the first decisions. We design account topology, VPC segmentation, IAM, and key management against the customer's accreditation posture before workloads land.
02 / Step
Build Inside the Boundary
The application surface (mission software, AI services, integration layers, operator UI) is built and deployed inside the boundary-controlled VPC. Working software in operator hands within weeks, against the production controls.
03 / Step
Embedded Operations
We stay deployed: patching, capacity management, model lifecycle, incident response, and continuous hardening alongside the customer's platform team for as long as the engagement requires.
POSTURE DETAIL
How we configure AWS deployments for defence work.
Region
Region choice driven by the customer's accreditation pathway: commercial AWS regions for less-sensitive workloads; GovCloud-equivalents for partner-nation customers; sovereign AWS arrangements where they exist.
Account Topology
Multi-account organisation with explicit boundaries between identity, security, audit, and workload accounts. Service control policies aligned to customer authorisation model.
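A service control policy that holds the region boundary can be sketched directly. This follows the common deny-outside-approved-regions pattern: everything is denied unless the request targets an approved region, with a short NotAction carve-out for global services. The region list and exemptions are placeholders for whatever the customer's accreditation pathway actually approves.

```python
def region_lock_scp(allowed_regions: list) -> dict:
    """SCP denying all actions outside the approved regions, with a
    carve-out for global services that have no meaningful region."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyOutsideApprovedRegions",
                "Effect": "Deny",
                # Global services are exempted; this short list is
                # illustrative, not a vetted exemption set.
                "NotAction": ["iam:*", "organizations:*", "sts:*"],
                "Resource": "*",
                "Condition": {
                    "StringNotEquals": {"aws:RequestedRegion": allowed_regions}
                },
            }
        ],
    }


# Placeholder regions: a Sydney commercial region plus a GovCloud region.
scp = region_lock_scp(["ap-southeast-2", "us-gov-west-1"])
```

Attached at the organisation root or OU, the policy makes the region decision structural: no account below it can land a workload outside the approved set, whatever its own IAM says.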
Network
VPC segmentation with explicit egress controls, PrivateLink for AWS service access, and a documented boundary posture reviewed against the customer's accreditation pathway.
Identity & Access
IAM Identity Center / SSO federated with the customer's enterprise directory, permission boundaries, and role-based access into the workload aligned with the customer's authorisation model.
Key Management
Customer-managed KMS keys for every encryption boundary that touches sensitive data, including S3 object stores, EBS volumes, and model artefacts.
Audit & Logging
CloudTrail with customer-controlled retention, application-layer logs into customer-managed aggregation, and SIEM integration consistent with the customer's security operations posture.
POSITIONING
Where we are in production, and where AWS fits.
Our reference defence deployment is on sovereign Microsoft Azure with an air-gapped operating environment. AWS is an adjacent capability we offer where the customer's accreditation pathway, partner-nation arrangements, or service-level requirements make AWS the right answer.
- Multi-region commercial AWS for less-sensitive defence-adjacent workloads.
- GovCloud-equivalent deployment patterns for partner-nation customers.
- Sovereign AWS arrangements where they exist for the customer's jurisdiction.
- Boundary-controlled architectures designed for accreditation review.
RELATED CAPABILITIES
AWS, in context.
Adjacent
Microsoft Azure (Sovereign) →
Where sovereign Azure regions are the right landing place, including the configuration our reference ADF deployment runs.
Adjacent
Defence-Grade AI Systems →
When the application running on AWS is an AI system that requires auditability and human-in-the-loop review.
Adjacent
Mission Software Engineering →
When the AWS deployment is one part of a broader operator-facing mission application.
QUESTIONS
AWS for defence, in practice.
Are you US-authorised on FedRAMP or IL5/IL6?
No. RankSaga is an Australian-headquartered team and our defence operating posture is aligned with Australian and partner-nation frameworks. For US AWS deployments requiring FedRAMP / IL5 / IL6 authorisation, we partner with US-authorised primes who hold those accreditations rather than misrepresenting our own.
Can you operate inside GovCloud-equivalent partner-nation regions?
Yes, that's the most common AWS engagement pattern for our customers. Australian government workloads have a different accreditation pathway than US workloads; we design and operate AWS deployments aligned to the Australian and partner-nation customer's framework.
Bedrock or self-hosted models: how do you decide?
Workload-driven. Bedrock-hosted foundation models work well where the customer's residency and auditability posture allows hosted inference. For workloads requiring single-tenant serving or where the data cannot leave the customer's tenant, we run open-weight or customer-fine-tuned models on infrastructure inside the customer VPC.
How do you handle audit and accreditation evidence?
CloudTrail configured for accreditation-grade audit, application-layer logs into customer-controlled aggregation, documented architecture and threat model, and a posture document the customer can put in front of an assessor. The accreditation itself is the customer's process; the artefacts that support it are part of the engagement.
Can the AWS deployment also be air-gapped?
Air-gapped AWS deployments are an unusual configuration. Where the customer requires a fully disconnected environment, we typically recommend a non-AWS infrastructure choice for that part of the workload; see our air-gapped deployment capability. We can, however, run hybrid configurations where AWS is one tier and an air-gapped enclave is another.
ENGAGE
If your defence-adjacent workload sits on AWS, the boundary should be engineered like it.
We design AWS deployments for accreditation review and operate them continuously. If you are running, or planning to run, defence, government, or partner-nation workloads on AWS and need an engineering team that takes the boundary seriously, we should talk.