Overview#
The Pluggable Infrastructure module provides a deployment abstraction layer that allows the platform to run on different cloud and infrastructure environments without changes to application code. Rather than binding services directly to specific cloud primitives (a particular object storage service, a specific queue implementation, or a named AI provider), all infrastructure calls go through a common adapter interface. The appropriate implementation for each primitive is resolved at runtime based on the target deployment configuration, enabling organisations to deploy on Cloudflare's global network, AWS, Google Cloud Platform, generic infrastructure, or fully air-gapped tactical edge environments, and to switch between them through configuration rather than redevelopment.
This capability is particularly relevant for defence and public sector customers who cannot deploy to hyperscale public cloud and require on-premise or private cloud deployments, as well as multi-national organisations that need to run the platform in different cloud regions under different data sovereignty regimes.
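To make the adapter concept concrete, the following is a minimal sketch of what a common adapter interface for the storage primitive might look like, together with an in-memory implementation. The interface and class names here are illustrative assumptions, not the platform's actual API.

```typescript
// Hypothetical shape of a storage adapter interface; the platform's
// actual interface names and signatures may differ.
interface StorageAdapter {
  upload(key: string, data: Uint8Array): Promise<void>;
  download(key: string): Promise<Uint8Array | null>;
  list(prefix: string): Promise<string[]>;
  delete(key: string): Promise<void>;
}

// A trivial in-memory implementation, useful as a template for a real
// provider binding (R2, S3, Google Cloud Storage, MinIO, ...).
class InMemoryStorageAdapter implements StorageAdapter {
  private objects = new Map<string, Uint8Array>();

  async upload(key: string, data: Uint8Array): Promise<void> {
    this.objects.set(key, data);
  }
  async download(key: string): Promise<Uint8Array | null> {
    return this.objects.get(key) ?? null;
  }
  async list(prefix: string): Promise<string[]> {
    return [...this.objects.keys()].filter((k) => k.startsWith(prefix));
  }
  async delete(key: string): Promise<void> {
    this.objects.delete(key);
  }
}
```

Because application code depends only on the interface, any provider that fulfils this contract can be substituted at deployment time.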
Key Features#
- Deployment Target Configuration: Five deployment targets are supported: `cloudflare` (the default, using Cloudflare Workers and edge services), `aws` (Amazon Web Services), `gcp` (Google Cloud Platform), `generic` (any standards-compatible infrastructure), and `tactical-edge` (air-gapped or limited-connectivity environments)
- Adapter Registry: A central registry maps each combination of deployment target and infrastructure primitive to the correct factory function, resolving the right implementation at service startup without conditional logic scattered across application code
- Storage Abstraction: Object storage operations (upload, download, list, delete) are fulfilled by the appropriate provider for the target: Cloudflare R2, Amazon S3, Google Cloud Storage, S3-compatible endpoints, or MinIO for tactical-edge deployments
- Key-Value State Abstraction: Ephemeral and session state operations are fulfilled by Cloudflare KV, Amazon DynamoDB, Google Memorystore, or Redis, depending on the target deployment
- Queue Abstraction: Async message publishing and consumption uses Cloudflare Queues, Amazon SQS, Google Pub/Sub, RabbitMQ, or Kafka based on the deployment target and organisational preference
- AI Provider Abstraction: AI inference calls are routed to Cloudflare Workers AI, the appropriate hyperscaler AI service, or a self-hosted model endpoint for environments that require on-premise inference
- Per-Tenant Deployment Configuration: In multi-organisation deployments, each tenant can be assigned a different deployment target, enabling a single platform instance to serve cloud tenants and on-premise tenants simultaneously
- Automatic Fallback: If no adapter is registered for a specific target and primitive combination, the system falls back to the Cloudflare default implementation, ensuring graceful degradation rather than hard failure
- Air-Gapped Tactical Edge Support: The `tactical-edge` target uses MinIO for object storage, Redis for state, and RabbitMQ or Kafka for queuing, providing a complete, network-independent infrastructure stack for deployments in disconnected or bandwidth-constrained environments
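The registry and fallback behaviour described above can be sketched as follows. The `AdapterRegistry` class, its method names, and the key scheme are illustrative assumptions based on this description, not the platform's actual code.

```typescript
type DeploymentTarget = "cloudflare" | "aws" | "gcp" | "generic" | "tactical-edge";
type Primitive = "storage" | "kv" | "queue" | "ai";
type AdapterFactory = () => unknown;

// Hypothetical central registry: one factory per (target, primitive) pair.
class AdapterRegistry {
  private factories = new Map<string, AdapterFactory>();

  register(target: DeploymentTarget, primitive: Primitive, factory: AdapterFactory): void {
    this.factories.set(`${target}:${primitive}`, factory);
  }

  // Resolution falls back to the cloudflare default when no adapter is
  // registered for the requested target/primitive combination.
  resolve(target: DeploymentTarget, primitive: Primitive): unknown {
    const factory =
      this.factories.get(`${target}:${primitive}`) ??
      this.factories.get(`cloudflare:${primitive}`);
    if (!factory) throw new Error(`No adapter registered for primitive "${primitive}"`);
    return factory();
  }
}
```

Keeping resolution in one place is what prevents target-specific conditionals from leaking into application services.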
Use Cases#
Public Sector On-Premise Deployment#
A national defence agency cannot use public cloud services for classified workloads. The platform is deployed to the agency's private data centre using the tactical-edge target with MinIO, Redis, and RabbitMQ. Application teams write code once; infrastructure substitution is handled entirely by the adapter layer.
Multi-Region Data Sovereignty Compliance#
A multi-national organisation must ensure that data for EU users remains on EU infrastructure and data for US users remains on US infrastructure. Each tenant is assigned its own deployment target pointing to the appropriate regional cloud provider and storage bucket, with the adapter layer handling routing transparently.
Cloudflare-to-AWS Migration#
An organisation decides to migrate from Cloudflare Workers to AWS Lambda for backend processing. The infrastructure team registers AWS adapters in the deployment configuration and switches the target. Application code is unchanged; the adapter registry resolves all storage, queue, and AI calls to their AWS equivalents.
Mixed Deployment for Connected and Disconnected Units#
A public safety organisation runs connected dispatch centres on Cloudflare and deploys the same platform to field command posts that operate in radio-degraded environments. Field command posts use the tactical-edge target; the central dispatch uses cloudflare. Both run identical application code with infrastructure behaviour determined by configuration at each deployment site.
Self-Hosted AI Inference#
An organisation with data handling restrictions that preclude sending data to external AI APIs deploys a self-hosted inference server and configures the AI adapter to route all AI calls to the internal endpoint. The rest of the application layer is unaware of the substitution.
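A self-hosted AI adapter of this kind might look like the sketch below, which forwards inference requests to an internal HTTP endpoint. The endpoint path, request shape, and class name are assumptions for illustration; the fetch function is injectable so the routing logic can be exercised without a live server.

```typescript
// Hypothetical AI adapter interface; actual platform names may differ.
interface AIAdapter {
  complete(prompt: string): Promise<string>;
}

// Routes all inference calls to a self-hosted endpoint inside the
// organisation's network boundary; no data leaves for external AI APIs.
class SelfHostedAIAdapter implements AIAdapter {
  constructor(
    private baseUrl: string,
    private fetchFn: typeof fetch = fetch, // injectable for testing
  ) {}

  async complete(prompt: string): Promise<string> {
    // "/v1/completions" and the JSON shape are assumed for this sketch.
    const res = await this.fetchFn(`${this.baseUrl}/v1/completions`, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ prompt }),
    });
    const body = (await res.json()) as { text: string };
    return body.text;
  }
}
```

Swapping this adapter in for the default AI adapter requires no change to the calling code, which only sees the `AIAdapter` interface.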
How It Works#
Diagram
```mermaid
flowchart TD
    A[Application Code\ncalls infrastructure primitive] --> B[AdapterRegistry.resolve\ndeployment_target + primitive_type]
    B --> C{Deployment Target}
    C -->|cloudflare| D[Cloudflare Workers Adapters\nR2, KV, Queues, Workers AI]
    C -->|aws| E[AWS Adapters\nS3, DynamoDB, SQS, Bedrock]
    C -->|gcp| F[GCP Adapters\nCloud Storage, Memorystore, Pub/Sub, Vertex]
    C -->|generic| G[Generic Adapters\nRedis, S3-compatible, Standard Queues]
    C -->|tactical-edge| H[Tactical Edge Adapters\nMinIO, Redis, RabbitMQ / Kafka]
    C -->|no match| I[Fallback to Cloudflare defaults]
    D --> J[Infrastructure Call Executed]
    E --> J
    F --> J
    G --> J
    H --> J
    I --> J
```
Supported Deployment Targets#
- cloudflare: Cloudflare Workers, R2, KV, Queues, Workers AI, Durable Objects (default)
- aws: Amazon S3, DynamoDB, SQS, Bedrock
- gcp: Google Cloud Storage, Memorystore, Pub/Sub, Vertex AI
- generic: Redis, S3-compatible object storage, standard AMQP or Kafka queues
- tactical-edge: MinIO, Redis, RabbitMQ or Kafka; operates without external internet dependency
Integration#
The Pluggable Infrastructure layer operates beneath all platform services and is transparent to application-layer code. The deployment target is selected through environment configuration at deployment time, with per-tenant overrides available through the administration console for multi-tenant deployments. Adapters for each target are registered at application startup; adding a new deployment target requires implementing the adapter interface for the required primitives and registering the factory functions, and no changes to application service code are required.
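Per-tenant target selection with an environment-level default can be sketched as below. The tenant identifiers, map, and `DEPLOYMENT_TARGET` variable name are assumptions for illustration; in practice the overrides would come from the administration console rather than a hard-coded map.

```typescript
// Hypothetical per-tenant override table; in a real deployment this
// would be populated from the administration console, not hard-coded.
const tenantTargets: Record<string, string> = {
  "field-command-post": "tactical-edge",
  "eu-region-tenant": "gcp",
};

// Resolution order: tenant override -> environment configuration ->
// the documented cloudflare default.
function targetForTenant(tenantId: string): string {
  return tenantTargets[tenantId] ?? process.env.DEPLOYMENT_TARGET ?? "cloudflare";
}
```

This is what lets a single platform instance serve cloud tenants and on-premise tenants simultaneously: the same application code resolves different adapters per request depending on the tenant's target.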
Availability#
Pluggable Infrastructure is available on all deployment tiers. Additional infrastructure targets beyond Cloudflare (the default) require configuration of the corresponding cloud or on-premise credentials and are subject to the availability of the target infrastructure environment.
Last Reviewed: 2026-04-14 Last Updated: 2026-04-14