Placement groups control how EC2 instances are placed on underlying hardware to meet specific performance or availability needs.
Strategies (see the creation sketch after this list):
- Cluster: All instances close together in one AZ for low latency & high throughput.
- Spread: Instances placed on distinct hardware to minimize correlated failures (max 7 per AZ).
- Partition: Instances grouped into partitions across racks for isolation at scale.
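A minimal boto3 sketch of creating one group per strategy; the region, group names, and the `PartitionCount` value are illustrative assumptions, not values from these notes.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# One placement group per strategy; group names are hypothetical.
ec2.create_placement_group(GroupName="demo-cluster", Strategy="cluster")
ec2.create_placement_group(GroupName="demo-spread", Strategy="spread")
ec2.create_placement_group(
    GroupName="demo-partition",
    Strategy="partition",
    PartitionCount=7,  # up to 7 partitions per AZ
)
```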
Cluster Placement Group
- Pros: Extremely low latency and up to 10 Gbps network bandwidth between instances (with Enhanced Networking enabled).
- Cons: All instances sit in a single AZ, so an AZ failure takes down the whole group.
- Use cases: Big Data processing, HPC, tightly coupled workloads.
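A sketch of launching a tightly coupled set of instances into the hypothetical `demo-cluster` group from above; the AMI ID and instance type are placeholders, and an Enhanced Networking-capable type is assumed.

```python
import boto3

ec2 = boto3.client("ec2")

# Launch instances into the cluster placement group. ImageId and
# InstanceType are placeholders; an ENA-capable type is assumed so the
# instances get the low-latency, high-throughput network path.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5n.9xlarge",        # assumption: Enhanced Networking type
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "demo-cluster"},
)
print([i["InstanceId"] for i in resp["Instances"]])
```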
Spread Placement Group
- Pros: Instances on separate physical hardware, can span AZs, high availability.
- Cons: Limited to 7 running instances per AZ per group.
- Use cases: Critical workloads needing fault isolation.
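A sketch of placing critical instances on distinct hardware; it reuses the hypothetical `demo-spread` group, pins an assumed AZ, and keeps the request within the 7-instances-per-AZ limit noted above.

```python
import boto3

ec2 = boto3.client("ec2")

# Each instance lands on separate physical hardware. Keep the count at or
# below 7 per AZ, since that is the spread group limit.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="m5.large",           # placeholder instance type
    MinCount=3,
    MaxCount=3,
    Placement={
        "GroupName": "demo-spread",
        "AvailabilityZone": "us-east-1a",  # assumption: pin one AZ
    },
)
```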
Partition Placement Group
- Limits: Up to 7 partitions per AZ; the group can span multiple AZs and hold hundreds of instances.
- Behavior: Each partition sits on its own set of racks, so a hardware failure is contained within one partition. The instance's partition is exposed via instance metadata (sketched below).
- Use cases: Distributed systems (HDFS, Cassandra, Kafka).
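As a sketch of the metadata point above: code running on an instance inside a partition group can read its group name and partition number through IMDSv2. The endpoints are the standard instance-metadata paths; nothing here is specific to these notes.

```python
import urllib.request

IMDS = "http://169.254.169.254/latest"

# IMDSv2: fetch a session token first, then use it for metadata reads.
token_req = urllib.request.Request(
    f"{IMDS}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

def metadata(path: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req).read().decode()

# Distributed systems (HDFS, Cassandra, Kafka) can use this to make
# partition-aware replica placement decisions.
print("placement group:", metadata("placement/group-name"))
print("partition:      ", metadata("placement/partition-number"))
```

At launch time, a specific partition can also be requested by adding `PartitionNumber` to the `Placement` dict passed to `run_instances`.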