High Performance Computing (HPC) is used in areas that require large amounts of computing power, such as financial analysis, machine learning, and weather prediction.

Data Transfer
- Snowball Edge, Snowmobile
- Offline data transfer: physically ship large datasets to AWS when network transfer would be too slow
- AWS DataSync
- DataSync transfers data between on-premises storage and AWS over the network (see the sketch after this list).
- AWS Direct Connect (DX)
- A dedicated network connection between on-premises data centers and AWS. It provides more consistent network performance than transfers over the public internet.
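To make the DataSync item above concrete, here is a minimal Python (boto3) sketch that starts an execution of an existing DataSync task. It assumes a task with source and destination locations has already been created; the ARN below is a hypothetical placeholder.

```python
# Minimal sketch: start an existing AWS DataSync task with boto3.
import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

# Hypothetical placeholder ARN; the task (source/destination locations)
# must already have been created, e.g. in the console.
task_arn = "arn:aws:datasync:us-east-1:123456789012:task/task-0abc123example"

# Kick off one transfer run; DataSync handles the copy, verification,
# and retries between on-premises storage and AWS.
response = datasync.start_task_execution(TaskArn=task_arn)
print("Started execution:", response["TaskExecutionArn"])
```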
Computing and Networking
- EC2 Instances
- Compute Optimized and Accelerated Computing (GPU, FPGA) instance families
- EC2 Spot Fleets
- A Spot Fleet launches a mix of Spot Instances and (optionally) On-Demand Instances to meet a target capacity
- Enhanced Networking + Cluster Placement group
- Elastic Network Adapter (ENA): up to 100 Gbps
- SR-IOV (Single Root I/O Virtualization): device virtualization that provides higher I/O performance with lower CPU utilization
- Elastic Fabric Adapter (EFA)
- A special network adapter that supports OS-bypass, giving HPC applications (e.g., MPI workloads) low-latency node-to-node communication (see the sketch after this list)
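The following is a minimal Python (boto3) sketch of the networking items above: it creates a cluster placement group and launches EFA-capable, compute-optimized instances into it. The AMI, subnet, and security group IDs are hypothetical placeholders.

```python
# Minimal sketch: cluster placement group + EFA-enabled instances.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster strategy packs instances close together for low-latency,
# high-throughput networking between nodes.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder AMI
    InstanceType="c5n.18xlarge",            # EFA-capable, compute optimized
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "hpc-cluster"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "InterfaceType": "efa",             # Elastic Fabric Adapter
        "SubnetId": "subnet-0123456789abcdef0",   # placeholder subnet
        "Groups": ["sg-0123456789abcdef0"],       # placeholder security group
    }],
)
```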
Instance Attached Storage
- EBS
- Up to 64,000 IOPS with Provisioned IOPS SSD (io1/io2) volumes (see the sketch after this list)
- Instance Store Volume
- Can scale to millions of IOPS; the storage is ephemeral (data is lost when the instance stops)
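As an illustration of the EBS item above, a Provisioned IOPS SSD volume can be created with boto3 roughly as follows; the size, IOPS, and Availability Zone are illustrative values.

```python
# Minimal sketch: create a Provisioned IOPS (io2) EBS volume.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,              # GiB (illustrative)
    VolumeType="io2",      # Provisioned IOPS SSD
    Iops=64000,            # provisioned IOPS for the volume
)
print(volume["VolumeId"])
```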
Network Attached Storage
- S3
- durable object storage
- EFS
- Throughput scales with the total file system size, or use provisioned throughput mode
- Amazon FSx for Lustre
- An HPC-optimized distributed file system that delivers millions of IOPS; integrates natively with S3 (see the sketch after this list)
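A rough Python (boto3) sketch of provisioning an FSx for Lustre file system linked to an S3 bucket, as mentioned above; the subnet ID and bucket name are hypothetical placeholders.

```python
# Minimal sketch: FSx for Lustre file system backed by an S3 bucket,
# so HPC jobs can read the dataset through a POSIX file system.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                      # GiB, minimum for Lustre
    SubnetIds=["subnet-0123456789abcdef0"],    # placeholder subnet
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",         # short-lived, high-throughput
        "ImportPath": "s3://example-hpc-bucket",   # hypothetical bucket
    },
)
```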
Automation and Orchestration
- AWS Batch
- A fully managed batch-processing service that runs large numbers of batch computing jobs on AWS by queuing and scheduling jobs and launching EC2 instances as needed (see the sketch after this list).
- AWS ParallelCluster
- An open-source cluster management tool for deploying and managing HPC clusters on AWS
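A minimal Python (boto3) sketch of submitting work to AWS Batch, assuming a job queue and job definition already exist; the names used here are hypothetical.

```python
# Minimal sketch: submit an array job to AWS Batch.
import boto3

batch = boto3.client("batch", region_name="us-east-1")

response = batch.submit_job(
    jobName="hpc-simulation",
    jobQueue="hpc-queue",                  # hypothetical job queue
    jobDefinition="hpc-sim-job-def",       # hypothetical job definition
    arrayProperties={"size": 1000},        # fan out 1,000 child jobs
)
print("Submitted job:", response["jobId"])
```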
Visualization
- NICE DCV
- A high-performance remote display protocol that securely delivers remote desktops and application streaming from any cloud or data center to any device.
- Amazon AppStream 2.0
- A fully managed, non-persistent application and desktop streaming service that lets you deliver desktop applications to any computer (see the sketch after this list).
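As an illustrative sketch of the AppStream 2.0 item, the boto3 call below generates a temporary streaming URL for a user, assuming a stack and fleet are already running; the stack, fleet, and user names are hypothetical.

```python
# Minimal sketch: generate a temporary AppStream 2.0 streaming URL.
import boto3

appstream = boto3.client("appstream", region_name="us-east-1")

response = appstream.create_streaming_url(
    StackName="hpc-visualization-stack",   # hypothetical stack
    FleetName="hpc-visualization-fleet",   # hypothetical fleet
    UserId="analyst-01",                   # hypothetical user
    Validity=3600,                         # URL valid for one hour (seconds)
)
print(response["StreamingURL"])
```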