AWS's new A1 instances are the first to be powered by its custom Graviton Arm-based processors. According to the company, they can reduce costs by up to 45% for scale-out workloads such as microservices and web servers.
These savings are available for workloads that aren't tied to the x86 family of CPUs.
AWS also claims this is the first time Arm processors have been available in the cloud.
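Whether a workload is "tied to x86" often comes down to compiled dependencies. As a minimal sketch (not an AWS API), the standard-library `platform` module can report which architecture code is actually running on, which is useful when validating that a stack runs cleanly on Arm:

```python
import platform

def runtime_architecture() -> str:
    """Report the CPU architecture the interpreter is running on.

    On an x86 instance this typically returns 'x86_64'; on an
    Arm-based instance such as an A1/Graviton it typically returns
    'aarch64' (Linux) or 'arm64'.
    """
    return platform.machine().lower()

def is_arm() -> bool:
    # True when running on an Arm CPU, e.g. AWS Graviton.
    return runtime_architecture() in ("aarch64", "arm64")
```

The helper names here are illustrative; in practice the same check gates things like choosing an arm64 container image in a build pipeline.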
New P3dn and C5n instances feature 100 Gbps networking for increased throughput for scale-out, distributed workloads including machine learning and high performance computing (HPC).
P3dn instances, when they become available next week, will be the most powerful GPU instances in the cloud for machine learning training, AWS claimed. Machine learning models that currently take a few hours to train can be trained in less than an hour by networking multiple P3dn instances.
They also feature NVMe instance storage, custom Intel CPUs with 96 vCPUs and support for AVX-512 instructions, and Nvidia Tesla V100 GPUs, each with 32 GB of memory.
C5n instances (available immediately) offer 100 Gbps of network bandwidth, four times the throughput of C5 instances, allowing previously network-bound applications to scale up or scale out effectively. The bandwidth can also be used to accelerate data transfer to and from Amazon S3, reducing ingestion wait times and speeding up delivery of results.
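Applications typically exploit that extra bandwidth by keeping many transfers in flight at once, for example by fetching an object in parallel byte ranges. The sketch below illustrates the pattern with illustrative, hypothetical helpers (`fetch_range` here just slices an in-memory buffer; it is not an AWS or boto3 API):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_range(blob: bytes, start: int, end: int) -> bytes:
    # Stand-in for a ranged GET against object storage such as S3;
    # here it simply slices an in-memory buffer.
    return blob[start:end]

def parallel_download(blob: bytes, part_size: int, workers: int = 4) -> bytes:
    # Split the object into fixed-size parts, fetch them concurrently,
    # then reassemble in order. More available network bandwidth lets
    # more parts be usefully in flight at once.
    ranges = [(i, min(i + part_size, len(blob)))
              for i in range(0, len(blob), part_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda r: fetch_range(blob, *r), ranges)
    return b"".join(parts)
```

Real S3 clients apply the same idea through multipart and concurrent transfer settings rather than hand-rolled range requests.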
Teradata director of strategic offering management for cloud Abhishek Lal said "Teradata IntelliCloud, our as-a-service offering for analytics at scale, enables customers to seamlessly scale up and out to meet their workload requirements.
"Teradata software demands extremely high I/O to maximise its potential. With a 4x improvement in network performance, we expect Amazon EC2 C5n instances to significantly improve throughput for IntelliCloud, empowering customers to generate analytic insights and business-changing answers at ever-faster rates."
Other networking improvements are provided by the latency-optimised Elastic Fabric Adapter (for scaling tightly-coupled HPC applications across tens of thousands of cores) and the AWS Global Accelerator (for improving the availability and performance of geographically distributed applications by intelligently routing Internet traffic).
AWS vice-president of compute services Matt Garman said "Two of the requests we get most from customers are how can you help us keep lowering our costs for basic workloads, and how can you make it more efficient to run our demanding, scale-out, high performance computing and machine learning workloads in the cloud.
"With today's introduction of A1 instances, we're providing customers with a cost-optimised way to run distributed applications like containerised microservices.
"A1 instances are powered by our new custom-designed AWS Graviton processors with the Arm instruction set that leverages our expertise in building hyperscale cloud platforms for over a decade.
"For scale-out distributed workloads, our new P3dn instances and C5n instances offer 100 Gbps networking performance to speed distributed machine learning training and high performance computing.
"These new instance launches expand what's already the industry's most powerful and cost-effective computing platform to meet the needs of new and emerging workloads."
Disclosure: The writer attended AWS re:Invent as a guest of the company.