New EC2 instances
New AWS Graviton2-powered, Arm-based C6gn instances deliver 100 Gbps networking performance for compute-intensive workloads, providing 40% better price performance than comparable x86-based instances.
C6gn instances will be available later this month in eight sizes providing up to 64 vCPUs, 100 Gbps of network bandwidth, and 38 Gbps of EBS bandwidth.
Graphics-optimised G4ad instances powered by AMD GPUs and CPUs provide up to 45% better price performance than Nvidia GPU-based G4dn instances.
G4ad instances with 1, 2, or 4 GPUs will be available in the next few days.
General purpose M5zn instances, powered by the fastest Intel Xeon Cascade Lake CPUs, provide the highest single-threaded performance from Cascade Lake processors available in the cloud. They avoid the over-provisioning of memory and local storage that comes from using z1d instances for tasks such as complex calculations and real-time analysis in financial, analytics, and gaming workloads.
M5zn instances are available in seven sizes with up to 48 vCPUs and 192GB memory, offering up to 45% better per-core performance than M5 instances. New D3 instances deliver up to 30% better processing performance and up to 2.5x higher network performance than previous-generation D2 instances.
D3 instances feature Cascade Lake processors with up to 48TB of storage, 32 vCPUs, 256GB of memory, and 25Gbps of network bandwidth. The D3en variant provides 336TB of storage, 75Gbps of network bandwidth, and up to 6.2GBps of disk throughput, allowing the consolidation of high capacity big data analytical workloads using software such as Redshift, Kafka, Hadoop and Elasticsearch.
New memory-optimised R5b instances for EBS deliver up to 60Gbps of bandwidth and 260,000 IOPS of instance-to-EBS performance for demanding database workloads using software such as SAP, Oracle, and Microsoft SQL Server. R5b instances are up to three times faster than same-sized R5 instances.
mac1.metal instances provide a bare-metal Mac mini in the cloud, primarily for developing and testing software for iPhone, iPad, Mac, Apple Watch, Apple TV, and Safari.
Smaller AWS Outposts form factors – 1U and 2U – take up less space and require significantly less power and network connectivity than the full 42U AWS Outposts, making them suitable for branch offices, shops, and mobile towers.
The 1U version features 64 vCPUs, 128GB memory, and 4TB of local NVMe storage, while the 2U version accommodates up to 128 vCPUs, 512GB memory, and 8TB of local NVMe storage.
Both run AWS services including EC2, ECS, EKS and VPC.
Outposts are remotely managed by AWS, giving the convenience of fully-managed systems with the advantages of on-premises operation.
These new AWS Outposts will be available in 2021.
EBS Block Express is said to be the first SAN built for the cloud, designed for the largest, most I/O-intensive mission-critical deployments of Oracle, SAP HANA, Microsoft SQL Server, and SAS Analytics. A single io2 volume can now be provisioned with up to 256,000 IOPS, 4,000 MBps throughput, 64TB of capacity, and consistent sub-millisecond latency. Multiple io2 Block Express volumes can be striped for even better performance.
Additional SAN features will be added to Block Express volumes in coming months, including multi-attach with I/O fencing, Fast Snapshot Restore, and Elastic Volumes.
EBS Gp3 volumes allow customers to get the performance needed for a range of common applications without having to over-provision on capacity. The new volume type allows IOPS and throughput to be provisioned separately from storage capacity. Customers can easily migrate Gp2 volumes to lower-cost Gp3 volumes without interrupting EC2 instances by using the Elastic Volumes feature.
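As a sketch of the migration path described above, the gp2-to-gp3 change amounts to a single ModifyVolume request in which IOPS and throughput are set independently of capacity. The payload below follows the EC2 ModifyVolume API shape; the volume ID and chosen performance figures are hypothetical, and in practice the request would be sent via boto3's `ec2.modify_volume()` or `aws ec2 modify-volume`.

```python
def gp3_migration_params(volume_id, iops=3000, throughput_mbps=125):
    """Build ModifyVolume parameters migrating a gp2 volume to gp3.

    Size is deliberately omitted: capacity is untouched while IOPS and
    throughput are re-provisioned independently of it.
    """
    return {
        "VolumeId": volume_id,          # hypothetical volume ID
        "VolumeType": "gp3",
        "Iops": iops,                   # provisioned separately from capacity
        "Throughput": throughput_mbps,  # MB/s, also independent of capacity
    }

params = gp3_migration_params("vol-0123456789abcdef0", iops=6000, throughput_mbps=250)
print(params)
```

Because the change goes through Elastic Volumes, it can be applied to a volume attached to a running instance without detaching it.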
S3 Intelligent-Tiering automatically optimises customers' storage costs for data with unknown or changing access patterns. AWS says it is the first and only cloud storage solution to provide dynamic pricing automatically based on the changing access patterns of individual objects in storage. This to some extent eliminates the need for customers to build their own applications to monitor and record access to individual objects, determine which objects were rarely accessed and needed to be moved to archive, and then actually move them.
S3 Intelligent-Tiering now provides automatic tiering and dynamic pricing across Frequent Access, Infrequent Access, Archive, and Deep Archive tiers, saving up to 95% on objects that have not been accessed for 180 days or more and are automatically moved from the Frequent Access tier to Deep Archive.
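The new archive tiers are opt-in, enabled per bucket. As a minimal sketch of such a configuration (the shape follows the S3 PutBucketIntelligentTieringConfiguration API; the configuration ID and day thresholds are illustrative choices, not requirements of the service):

```python
# Hypothetical Intelligent-Tiering configuration enabling the archive tiers.
config = {
    "Id": "archive-after-180-days",  # illustrative configuration name
    "Status": "Enabled",
    "Tierings": [
        # Objects not accessed for 90 days move to the Archive Access tier...
        {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
        # ...and after 180 days without access, to Deep Archive Access.
        {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
    ],
}
print(config)
```

The frequent/infrequent tiering remains fully automatic; only the archive transitions are governed by this configuration.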
Amazon S3 Replication (multi-destination) replicates data to multiple buckets within the same AWS Region, across multiple AWS Regions, or a combination of both.
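In configuration terms, fanning out to multiple destinations is a replication configuration with several rules, each pointing at a different destination bucket. A minimal sketch, assuming the standard PutBucketReplication schema (the role ARN and bucket ARNs are hypothetical placeholders):

```python
# Hypothetical multi-destination replication configuration.
replication_config = {
    "Role": "arn:aws:iam::111122223333:role/s3-replication-role",  # placeholder
    "Rules": [
        {
            "ID": "to-same-region-bucket",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::backup-same-region"},
        },
        {
            "ID": "to-cross-region-bucket",
            "Status": "Enabled",
            "Priority": 2,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::backup-eu-west-1"},
        },
    ],
}
print(len(replication_config["Rules"]), "destinations")
```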
Amazon Aurora Serverless v2 (in preview) provides the ability to scale database workloads to hundreds of thousands of transactions in a fraction of a second, the company claimed. Capacity increases occur in small increments, so customers only pay for the capacity they consume. This can reduce database costs by up to 90% compared to provisioning for peak capacity.
It is now suitable for a broader set of applications, including multi-tenant SaaS products.
Babelfish for Aurora PostgreSQL allows SQL Server applications to run directly on Amazon Aurora with few to no code changes. It understands Microsoft's T-SQL dialect as well as SQL Server's network protocol.
The source code for Babelfish for Aurora PostgreSQL will be made available in 2021 under the Apache 2.0 licence.
"With today's announcement of the next generation of Amazon Aurora Serverless and Babelfish, we are making it even easier for customers to leave the constraints of old-guard databases behind, enjoy the immense cost advantages of open source database engines, and choose the right database for the right job," said AWS vice president for databases Shawn Bice.
Amazon ECS Anywhere (available in the first half of 2021) provides customers with consistent tooling and APIs for all container-based applications, and the same Amazon ECS experience for cluster management, workload scheduling, and monitoring both in the cloud and in their own data centres, according to AWS officials.
This saves customers having to run, update and maintain their own container orchestrators on-premises.
Amazon EKS Anywhere (available in the first half of 2021) does much the same thing for Kubernetes, providing consistent Kubernetes management tooling on bare metal, VMware vSphere, or cloud virtual machine infrastructure.
AWS Proton (in preview) simplifies the provisioning, deployment and monitoring of applications with small and dynamic units of compute, such as containers and serverless systems. It comes with a set of curated application stacks with built-in AWS best practices for security, architecture and tools, so infrastructure teams can quickly and easily distribute trusted stacks to development teams.
AWS Proton also helps ensure these stacks stay standardised and up-to-date even as multiple teams deploy stacks simultaneously. It automates the deployment of infrastructure as code, CI/CD pipelines, and monitoring for container and serverless applications.
Amazon ECR (Elastic Container Registry) has gained a public registry for the storage, management, sharing and deployment of container images, so it can now be used to host private and public container images.
"Customers want to run their workloads in containers for greater portability, more efficient resource utilisation, and lower costs, but even with these significant advantages, customers have asked AWS to make containers easier to manage, deploy, and share," said AWS vice president of compute services Deepak Singh.
"The innovations announced today further expand AWS's leading container functionality by giving customers a consistent Amazon ECS and Amazon EKS experience in the cloud and in their own data centres, making it radically simpler to develop and deploy container and serverless applications, and providing a fully managed public container registry to more easily store, manage, and share container images."
Amazon DevOps Guru (in preview) uses machine learning based on Amazon.com and AWS experience to automatically detect application operational issues and recommend specific actions for remediation.
It automatically collects and analyses data including application metrics, logs, events and traces to identify deviations from normal operating patterns. When that happens, Amazon DevOps Guru alerts developers with details of the issue using Amazon Simple Notification Service or via third-party products such as Atlassian Opsgenie and PagerDuty.
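DevOps Guru's models are proprietary, but the underlying idea it describes above – learn a baseline of normal behaviour, then flag metrics that stray far from it – can be illustrated with a crude sketch. The metric values and the three-sigma threshold here are entirely illustrative and much simpler than what the service actually does:

```python
import statistics

def deviations(baseline, recent, threshold=3.0):
    """Flag recent samples more than `threshold` standard deviations
    from the baseline mean (a toy stand-in for a learned 'normal
    operating pattern')."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in recent if abs(x - mean) > threshold * stdev]

# Latency samples (ms): a steady baseline, then a spike.
baseline = [100, 102, 98, 101, 99, 100, 103, 97]
alerts = deviations(baseline, [101, 99, 180])
print(alerts)  # only the 180 ms sample stands out
```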
"Customers have asked us to continue adding services around areas where we can apply our own expertise on how to improve application availability and learn from the years of operational experience that we have acquired running Amazon.com," said AWS vice president of Amazon machine learning Swami Sivasubramanian.
"With Amazon DevOps Guru, we have taken our experience and built specialized machine learning models that help customers detect, troubleshoot, and prevent operational issues while providing intelligent recommendations when issues do arise. This enables teams to immediately benefit from operational best practices Amazon has learned from running Amazon.com, saving customers the time and effort that would otherwise be spent configuring and managing multiple monitoring systems."
Atlassian head of product for Opsgenie Emel Dogrusoz said "Atlassian is proud to partner with AWS on the launch of Amazon DevOps Guru and help empower teams to deploy code and operate services with confidence.
"With our new Opsgenie and Jira Service Management integration, the right teams can be immediately notified the instant Amazon DevOps Guru predicts a potential issue, or determines an incident has occurred. Amazon DevOps Guru provides a new dimension of insight, and Atlassian ensures the fastest response."
Aqua (Advanced Query Accelerator) for Amazon Redshift (in preview) adds compute to the storage layer, delivering up to 10x faster query performance than other cloud data warehouses, AWS claims.
It is a distributed and hardware-accelerated cache for Amazon Redshift that avoids the need to move data back and forth between storage and compute. Each node includes AWS-designed analytics processors to accelerate data compression, encryption, and data processing tasks like scans, aggregates, and filtering.
From January 2021, Aqua will be generally available on Redshift RA3 instances at no additional cost. No code changes are required to take advantage of Aqua.
AWS Glue Elastic Views (in preview) makes it easy to build materialised views (aka virtual tables) that automatically combine and replicate data across multiple data stores such as Amazon Aurora and Amazon DynamoDB.
It copies data from each source database to a target database and automatically keeps the target up to date, scaling as needed to accommodate changing workloads.
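The core idea – a derived target kept current as the sources change – can be sketched in a few lines. The table and column names below are illustrative; Glue Elastic Views does this continuously and serverlessly across real data stores:

```python
# Toy materialised view combining two "source tables" into a target.
orders = {}     # source 1: order_id -> customer_id
customers = {}  # source 2: customer_id -> customer name
view = {}       # target: order_id -> customer name

def refresh(order_id):
    """Re-derive one row of the view after a source change."""
    cust = orders.get(order_id)
    if cust is not None and cust in customers:
        view[order_id] = customers[cust]
    else:
        view.pop(order_id, None)

# Apply source changes, keeping the target up to date after each one.
customers["c1"] = "Ada"
orders["o1"] = "c1"
refresh("o1")
print(view)   # the view reflects the join of both sources
customers["c1"] = "Ada L."
refresh("o1")
print(view)   # a later source update propagates to the target
```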
Amazon QuickSight Q is a machine learning-powered natural language search capability for the Amazon QuickSight BI service. It provides auto-complete suggestions with key phrases and business terms, and automatically performs spell-check and acronym and synonym matching.
This allows it to answer questions such as "how are my sales tracking against quota?" and "what are the top products sold week-over-week by region?" Amazon QuickSight Q's accuracy improves as it learns from user interactions.
"With the capabilities we're announcing today, we're delivering an order-of-magnitude performance improvement for Amazon Redshift, new flexible ways to more easily move data between data stores, and the ability for customers to ask natural language questions in their business dashboards and receive answers in seconds," said Rahul Pathak, VP, Analytics, AWS. "These capabilities will meaningfully change the speed and ease of use with which customers can get value from their data at any scale."
Five new features have been announced for the Amazon Connect contact centre service.
Amazon Connect Wisdom assists agents by ingesting and organising content that agents need from databases and third-party repositories such as Salesforce and ServiceNow, and then using natural language processing to detect customer issues during the call and recommend relevant content. Wisdom is initially available as a preview.
Amazon Connect Customer Profiles helps present agents with a more unified profile of each customer, collected from databases, homegrown applications and third-party services. These connections can be custom-written for homegrown applications using Amazon Connect's SDK and APIs, while pre-built connectors to third-party applications such as Marketo, Salesforce, ServiceNow, and Zendesk are available from the Amazon Connect console.
Real-Time Contact Lens extends Contact Lens for Amazon Connect's call transcription and sentiment analysis capability into a tool that can be used while a call is in progress so managers can be alerted to issues in time to provide guidance or have the call transferred to another person.
Amazon Connect Tasks automates, tracks, and manages tasks for contact centre agents, improving agent productivity by up to 30%, according to the company. Pre-built connectors to CRM applications such as Salesforce and Zendesk are provided, along with APIs for integration with in-house applications.
Amazon Connect Voice ID (in preview) provides real-time caller authentication using machine learning-powered voice analysis, making contact centres more secure while providing a better customer experience and improving agent productivity.
"Today's five new Connect features... make it even easier for customer service agents to have the information they need to provide faster and more holistic customer experiences, optimise agents' time based on what matters most, and enable customer service managers to take action in real time to avoid contacts that will do harm to their brand," said AWS general manager of Amazon Connect Pasquale DeMaio.
Industrial machine learning
Amazon Monitron is an end-to-end machine monitoring system comprising sensors, a gateway, and a machine learning service to detect anomalies and predict when industrial equipment will require maintenance. It can be used on a variety of rotating equipment, including bearings, motors, pumps, and conveyor belts in industrial and manufacturing settings.
Amazon Lookout for Equipment provides a way to send data from existing sensors to AWS. Customers upload their sensor data to S3, and provide the S3 location to Amazon Lookout for Equipment, which can also take data from AWS IoT SiteWise and other popular machine operations systems including OSIsoft.
Amazon Lookout for Equipment analyses the data to assess normal or healthy patterns, and then uses the resulting model to analyze incoming sensor data for early warning signs of machine failure.
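The workflow described above – fit a model of healthy behaviour per sensor, then scan incoming readings for early warning signs – can be sketched very simply. The sensor names, values, and the min/max-with-margin "model" are illustrative assumptions; the actual service learns far richer multivariate models:

```python
def fit_normal(history, margin=0.1):
    """Per-sensor healthy band: observed min/max widened by a margin."""
    model = {}
    for sensor, values in history.items():
        lo, hi = min(values), max(values)
        pad = (hi - lo) * margin
        model[sensor] = (lo - pad, hi + pad)
    return model

def warnings(model, reading):
    """Sensors whose latest value falls outside the healthy band."""
    return [s for s, v in reading.items()
            if not (model[s][0] <= v <= model[s][1])]

# Hypothetical training data from a healthy machine.
history = {"bearing_temp_c": [60, 62, 61, 63], "vibration_mm_s": [1.0, 1.2, 1.1]}
model = fit_normal(history)
# An incoming reading with both sensors drifting out of range.
print(warnings(model, {"bearing_temp_c": 64, "vibration_mm_s": 2.5}))
```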
The AWS Panorama Appliance allows organisations to add computer vision to existing on-premises cameras. The appliance automatically identifies camera streams on the network, and is integrated with AWS machine learning services and IoT services that can be used to build custom machine learning models or ingest video for more refined analysis. These models can then be deployed at sites without connectivity.
An AWS Panorama Appliance can run computer vision models on multiple camera streams in parallel, so it can be used simultaneously for quality control, part identification, and workplace safety, for instance. Third-party pre-trained computer vision models can be employed, along with customer-developed computer vision models developed in Amazon SageMaker.
Amazon Lookout for Vision is described as a high accuracy, low-cost anomaly detection solution that uses machine learning to process thousands of images an hour to spot defects and anomalies.
Camera images can be sent to Amazon Lookout for Vision in batches or in real-time to identify anomalies such as a crack in a machine part, or an incorrect colour on a product. As few as 30 images are sufficient to establish the baseline "good" state.
Amazon Lookout for Vision runs in AWS, and sometime next year will also be available on AWS Panorama Appliances.
"Industrial and manufacturing customers are constantly under pressure from their shareholders, customers, governments, and competitors to reduce costs, improve quality, and maintain compliance. These organisations would like to use the cloud and machine learning to help them automate processes and augment human capabilities across their operations, but building these systems can be error prone, complex, time consuming, and expensive," said AWS vice president of Amazon machine learning Swami Sivasubramanian.
"We're excited to bring customers five new machine learning services purpose-built for industrial use that are easy to install, deploy, and get up and running quickly and that connect the cloud to the edge to help deliver the smart factories of the future for our industrial customers."