AWS broadens its lineup of GPU instances with the new Nvidia Tesla M60-based G3 family.
Amazon Web Services (AWS) has launched a new family of high performance Nvidia-based GPU instances.
The new “G3” instances are powered by Nvidia’s Tesla M60 GPUs and succeed the earlier G2 family, whose largest instance had four Nvidia GRID GPUs, each with 1,536 CUDA cores.
As with the G2 family, which launched in 2013, the new G3 instances target applications that need massive parallel processing power, such as 3D rendering and visualization, virtual reality, video encoding, and remote graphics workstation applications.
AWS is offering three flavors of the G3 instance, with one, two, or four GPUs. Each GPU has 8GB of GPU memory, 2,048 parallel processing cores, and a hardware encoder that supports up to 10 H.265 (HEVC) streams and 18 H.264 streams.
AWS notes that the G3 instances support Nvidia’s GRID Virtual Workstation, and are capable of supporting four 4K monitors.
AWS claims the largest G3 instance, the g3.16xlarge, has twice the CPU power and eight times the host memory of its G2 counterpart. It has four GPUs, 64 vCPUs, and 488GB of RAM; the vCPUs are backed by Intel Xeon E5-2686 v4 (Broadwell) processors. The largest G2 instance, by comparison, offered 60GB of RAM.
On-demand pricing for the G3 instances is $1.14 per hour for the g3.4xlarge, $2.28 per hour for the g3.8xlarge, and $4.56 per hour for the g3.16xlarge. The instances are available only with AWS Elastic Block Store (EBS) storage, unlike the G2 instances, which included local SSD storage.
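For readers budgeting workloads, the quoted hourly rates translate into rough monthly figures as follows. This is only a sketch: actual AWS billing varies by region and usage, and the 730-hours-per-month figure is a common approximation, not an AWS number.

```python
# Illustrative monthly cost estimates for the G3 sizes, based on the
# on-demand hourly rates quoted in the article. Rates are region-dependent;
# 730 hours approximates one month of continuous use.
HOURLY_RATES = {
    "g3.4xlarge": 1.14,   # 1 GPU
    "g3.8xlarge": 2.28,   # 2 GPUs
    "g3.16xlarge": 4.56,  # 4 GPUs
}

HOURS_PER_MONTH = 730  # approximation: 24 h x ~30.4 days


def monthly_cost(instance_type: str) -> float:
    """Return the approximate monthly on-demand cost in USD."""
    return HOURLY_RATES[instance_type] * HOURS_PER_MONTH


if __name__ == "__main__":
    for itype in HOURLY_RATES:
        print(f"{itype}: ${monthly_cost(itype):,.2f}/month")
```

Note that the per-GPU price is constant across sizes ($1.14 per GPU per hour), so scaling up within the family is linear.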
The G3 instances are available in US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), AWS GovCloud (US), and EU (Ireland). AWS is planning to expand the offering to more regions in the coming months.
AWS has continued to broaden its lineup of GPU instances over the years. Back in 2013 it pitched the G2 family for machine learning and molecular modeling, but those workloads are now served by its P2 instances, which launched in September.
The largest P2 instance offers 16 GPUs with a combined 192GB of video memory, along with up to 732GB of host memory and up to 64 vCPUs using custom Intel Xeon E5-2686 v4 (Broadwell) processors.
“Today, AWS provides the broadest range of cloud instance types to support a wide variety of workloads. Customers have told us that having the ability to choose the right instance for the right workload enables them to operate more efficiently and go to market faster, which is why we continue to innovate to better support any workload,” said Matt Garman, Amazon EC2 vice president.
Microsoft has also been beefing up its GPU instances for Azure customers. The company launched its compute-focused NC-Series GPU instances last year, offering up to four Nvidia Tesla K80 GPUs and 244GB of RAM, with 24 cores using Intel Xeon E5-2690 v3 (Haswell) processors.
In May it announced the forthcoming ND-series, which uses Nvidia Pascal-based Tesla P40 GPUs, along with an updated lineup of NC-series instances. The largest ND-series instance features 24 vCPUs, four P40 GPUs, and 448GB of RAM; the largest NC-series instance, the NC24rs_v2, features 24 vCPUs, four Tesla P100 GPUs, and 448GB of RAM.