The MLPerf benchmark suite has the backing of AMD, Arm, Baidu, Google, Intel, Microsoft, Nvidia and other technology leaders.
It covers various workloads including computer vision, language translation, personalised recommendations, and reinforcement learning tasks.
Nvidia submitted six benchmark results — image classification, object instance segmentation, object detection, non-recurrent translation, recurrent translation, and recommendation systems — running on configurations ranging from 16 GPUs on one node to 640 GPUs across 80 nodes, and achieved the fastest performance on all six.
Nvidia was the only company to enter six benchmarks.
The seventh category — reinforcement learning — does not yet take advantage of GPU acceleration.
Nvidia's benchmark results were achieved on DGX systems, including the DGX-2, which Nvidia bills as the world's most powerful AI system, with 16 fully connected V100 Tensor Core GPUs.
"The new MLPerf benchmarks demonstrate the unmatched performance and versatility of NVIDIA's Tensor Core GPUs," said Nvidia vice-president and general manager of accelerated computing, Ian Buck.
"Exceptionally affordable and available in every geography from every cloud service provider and every computer maker, our Tensor Core GPUs are helping developers around the world advance AI at every stage of development."
The 18.11 release of the deep learning containers available from the Nvidia GPU Cloud registry includes the exact software used to achieve the MLPerf results.
The containers include Nvidia's complete software stack along with Nvidia-optimised versions of the top AI frameworks.