Nvidia has added nine new GPU-charged supercomputing containers to its cloud service.
It has expanded its Nvidia GPU Cloud (NGC) to 35 containers, more than tripling the number available since the platform launched last year.
They’re aimed at hardcore engineers running huge workloads – training machine-learning models or crunching through simulations. Developers can write their programmes in their framework of choice, then deploy them on shared GPU clusters so the models run faster.
NGC also comes with different packages suited to various applications. “The container for PGI compilers available on NGC will help developers build HPC applications targeting multicore CPUs and Nvidia Tesla GPUs. PGI compilers and tools enable development of performance-portable HPC applications using OpenACC, OpenMP and CUDA Fortran parallel programming,” Nvidia explained in a blog post.
Different containers target different fields: CHROMA is for maths and physics models, AMBER for molecular simulations, and CANDLE for cancer research. There are apparently over 27,000 users registered to NGC’s container registry.
“Since November’s Supercomputing Conference, nine new HPC and Visualization containers — including CHROMA, CANDLE, PGI and VMD — have been added to NGC. This is in addition to eight containers including NAMD, GROMACS, and ParaView launched at the previous year’s Supercomputing Conference,” it said.
Containers make life easier for developers, since they don’t need to install all the different libraries and back-end programmes required to deploy their models. They’re also handy for reproducibility: researchers can rerun their experiments on different systems and check whether their simulations produce the same results.
They’re also tested on different GPU-powered systems, including Nvidia’s DGX machines, and on cloud platforms such as Amazon Web Services, Google Cloud, and Oracle Cloud Infrastructure. ®