Cisco Systems Inc. today became the latest data center equipment maker to introduce a system optimized for artificial intelligence.
The company has pulled back the curtain on the UCS C480 ML M5, a new four-rack-unit server specifically built to run processor-intensive deep learning workloads. The machine joins Cisco’s Unified Computing System family. UCS servers combine compute resources with networking and storage capabilities, plus management automation software.
The C480 ML M5 packs a bigger punch than many of the other systems in the series. It pairs two latest-generation Intel Corp. Xeon Scalable central processing units with eight of Nvidia Corp.’s Tesla V100 graphics cards. Cisco has opted to go with the top-end version of the chip, the SXM2, which packs 32 gigabytes of onboard memory.
According to official Nvidia figures, a single V100 provides up to 47 times the performance of a traditional CPU for deep learning workloads. The chip packs 21.1 billion transistors on a die the size of an Apple Watch face. Those transistors are organized into more than 5,700 processing cores, including 640 so-called Tensor Cores specifically engineered with AI in mind.
Inside Cisco’s new appliance, the V100 chips communicate with one another using a technology called NVLink that Nvidia has developed specifically for such systems. When it comes to storage, in turn, organizations can equip the system with up to 24 direct-attached disk or flash drives. Six of the C480 ML M5’s drive slots support flash devices based on the high-speed NVMe interconnect technology.
The systems offer the potential for Cisco, whose business has been under fire not only from hardware providers such as Dell Technologies Inc. and Hewlett Packard Enterprise Co. but also cloud providers such as Amazon Web Services Inc. and Microsoft Corp., to offer a more competitive range of data center products, said Chirag Dekate, a research director at Gartner Inc. He said no other hardware provider yet offers an eight-GPU server box.
“For Cisco, it minimizes data center churn, allowing Cisco to continue driving users to its platform,” he said. Likewise, the new systems give Cisco’s customers “the ability to do deep learning without introducing new diversity into their data centers.” Although many enterprises at least start their AI projects on public cloud infrastructure, he said, most move significant projects to their own, more cost-effective infrastructure, such as Cisco’s.
Cisco wants to ensure that the appliance will work with companies’ preferred AI tools. To this end, the networking giant is collaborating with Hortonworks Inc. to validate the machine for the latest 3.1 release of the Hadoop analytics platform. The version provides support for popular deep learning frameworks such as TensorFlow.
Cisco is also promising support for Kubeflow, an open-source tool that makes TensorFlow compatible with the Kubernetes software container orchestration engine. Software containers enable companies to easily move workloads across different types of environments. According to Cisco, this means that customers will have the ability to migrate on-premises AI models to the cloud and vice versa.
All that’s especially important, Dekate said, because one advantage Cisco has is a deeper ecosystem of software providers and other partners than some rivals. “It’s about delivering a hardware-software solution, not just hardware,” he said.
The introduction of the C480 ML M5 comes a month after Dell EMC debuted a server likewise aimed at AI workloads that can be equipped with up to four V100 chips. Earlier, Pure Storage Inc. unveiled AIRI, a system that combines five Nvidia DGX-1 servers featuring eight V100s each.
Cisco will make the C480 ML M5 generally available later this year.