High Performance Distributed Deep Learning: A Beginner's Guide

A tutorial to be presented at ISCA 2021: The 48th International Symposium on Computer Architecture

When: June 18, 2021 (9 AM - 12 PM EDT)
Where: Worldwide Event

Dhabaleswar K. (DK) Panda, Hari Subramoni, and Arpan Jain

The Ohio State University


Abstract

The recent advances in Deep Learning (DL) have led to many exciting challenges and opportunities for CS and AI researchers alike. Modern DL frameworks like TensorFlow, PyTorch, and several others have emerged that offer ease of use and flexibility to train and deploy various types of Deep Neural Networks (DNNs). In this tutorial, we will provide an overview of interesting trends in DNN design and how cutting-edge hardware architectures and high-performance interconnects are playing a key role in moving the field forward. We will also present an overview of different DNN architectures and DL frameworks. Most DL frameworks started with a single-node design; however, approaches to parallelize DNN training are being actively explored. The DL community has pursued different distributed training designs that exploit communication runtimes like gRPC, MPI, and NCCL. We highlight new challenges and opportunities for communication runtimes to exploit high-performance CPU and GPU architectures to efficiently support large-scale distributed DNN training. We also highlight some of our co-design efforts to utilize MPI for large-scale DNN training on cutting-edge CPU and GPU architectures available on modern HPC clusters. Finally, we include hands-on exercises to enable the attendees to gain first-hand experience of running distributed DNN training experiments on a modern GPU cluster.
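As a concrete illustration of the communication pattern that underpins MPI-driven data-parallel training, the short sketch below averages per-rank gradients with an MPI allreduce. This is a minimal sketch, not code from the tutorial's hands-on exercises, and it assumes mpi4py and NumPy are installed:

    # Minimal sketch (assumed setup; not the tutorial's exercise code):
    # averaging per-rank gradients with an MPI allreduce, the collective
    # at the heart of data-parallel DNN training.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each rank would compute gradients on its own mini-batch; dummy
    # values stand in for them here.
    local_grad = np.full(8, float(rank), dtype=np.float64)

    # Sum the gradients across all ranks, then divide by the number of
    # ranks so every rank holds the same globally averaged gradient.
    global_grad = np.empty_like(local_grad)
    comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
    global_grad /= size

    if rank == 0:
        print("averaged gradient:", global_grad)

Run with, for example, mpirun -np 4 python allreduce_demo.py (the file name is illustrative); an MPI library such as MVAPICH2 then maps the allreduce onto the cluster's high-performance interconnect.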

Tutorial Objectives

Recent advancements in Artificial Intelligence (AI) have been fueled by the resurgence of Deep Neural Networks (DNNs) and various Deep Learning (DL) frameworks like Caffe, TensorFlow, and PyTorch. DNNs have found widespread applications in classical as well as emerging application areas like Image Recognition, Speech Processing, and Autonomous Vehicle systems. The rapid growth in DL can be attributed to two driving elements: 1) the public availability of various data sets like ImageNet and CIFAR, and 2) the widespread adoption of data-parallel hardware like GPUs and accelerators for DNN training. Today, the community is designing better, bigger, and deeper networks to improve accuracy through models like ResNet-50 and Inception v4, which require HPC systems with high-bandwidth and low-latency interconnects to scale out DNN training to hundreds of nodes efficiently. Based on these trends, this tutorial is proposed with the following objectives:

Targeted Audience

This tutorial is aimed at various categories of people working in the areas of Deep Learning and MPI-based distributed DNN training on modern HPC clusters with high-performance interconnects. The specific audiences this tutorial targets include:

Audience Prerequisites

There are no fixed prerequisites. Attendees with a general knowledge of HPC and networking will be able to understand and appreciate the tutorial. The tutorial is designed so that attendees are exposed to the topics in a smooth and progressive manner. The content level will be approximately 50% beginner, 30% intermediate, and 20% advanced.

Detailed Outline of the Tutorial

The tutorial is organized around the following topics, with a detailed time budget (half-day):

Brief Bio of Presenters

Dr. Dhabaleswar K. (DK) Panda is a Professor and University Distinguished Scholar of Computer Science at The Ohio State University. He obtained his Ph.D. in computer engineering from the University of Southern California. His research interests include parallel computer architecture, high performance computing, communication protocols, file systems, network-based computing, and Quality of Service. He has published over 500 papers in major journals and international conferences related to these research areas. Dr. Panda and his research group members have been doing extensive research on modern networking technologies including InfiniBand, HSE, and RDMA over Converged Enhanced Ethernet (RoCE). His research group is currently collaborating with National Laboratories and leading InfiniBand and 10GigE/iWARP companies on designing various subsystems of next generation high-end systems. The MVAPICH2 (High Performance MPI over InfiniBand, iWARP and RoCE) open-source software package, developed by his research group, is currently being used by more than 3,150 organizations worldwide (in 89 countries). This software has enabled several InfiniBand clusters (including the first one) to get into the latest TOP500 ranking. More than 1.3 million downloads of these libraries have taken place from the project's site. These software packages are also available with the stacks for network vendors (InfiniBand and iWARP), server vendors, and Linux distributors. The RDMA-enabled Apache Hadoop, Spark, and Memcached packages, consisting of acceleration for HDFS, MapReduce, RPC, Spark, and Memcached, are publicly available from the High-Performance Big Data (HiBD) project site: http://hibd.cse.ohio-state.edu. These packages are currently being used by more than 340 organizations in 38 countries. More than 39,000 downloads have taken place from the project's site. The group has also been focusing on co-designing Deep/Machine Learning frameworks and MPI libraries. High-performance and scalable solutions for Deep/Machine Learning are available from the High-Performance Deep Learning (HiDL) project site: http://hidl.cse.ohio-state.edu. Dr. Panda's research is supported by funding from the US National Science Foundation, the US Department of Energy, and several industry partners including Intel, Cisco, SUN, Mellanox, Microsoft, QLogic, NVIDIA, and NetApp. He is an IEEE Fellow and a member of ACM. More details about Dr. Panda, including a comprehensive CV and publication list, are available here.

Dr. Hari Subramoni has been a research scientist in the Department of Computer Science and Engineering at The Ohio State University, USA, since September 2015. His current research interests include high performance interconnects and protocols, parallel computer architecture, network-based computing, exascale computing, network topology aware computing, QoS, power-aware LAN-WAN communication, fault tolerance, virtualization, big data, and cloud computing. He has published over 70 papers in international journals and conferences related to these research areas. He has been actively involved in various professional activities in academic journals and conferences. Dr. Subramoni is doing research on the design and development of the MVAPICH2 (High Performance MPI over InfiniBand, iWARP and RoCE) and MVAPICH2-X (Hybrid MPI and PGAS (OpenSHMEM, UPC and CAF)) software packages. He is a member of IEEE. More details about Dr. Subramoni are available here.

Arpan Jain received his B.Tech. and M.Tech. degrees in Information Technology from ABV-IIITM, India. He is currently working towards his Ph.D. degree in Computer Science and Engineering at The Ohio State University. His current research focus lies at the intersection of High Performance Computing (HPC) libraries and Deep Learning (DL) frameworks. He is working on parallelization and distribution strategies for large-scale Deep Neural Network (DNN) training. He previously worked on speech analysis, time-series modeling, hyperparameter optimization, and object recognition. He actively contributes to projects like HiDL (High-Performance Deep Learning), the MVAPICH2-GDR software, and the LBANN deep learning framework.


Last Updated: June 10, 2021