Workshop on Communication Architecture for Clusters

CAC '02

To be held in conjunction with the
Int'l Parallel and Distributed Processing Symposium (IPDPS '02)
Fort Lauderdale, Florida, April 15, 2002

Papers presented in this workshop were published by the IEEE Computer Society Press as part of the IPDPS '02 workshop proceedings. The papers are available under Workshop 9.

Advance Program

April 15 (Monday)

8:00 - 8:15 Welcome and Workshop Introduction

8:15 - 9:15 Keynote Talk

Anthony Skjellum, Mississippi State University and MPI SoftTech
Title: Explicit Parallel Programming with `Message Passing Interfaces': Legacy, Longevity, Optimizability, Evolvability

9:15 - 10:30 Session I

Routing and Switching
Session Chair: Scott Pakin, Los Alamos National Laboratory

Each presentation is 20 minutes; the final 15 minutes of the session are reserved for discussion.

10:30 - 11:00 Break

11:00 - 12:15 Session II

Remote Memory Communication
Session Chair: Fabrizio Petrini, Los Alamos National Laboratory

Each presentation is 20 minutes; the final 15 minutes of the session are reserved for discussion.

12:15 - 1:45 Lunch (on your own)

1:45 - 3:00 Session III

I/O and NIC Support
Session Chair: Jarek Nieplocha, Pacific Northwest National Laboratory

Each presentation is 20 minutes; the final 15 minutes of the session are reserved for discussion.

3:00 - 3:30 Break

3:30 - 4:20 Session IV

Session Chair: Olav Lysne, Univ. of Oslo, Norway

Each presentation is 20 minutes; the final 10 minutes of the session are reserved for discussion.

4:20 - 4:30 Short Break

4:30 - 6:00 Panel Session

Title: Cluster Interconnects Crystal Ball: Which Will Win in 2006?

Description: The architectures and technologies used for cluster interconnection are in continual flux. Recently, a number of emerging technologies have begun vying for a spot at the cluster interconnect feast. InfiniBand specifies new technology at many transport levels, including a new user interface. The advantages of SCSI and Ethernet are being combined in the iSCSI protocol. Optics has begun to play a role in clusters. The latencies of some interconnects are getting low enough to consider support of NUMA traffic.

Today's gigabit and multi-gigabit Ethernet and Fibre Channel solutions give these technologies an evolutionary advantage over game-changing interconnects. Within the server, PCI-X and 3GIO are the anointed heirs to PCI. On the high end, proprietary interconnects such as Myrinet, GigaNet, and Quadrics are often employed, especially for supercomputer clusters.

Given the current plethora of cluster interconnects, what technologies---or combinations of such interconnects---are likely to be used in clusters in the 2006 time frame? And why? If you wish, you may create separate crystal-ball readings for low-end, mid-range, high-end, and supercomputer clusters. Or you may categorize your predictions according to other market delimiters, for instance, by application.

Moderator: Craig Stunkel, IBM T.J. Watson Research Center


Panelists:
David Addison, Quadrics
Kevin Deierling, Mellanox
Patrick Geoffray, Myricom
Shubu Mukherjee, Intel
Renato J. Recio, IBM

6:00 - Adjourn

Registration and Hotel Information

Workshop registration is handled by the IPDPS '02 conference; a single registration covers the conference and all 18 of its workshops. Please visit the IPDPS '02 web page for registration and hotel information.

The deadline for advance registration is March 25, 2002.

Call For Papers


The availability of commodity PCs/workstations and high-speed networks at low prices has enabled the development of low-cost clusters. These clusters are being targeted to support traditional high-end computing applications as well as emerging applications, especially those requiring high-performance servers. Designing high-performance, scalable clusters for these emerging applications requires the design and development of high-performance communication systems, low-overhead programming-environment support, and support for Quality of Service (QoS). New user-level communication protocol standards such as the Virtual Interface Architecture (VIA) and the InfiniBand Architecture (IBA) are providing exciting ways to design high-performance communication architectures for clusters.

A large number of research groups from academia, industry, and research labs are currently engaged in the above research directions. The goal of this workshop is to bring together researchers and practitioners working in the areas of communication and architecture to discuss state-of-the-art solutions as well as future trends for designing scalable, high-performance, and cost-effective communication architectures for clusters.

The first workshop in this series (CAC '01) was held in conjunction with the IPDPS '01 conference and was very successful. CAC '02 plans to continue that tradition.


Topics of interest for the workshop include but are not limited to:
  1. Router/switch, network, and network-interface architecture for supporting efficient point-to-point and collective communication at intra-cluster and inter-cluster levels.
  2. Design, development, and implementation of user-level communication protocols (GM, VIA, etc.) on different networking and interconnect technologies (such as Myrinet, Gigabit Ethernet, ATM, InfiniBand, etc.).
  3. High-performance implementation of different programming layers (Message Passing Interface (MPI), Distributed Shared Memory such as TreadMarks, Get/Put, Global Arrays, sockets, etc.).
  4. Communication and architectural issues related to flow control, management of communication resources, deadlock-handling, reliability, and QoS.
Results of both theoretical and practical significance will be considered.


The proceedings of this workshop will be published together with the proceedings of other IPDPS '02 workshops by the IEEE Computer Society Press.


Submission and review will be handled entirely electronically. Authors are requested to submit papers (in PDF format) not exceeding 10 single-spaced pages, including the abstract, five keywords, contact address, figures, and references. E-mail your manuscripts to:

Note: the PDF file must be viewable with the "acroread" tool. When creating your PDF file, it is also important to use a page size of 8.5 x 11 inches (Letter-sized output, not A4), since an A4-sized page may be truncated on a Letter-sized printer.
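For authors preparing manuscripts with LaTeX, a minimal preamble along the following lines produces Letter-sized output; this is only an illustrative sketch (assuming pdflatex and the standard geometry package), not a required template, and the margin value shown is an arbitrary example:

```latex
\documentclass[10pt,letterpaper]{article}
% Force Letter geometry (8.5 x 11 in) explicitly: some TeX
% installations default to A4, which may be truncated when
% printed on Letter-sized paper.
\usepackage[letterpaper,margin=1in]{geometry}

\begin{document}
% ... paper body ...
\end{document}
```

Authors using other tools should likewise check that the page size recorded in the generated PDF is Letter, not A4.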


Important Dates

Title and abstract:         October 29, 2001
Full paper submission:      November 5, 2001
Notification of acceptance: December 10, 2001
Camera-ready due:           January 21, 2002


Workshop Organizers

Dhabaleswar K. Panda (Ohio State), Jose Duato (Univ. of Valencia, Spain), and Craig Stunkel (IBM T.J. Watson Research Center)



For further questions, send e-mail to

Last updated Jan 2, 2002