SC2003 Bandwidth Challenge Contestants

Bandwidth Challenge Teams Push Networking Performance Envelope at SC2003 Conference – Sustained 23 Gigabits per Second Sets New Record

PHOENIX, Ariz. – Teams of scientists from research organizations around the world competed recently in Phoenix to see who could move the most scientific data across networks in the fourth annual High-Performance Bandwidth Challenge, held in conjunction with SC2003, the international conference on high-performance computing and networking.

Once the data were moved and the performance tracked, a team representing the Stanford Linear Accelerator Center, Caltech and Los Alamos National Laboratory had achieved a new record for sustained throughput – 23.21 gigabits per second, surpassing last year's record by about 5 gigabits per second.

“While the results are impressive, the challenge is not just about blasting bits across the network,” said SC2003 Bandwidth Challenge Co-Chair Kevin Walsh of the San Diego Supercomputer Center. “It’s really about driving science, and this year’s competition clearly illustrates the role of high-performance, high-bandwidth networks in current scientific research in such areas as physics, biology and chemistry, as well as computer science.”

Walsh added that cutting-edge science carried out on an international scale is pushing the limits of currently available bandwidth, and projections are that Grid computing advances will grow in tandem with increases in high-performance, high-bandwidth networks.

For the fourth consecutive year, Qwest Communications sponsored prizes for the winning teams.

“Qwest is once again extremely pleased to sponsor the SC conference's Bandwidth Challenge,” said Dr. Wesley K. Kaplow, chief technology officer for Qwest Government Services. “This year's participants have clearly demonstrated that high-performance computing coupled with high-bandwidth networking is the foundation for igniting international innovation and collaboration.”

This year's winners:

Sustained Bandwidth Award: “Bandwidth Lust: Distributed Particle Physics Analysis Using Ultra-High Speed TCP on The Grid.” In what judges called the “Moore's Law, move over” award, the team demonstrated the best vision and articulation of the need for high-performance networks to serve science. The team moved a total of 6551.134 gigabits of data, reaching 23.23 gigabits per second. Team members are Harvey Newman, Julian Bunn, Sylvain Ravot, Conrad Steenberg, Yang Xia, Dan Nae, Caltech; Les Cottrell, Gary Buhrmaster, SLAC; Wu-chun Feng, LANL; Olivier Martin, CERN/DataTAG.

Tools Award: “High Performance Grid-Enabled Data Movement with GridFTP,” which emphasized creating common, standards-based tools that are the building blocks for new applications, and demonstrated their capability with visualization. The sustained high rate was 8.94 gigabits per second. Team members are William E. Allcock, John M. Bresnahan, Ian Foster, Rajkumar Kettimuthu, Joseph M. Link and Michael E. Link, all of Argonne National Laboratory; and Phil Andrews, Bryan Banister, Haisong Cai, Steve Cutchin, Jay Dombrowski, Patricia Kovatch, Martin W. Margo, Nathaniel Mendoza, Michael Packard, Don Thorp, all of San Diego Supercomputer Center (SDSC).

Application Foundation Award: “DataSpace,” which used a Web service framework integrated with high-performance networking tools to provide an application foundation for the use of distributed datasets. The high sustained rate was 3.66 gigabits per second. Team members are Robert L. Grossman, Yunhong Gu, David Hanley, Xinwei Hong, Michal Sabala, University of Illinois at Chicago; Joe Mambretti, Northwestern University; Cees de Laat, Freek Dijkstra, Hans Blom, University of Amsterdam; Dennis Paus, SURFnet; Alex Szalay, Johns Hopkins University; and Nagiza F. Samatova and Guru Kora, Oak Ridge National Laboratory.

Application Award: “Multi-Continental Telescience,” which emphasized user interaction with science instruments and distributed collaboration, with particular attention to ease of use by domain scientists. The team posted a sustained rate of 1.13 gigabits per second. Team members are Steve Peltier, Abel Lin, David Lee, UCSD BIRN; Francisco Capani, Universidad de Buenos Aires; Oleg Shupliakov, Karolinska Institute; Shimojo Shinji, Tokokazu Akiyama, Osaka University; H. Mori, Center for Ultra High Voltage Microscopy; KDDI R&D Labs; Fang-pang Lin, NCHC; Tom Hutton, SDSC.

Distance x Bandwidth Product & Network Technology Award: “Transmission Rate Controlled TCP on Data Reservoir, University of Tokyo,” which demonstrated attention to the details of controlling multiple gigabit streams fairly over extremely long distances. The team achieved very high average pipe utilization of over 65 percent with real disk-to-disk transfers and a high sustained rate of 7.56 gigabits per second. Team members are Mary Inaba, Makoto Nakamura, Hiroaki Kamesawa, Junji Tamatsukuri, Nao Aoshima, Kei Hiraki, University of Tokyo; Akira Jinzaki, Junichiro Shitami, Osamu Shimokuni, Jun Kawai, Toshihide Tsuzuki, Masanori Naganuma, Fujitsu Laboratories; Ryutaro Kurusu, Masakazu Sakamoto, Yuuki Furukawa, Yukichi Ikuta, Fujitsu Computer Technologies.

Commercial Tools Award: “On-Demand File Access over a Wide Area with GPFS,” showing the emergence and use of a commercial system that delivers high performance without significant impact on remote systems. The team posted a sustained rate of 8.96 gigabits per second. Team members are Phil Andrews, Bryan Banister, Haisong Cai, Steve Cutchin, Jay Dombrowski, Patricia Kovatch, Martin W. Margo, Nathaniel Mendoza, Michael Packard, Don Thorp, SDSC; Roger Haskin and Puneet Chaudhary, IBM.

Distributed Infrastructure Award: “Trans-Pacific Grid Datafarm,” a geographically distributed file system that took advantage of multiple physical paths to achieve high performance over long distances. The team achieved a high rate of 3.57 gigabits per second. Team members are Osamu Tatebe, Hirotaka Ogawa, Yuetsu Kodama, Tomohiro Kudoh, Satoshi Sekiguchi, AIST; Satoshi Matsuoka, Kento Aida, Tokyo Institute of Technology; Taisuke Boku, Mitsuhisa Sato, University of Tsukuba; Youhei Morita, KEK; Yoshinori Kitatsuji, APAN Tokyo XP; Jim Williams, John Hicks, TransPAC/Indiana University.

Both Directions Award: “Distributed Lustre File System Demonstration,” which proved that not all applications or bandwidth challenge entries move data in only one direction. The team achieved a rate of 9.02 gigabits per second. Team members are Peter Braam, Eric Barton, Jacob Berkma, Radika Vullikanti, Cluster File Systems; Hermann Von Drateln, Acme Microsystems; Nic Huang, Supermicron; Danny Caballes, John Szewc, Mike Allen, Rick Crowell, Matt Eclavea, Foundry Networks; Dave Fellinger, Ryan Weiss, John Josephakis, DataDirect Networks; Jeff James, Matt Baker, Intel; Leonid Grossman, Marc Kimball, S2io; Vicki Williams, Luis Martinez, Sandia National Laboratories; Parks Fields, Los Alamos National Laboratory; Rob Pennington, Michelle Butler, Tony Rimovsky, Patrick Dorn, Anthony Tong, National Center for Supercomputing Applications; Phil Andrews, Patricia Kovatch, Kevin Walsh, San Diego Supercomputer Center; Alane Alchorn, Jean Shuler, Keith Fitzgerald, Dave Wiltzius, Bill Boas, Pam Hamilton, Chris Morrone, Jason King, Danny Auble, Jeff Cunningham, Wayne Butman, Lawrence Livermore National Laboratory.

Kaplow said that participants this year focused more on data storage and movement than in years past, and that there have been significant increases in their capability, especially in the face of problems caused by large geographic distances.

“Next year, we are going to place additional emphasis on applications that use these facilities,” Kaplow said. “Also, we have seen an increase in the use of commercial and standards-based middleware to enable application development, which is key to enabling application writers to focus on their user requirements and less on how to push gigabits across kilometers.”

A graphical representation of each team’s effort, along with detailed statistics on the amount of data transferred, can be found at

SC2003 is sponsored by the Institute of Electrical and Electronics Engineers Computer Society and by the Association for Computing Machinery's Special Interest Group on Computer Architecture. For more information, please see