
November 15, 2011

Live from SC11: HPC and Big Data Get Hitched in Seattle

Nicole Hemsoth

Greetings from Seattle, the location for SC11, the annual must-attend event for the high performance computing community.

More than ten thousand supercomputing enthusiasts, professionals, and scientific and technical computing users have descended on the city to share their insights in sessions and events spread across the entire week.

In the wake of yesterday’s announcement of the Top500 list of the fastest supercomputers on earth, and presentations galore from vendors better known for building government and national lab clusters, one might wonder where the big data piece fits into the puzzle.

This year the supercomputing community is turning its attention to the big data phenomenon, looking inward to examine how those in the realm of HPC are addressing the challenges of massive data sets—and the I/O, programming, compute and other problems that stand in the way of efficient, scalable high-end computing.

In many ways, this has become the big data event of the year in its own right. After all, if there is one community in IT that is used to tackling incredibly complex, data-intensive problems involving datasets of mind-boggling scope and size, it is the supercomputing crew.

According to Scott Lathrop, general chair of the conference from the National Center for Supercomputing Applications, the focus on data-intensive computing in science for this year’s event stems from its importance across the community, as new challenges, frameworks, and hardware and software tools emerge to manage massive amounts of scientific data.

Lathrop says that in selecting the thrust of this year’s event, the “cross-cutting, driving issue that affects the majority of the community that comes to a conference like SC11” is data. According to Lathrop, this is true across “almost every aspect of science, engineering and scholarly research—and that data is becoming a cornerstone of a lot of this work.”

While the almighty CPU and performance benchmarks are still king here this week, announcements like today’s Graph500 are being seen as increasingly important for their real-world value, and sessions on big data problems are drawing standing-room-only audiences.

According to Jim Costa, Technical Program co-chair from Sandia National Laboratories, “there are a lot of times where we go and talk about what benchmarks people have hit, what we’re doing in terms of pushing the technical edge. We’re still doing that, but in the end it’s about what can you do with your equipment—what are you really going to use it for?”

Based on conversations around the show floor on the first night, it is clear that visions of exascale computing—the peak of HPC innovation—are becoming secondary to larger questions of data management, applications, programming, I/O and efficiency. As one of the lead researchers at a major DOE lab put it, exascale is the dream, but solving data-intensive problems must come first.

We will be on hand all week. If you’re here, please do stop by the HPCwire booth on the 6th floor to talk with us anytime.
