How supercomputers found their industry mojo – the evolution of high performance computing – Diginomica

Posted: January 21, 2021 at 3:09 pm

Are supercomputers computers? No, not really. The essence of supercomputers is that they are not computers at all. They are scientific instruments of discovery - or strategic business facilities - that just happen to be made from computer technology. Lots of it.

Formerly the domain of scientific research and defense, supercomputers are now finding relevance in commercial operations.

Here are some examples: seismic processing in the oil industry is an application where High Performance Computing (HPC) prevails, since most of the computation is explicit in nature and done on structured grids. Similarly, both electromagnetics applications using method-of-moments algorithms and signal-processing applications account for the use of HPC in the aerospace community - for example, in making the 787 lighter.

But broader applications can be found in engineering, product design, complex supply chain optimization (actually, almost any kind of optimization), and Bitcoin mining (which you could do on a PC, but which has become far too demanding for one). AGC, the world's largest glass company, runs a constant stream of simulations as part of simulation-driven product development. For the City of Chicago's "Array of Things" project, 5,000 sensors on lampposts can't get their data to the data center fast enough for real-time modeling, so the sensors themselves do computation and act as a distributed supercomputer; expect to see many more implementations of "things" like these. Using the supercomputer at Lawrence Livermore National Laboratory, researchers found that trucks should wear aerodynamic skirts. And at Trek Bicycle, bicycles are streamlined from every angle, including in draft situations, using time-shared HPC.

In a gradual move away from supercomputers locked away doing math, the newest ones have the flexibility to handle AI, analytics, and other general HPC workloads: big data, data science, convergence, visualization, simulation, and modeling.

HPE recently announced a program called GreenLake, which I'll cover in a future article. I'm intrigued by their use of the term HPC. HPE, since its acquisition of Cray, Inc., in 2019, has become the top dog in the highest end of supercomputing: exascale. Exascale means the ability to perform a quintillion (or multiples of a quintillion) double-precision floating-point calculations per second (an exaFLOP). These machines, Aurora (1 exaFLOP, 2021), Frontier (1.5 exaFLOP, 2021), and El Capitan (2 exaFLOP, 2023), are being assembled now.

Keep in mind that the monsters' price tag is >$500 million, and that doesn't include the cost of the facilities to house them, the massive cooling systems, and the 30-40MW power supply (and bill). This is one reason why you can't install one just anywhere. They need a trunk line with enough power to run a small city. Expect next-generation supercomputers to drastically reduce the energy and cooling requirements over the next ten years.

A quote repeated so often that it can't be attributed: "An exaFLOP is one quintillion (10^18) double-precision floating-point operations per second, or 1,000 petaFLOPS. To match what a one-exaFLOP computer system can do in just one second, you'd have to perform one calculation every second for 31,688,765,000 years."
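As a quick back-of-the-envelope check of that comparison (and of the petaFLOP version of it that comes up below), the conversion is just a division by the number of seconds in a year. The sketch below assumes a year of 31,556,926 seconds, which reproduces the quoted figure almost exactly:

```c
/* Sanity-check the "one calculation per second for N years" comparisons.
 * The 31,556,926 seconds-per-year figure is an assumption chosen because
 * it matches the article's quoted numbers almost exactly. */
#include <stdio.h>

int main(void) {
    const double seconds_per_year = 31556926.0;  /* ~365.2422 days */
    const double exaflop  = 1.0e18;              /* 1 exaFLOP in FLOP/s  */
    const double petaflop = 1.0e15;              /* 1 petaFLOP in FLOP/s */

    /* One second of a 1-exaFLOP machine, as years of hand calculations. */
    printf("1 exaFLOP-second  ~= %.0f years\n", exaflop / seconds_per_year);

    /* Same comparison for the ~2 petaFLOPS needed to make the TOP500. */
    printf("2 petaFLOP-second ~= %.0f years\n",
           2.0 * petaflop / seconds_per_year);
    return 0;
}
```

The first line prints roughly 31.7 billion years, matching the quote; the second gives the 63-million-year figure that appears further down.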

These machines were all slated to replace the two current speedsters: the IBM machines Summit (200 petaFLOP, 2018) and Sierra (125 petaFLOP, 2018). So until the HPE/Cray machines come online, Summit and Sierra are still tops. Or so we thought.

Fujitsu surprised everyone in 2020 with the Fugaku machine, which took the #1 spot at ~475 petaFLOPS. But rumors are that's just the beginning for Fugaku, because Fujitsu designed everything from the ground up - chips, interconnect, and even software. Then HPE/Cray announced they would bring a 500 petaFLOP computer to Finland in 2021. That would theoretically place it at #1, unless Aurora or Frontier become operational first.

The TOP500 lists the fastest 500 supercomputers in the world. Five hundred. Not a typo. My supercomputing alma mater, Sandia National Labs, has the machine that currently sits at #486 (they have others), and just to make the list, a machine has to perform >2 petaFLOPS. To put that in perspective: to match what that #486 machine can do in one second, you'd have to perform one calculation every second for a mere 63,377,530 years! When I worked on ASCI Red's design in 1997, we brought up the first teraFLOP computer. That makes it one million times slower than the exascale computers coming up.

It's amazing that one man invented the supercomputer. Seymour Cray formed Cray Research in 1972 and produced the Cray-1 supercomputer. It was the world's fastest supercomputer from 1976 to 1982. It measured 8 feet wide by 6 feet high and contained 60 miles of wires. It was pretty, too. By comparison, today's exascale monsters take up the space of two football fields and run a few billion times faster.

The first customer was the Los Alamos National Lab. In 1993, Cray produced its first massively parallel computing supercomputer, the T3D. Supercomputers took off when they switched to, effectively, large arrays of identical servers spurred on by multi-core chips.

So, in a way, all computers are supercomputers now.

Sadly, Cray died in an automobile accident in 1996, and the company was sold to Silicon Graphics, which sold the Cray business to Tera Computer Company in 2000. That same year, Tera renamed itself Cray, Inc. In a brilliant move (in my opinion, though not that of the analysts at the time), HPE acquired Cray, Inc. HPE incorporated Cray technology to release the HPE Cray EX "Shasta" supercomputer, built for exascale-era workloads, in a smooth merger of technologies. It supports converged workloads and eliminates the distinction between supercomputers and clusters by combining HPC and AI workloads.

Central to its design is the Slingshot interconnect backbone. The U.S.'s first three exascale supercomputers are all Shasta systems. At the moment, HPE/Cray is on track to provide three (or more) of the four fastest supercomputers in the world. (Actually, Frontier was a Cray project before HPE acquired Cray, but they have blended their technologies smoothly.) Next time, I'll detail GreenLake and how HPE is offering HPC capabilities to organizations of all sizes.

One question is: do supercomputers process data differently than a commercial MPP arrangement does?

The one enduring constant is that using an HPC machine requires the programmer to think very differently about how their problem should be attacked. Today's supercomputers are, at a certain level, the same as the MPP clusters that commercial databases like Oracle, Teradata, Vertica, and IBM run on. Both employ an MPP "shared nothing" setup, where each server is independent except for a networking system. Each server has its own processors, memory, and, sometimes, storage, as well as its own copy of the operating system. Where they differ is that the supercomputers in production today are dramatically larger. IBM's Sierra combines commercial CPUs and Nvidia GPUs in 4,320 nodes, with 190,080 cores in total, 256 GB of memory per CPU, and 64 GB of memory per GPU. Commercial database installations are a fraction of its size.

Commercial MPP can't process 2.56 quadrillion double-precision floating-point calculations per second on a Dell chassis with at most 3,000 cores. But the real difference is in what they do. An MPP database may handle hundreds or thousands of queries per minute while doing optimization, load balancing, and workload management; the queries posed are themselves simple compared to modeling climate change. A supercomputer, by contrast, cannot handle that kind of concurrency, because each program may involve billions of calculations.

Currently, programming supercomputers is mostly done in Fortran, C, or C++. Compared to operational, transactional, and analytical programs, these "codes," as they are called, are simple; the configuration is the tricky part.
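To give a flavor of what such a code looks like, here is a minimal sketch in C using MPI, the de facto message-passing standard on shared-nothing clusters and supercomputers alike. It is purely illustrative (a toy numerical integration of pi), not any lab's production code, but the pattern it shows - each rank working on its own slice of the problem in its own memory, sharing results only over the interconnect - is the essence of the model described above.

```c
/* Toy MPI "code": each rank integrates part of 4/(1+x^2) over [0,1],
 * then the partial sums are combined with a reduction. Real codes add
 * domain decomposition, I/O, and checkpointing, but the shared-nothing
 * pattern is the same. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id         */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    const long n = 100000000L;              /* total integration steps   */
    const double h = 1.0 / (double)n;
    double local = 0.0;

    /* Each rank owns a disjoint slice of the work and its own memory. */
    for (long i = rank; i < n; i += size) {
        double x = h * ((double)i + 0.5);
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    /* The only sharing happens over the interconnect. */
    double pi = 0.0;
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi ~= %.12f (computed on %d ranks)\n", pi, size);

    MPI_Finalize();
    return 0;
}
```

On a real system, a job like this would typically be compiled with the site's MPI wrapper (e.g., mpicc) and launched through a batch scheduler rather than run interactively, which connects to the queueing model described below.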

Except for "air-gapped" installations, where the entire arrangement is disconnected from the outside world, most supercomputers are "shared" but not at all like cloud sharing. Hundreds of users around the world may use a supercomputer, but they're not interactive. Programs are run as jobs and are submitted to a queue as part of a specific grant funded by research organizations. Grants are in the order of thousands or millions of CPU hours. They are not open to the public. You are charged per CPU hour, times the queue cost (determined by queue specs and priority). If you have an 11 CPU program running 1 hour, you consume eleven service units multiplied by the queue cost).

Cloud providers support many architectures, so it is conceivable that they could offer HPC and even actual supercomputer time. The advantages of the cloud are pay-by-the-sip pricing, distribution, and multi-tenancy. I'll be looking for clarification from HPE for the next article. For now, as I understand it, petascale and exascale supercomputing are not part of GreenLake at the moment, and the program is broader than just HPC.
