

World’s fastest supercomputers: US takes the lead again in high-performance computing

The US has again taken the lead in the global supercomputing sector thanks to a large investment by the Department of Energy, which made $260 million available to fund the development of the Summit supercomputer at the Oak Ridge National Laboratory.

According to Top500.org, which measures these things, the Summit supercomputing system is capable of 122,300 teraflops.

A teraflop is a trillion floating-point operations per second. A petaflop is 1,000 teraflops, or a quadrillion operations per second.
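
For concreteness, here's a minimal Python sketch of that unit arithmetic, using the Top500 figures quoted in this article:

```python
# FLOPS unit arithmetic, using the Top500 figures quoted in this article.
TERAFLOP = 10**12  # a trillion floating-point operations per second
PETAFLOP = 10**15  # a quadrillion operations per second (1,000 teraflops)

summit_teraflops = 122_300      # Summit (USA)
taihulight_teraflops = 93_014   # Sunway TaihuLight (China)

print(f"Summit: {summit_teraflops / 1_000:.1f} petaflops")             # 122.3
print(f"Summit: {summit_teraflops * TERAFLOP:.3e} operations/second")  # ~1.223e+17
print(f"Summit vs TaihuLight: {summit_teraflops / taihulight_teraflops:.2f}x")  # ~1.31x
```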

Until Summit came along, the fastest computer in the world was China’s Sunway TaihuLight, which performs at 93,014 teraflops.

But although TaihuLight was in the top spot, the US had a strong presence in the top 10 list published by Top500.

The second-fastest supercomputer in the US, called Sierra, operates at 71,610 teraflops.

Both the US supercomputers mentioned above – Summit and Sierra – are IBM systems.

Another US supercomputer in the top 10 is Titan, a Cray XK7 system based on 2.2 GHz Opteron processors and Nvidia K20X graphics processing units, which are designed for servers.

Titan is also owned by the DoE, and is currently ranked the seventh-fastest supercomputer in the world, according to Top500.org.

The top 10 for June 2018 is given below, and it’s interesting to note that Summit uses about one-fifth as many cores as TaihuLight, and consequently draws a lot less power – around half. (A rough per-core comparison follows the list.)

The top 10 fastest supercomputers in the world, according to the latest list published by Top500.org, are:

  1. Summit (USA) – 2.3 million cores; ~122,000 teraflops
  2. TaihuLight (China) – 10 million cores; ~93,000 teraflops
  3. Sierra (USA) – 1.6 million cores; ~72,000 teraflops
  4. Tianhe-2A (China) – 5 million cores; ~61,000 teraflops
  5. AI Bridging Cloud Infrastructure (Japan) – 392,000 cores; ~20,000 teraflops
  6. Piz Daint (Switzerland) – 392,000 cores; ~20,000 teraflops
  7. Titan (USA) – 561,000 cores; ~18,000 teraflops
  8. Sequoia (USA) – 1.6 million cores; ~17,000 teraflops
  9. Trinity (USA) – 980,000 cores; ~14,000 teraflops
  10. Cori (USA) – 622,000 cores; ~14,000 teraflops
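
As a rough illustration of the cores-versus-speed point made above, here's a short Python sketch computing per-core throughput from the rounded figures in the list (indicative only, since the numbers are approximate):

```python
# Per-core throughput, derived from the rounded figures in the list above.
systems = {
    "Summit":     (2_300_000, 122_000),   # (cores, teraflops)
    "TaihuLight": (10_000_000, 93_000),
    "Sierra":     (1_600_000, 72_000),
}

for name, (cores, teraflops) in systems.items():
    gflops_per_core = teraflops * 1_000 / cores  # 1 teraflop = 1,000 gigaflops
    print(f"{name}: {gflops_per_core:.1f} gigaflops per core")

# Summit:     53.0 gigaflops per core
# TaihuLight:  9.3 gigaflops per core
# Sierra:     45.0 gigaflops per core
# Summit delivers ~1.3x TaihuLight's total speed with roughly a fifth of the cores.
```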

In the past, designing and building supercomputers was a mainly academic activity – both in terms of who worked on them and the types of applications they ran.

Often they’d be applied in government, whether in atmospheric science to analyze and predict the weather, or in the military.

Now, however, with the expansion of the internet and the world wide web, such colossal computing power is increasingly seen as the only way to cope with the massive and accelerating growth in the volume of data being generated, and in the processing that data demands.

Supercomputers have also long been used in the commercial sector, in such industries as automaking, aerospace, and energy.

Mainly they’re used to analyze vast quantities of data and make predictions based on that data.

And it seems it’s not a fringe activity either. According to a study by supercomputing experts Earl Joseph and Steve Conway, “97 percent of companies that had adopted supercomputing say they could no longer compete or survive without it”.

So the supercomputing industry is expanding into new sectors and is in increasing demand.

Meanwhile, a competing paradigm is emerging: quantum computing, which is said to hold the promise of even faster and more powerful systems.

But while most people can just about understand how modern conventional computers work, with their binary transistors at the lowest level, very few have any real grasp of how quantum computing works.