RedNeurons Launches Peta-scale Embedded HPC Platform

RedNeurons (Shanghai) Information Technology Co., Ltd. announced the completion of the Tensor MPU2016, an embedded high performance computing (HPC) technology demonstration and development platform. RedNeurons’ technology roadmap is designed to increase computing density per unit of power and space and to reduce the cost per Gigaflop/s for the fastest supercomputers, which currently ranges from 500 to 1,500 US dollars, to less than half that amount before the end of 2009.
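In round numbers, a Petaflop/s system represents one million Gigaflop/s of sustained performance, so at 500 to 1,500 US dollars per Gigaflop/s the compute hardware alone would cost roughly 500 million to 1.5 billion dollars; halving the per-Gigaflop cost would bring a Petaflop/s-class acquisition into the range of roughly 250 to 750 million dollars.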

“This product is a key milestone in delivering to the world’s scientific community a realistic, cost-effective method for achieving performance in the Petaflop/s (1,000 trillion floating-point operations per second) range on standard benchmarks such as Linpack,” stated Yuefan Deng, PhD (Columbia University), CEO of RedNeurons and Professor of Applied Mathematics at Stony Brook University (SUNY). A 20-year veteran of the HPC research field, Dr. Deng added, “Tensor is an apt name for this patent-pending architecture, as it embodies a true multi-dimensional full-mesh topology. Tensor equations were favored by Einstein as a simple way to describe the complex multi-dimensionality of general relativity. By balancing the network fabric evenly with the processors, we have managed to reduce cabling complexity, increase scalability, and preserve the use of standard processors and legacy HPC programs written in standard high-level languages with MPI (Message Passing Interface) functions.”
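To illustrate the portability claim, the sketch below is a minimal, generic MPI program in C that performs a nearest-neighbor exchange on a multi-dimensional Cartesian process mesh. It uses only standard MPI calls and assumes nothing about the Tensor interconnect itself; the mesh dimensions and message sizes are illustrative only.

/* Minimal sketch: nearest-neighbor exchange on a 3-D process mesh
 * using only standard MPI calls. Nothing here is specific to the
 * Tensor interconnect; dimensions and payloads are illustrative. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int nprocs, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Let MPI factor the job into a 3-D mesh, periodic along every axis. */
    int dims[3] = {0, 0, 0}, periods[3] = {1, 1, 1};
    MPI_Dims_create(nprocs, 3, dims);

    MPI_Comm mesh;
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 1, &mesh);
    MPI_Comm_rank(mesh, &rank);          /* rank may change after reorder */

    /* Exchange a small buffer with both neighbors along each axis. */
    double send = (double)rank, recv[6];
    for (int axis = 0; axis < 3; ++axis) {
        int lo, hi;
        MPI_Cart_shift(mesh, axis, 1, &lo, &hi);
        MPI_Sendrecv(&send, 1, MPI_DOUBLE, hi, 0,
                     &recv[2 * axis], 1, MPI_DOUBLE, lo, 0,
                     mesh, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&send, 1, MPI_DOUBLE, lo, 1,
                     &recv[2 * axis + 1], 1, MPI_DOUBLE, hi, 1,
                     mesh, MPI_STATUS_IGNORE);
    }

    if (rank == 0)
        printf("3-D mesh: %d x %d x %d processes\n", dims[0], dims[1], dims[2]);

    MPI_Comm_free(&mesh);
    MPI_Finalize();
    return 0;
}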

Jack Dongarra, Distinguished Professor of Computer Science at the University of Tennessee and principal author of the Linpack benchmark, said, “Dr. Deng’s team in Shanghai has designed and completed initial benchmark testing on a new HPC platform in under 12 months; this is unprecedented.”

Chi Xuebin, PhD (Chinese Academy of Sciences, CAS), a frequent contributor to the HPC research field, stated, “RedNeurons has an approach that offers a startling advantage in acquisition and operational costs over other approaches; their MPU system integrates the network, computational, and storage resources in a very beneficial, balanced manner.”

The Tensor MPU2016, a second-generation HPC development platform built with support from the People’s Republic of China Ministry of Science and Technology and the Shanghai Science and Technology Commission, as well as venture capital firms, is currently being used to develop the interconnect hardware and software logic for the third-generation RedNeurons Tensor MPU3064 platform, which may form the foundation for a 100 Teraflop/s machine slated for construction next year.

According to RedNeurons’ CTO, Alexander Korobka, PhD (SUNY-Stony Brook), “The Tensor MPU2016, with 16 processor cards containing Freescale 8641D SoC (system-on-chip) processors and Xilinx Virtex-4 FX FPGAs, is an ideal platform for companies developing high-performance solutions for the embedded systems market. The MPU, or Master Processing Unit, is a novel approach that provides high density and reliability while preserving CPU and interconnect flexibility. Initial performance tests achieved a High Performance Linpack (HPL) benchmark score of 35 Gigaflop/s for a single MPU2016, which triples the performance demonstrated by the prototype Tensor MPU1016 system produced by RedNeurons in the first quarter of 2007.” The MPU2016 has also been tested with other benchmarks, such as the NAS Parallel Benchmarks for computational fluid dynamics and NAMD for molecular dynamics.
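In rough terms, 35 Gigaflop/s across 16 processor cards corresponds to a sustained Linpack rate of roughly 2.2 Gigaflop/s per dual-core 8641D card.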

RedNeurons is a leading high-performance computing technology design firm specializing in the use of embedded systems components and advanced interconnection architectures to deliver practical, cost-effective HPC across a broad range of form factors.