It is hard to imagine performing research without the help of scientific computing. The days of scientists working only at a lab bench or poring over equations are rapidly fading. Today, experiments can be planned based on output from computer simulations, and experimental results are confirmed using computational methods.
For example, the Materials Genome Project is currently plowing through the periodic table looking for structures and chemistries that may lead to enhanced materials for energy applications. By allowing a computer to perform most of the work, researchers can concentrate their valuable time on synthesizing and characterizing a small subset of interesting compounds identified by the search algorithm.
As the scope of scientific research has grown more complex, so have the computational methods and hardware required to answer scientific questions. This increasing complexity results in expensive, highly specialized scientific computing equipment that must be shared across multiple departments and research units, and the queue to access it can be unacceptably long. For smaller labs, it can be nearly impossible to get adequate, timely access to critically important computing resources. National user facilities and pay-per-use services do exist, but they can involve extraordinarily long wait times or prove prohibitively expensive for prolonged projects. In short, high-performance scientific computing is largely restricted to large, wealthy research labs.
With these issues in mind, a research team in the Laboratoire de Chimie de la Matière Condensée de Paris (LCMCP) at Chimie ParisTech, led by research engineer Yann Le Du and graduate student Mariem El Afrit, has been building a high-performance computing cluster using only commercially available, "gamer"-grade hardware. In a series of three articles, Ars will take an in-depth look at the GPU-based cluster being built at the LCMCP. This first article discusses the benefits of GPU-based processing, along with the hardware selection and benchmarking of the cluster. Two future articles will focus on software choices and performance, and on the parallel-processing and neural-network algorithms used on the system.