Today's GPUs are massively parallel devices that offer programmers teraflops of supercomputing performance. But programming these devices and exploiting their fantastic potential is not always easy and can discourage application developers. CUDA, for example, is too often seen as a very low-level and complicated language, although its performance is widely recognised. In this lecture, we will present a more modern and higher-level approach to GPU computing with CUDA, using the Thrust library. A quick trajectory from a first "hello world" program to usable, real-world teraflop-scale computation will be provided, showing that exploiting the full potential of modern GPUs is far less complicated than it seems.
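To illustrate the higher-level style the lecture advocates, here is a minimal sketch of a Thrust program (the exact examples used in the lecture are not given here). It fills two vectors on the GPU, adds them element-wise, and reduces the result, all without writing a single kernel; it would be compiled with `nvcc`:

```cuda
#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <thrust/reduce.h>
#include <thrust/functional.h>
#include <iostream>

int main() {
    const int n = 1 << 20;

    // device_vector allocates on the GPU; the constructor fills it in parallel.
    thrust::device_vector<float> x(n, 1.0f);
    thrust::device_vector<float> y(n, 2.0f);

    // y <- x + y: one library call launches the element-wise kernel.
    thrust::transform(x.begin(), x.end(), y.begin(), y.begin(),
                      thrust::plus<float>());

    // Parallel reduction on the device; the result comes back to the host.
    float sum = thrust::reduce(y.begin(), y.end(), 0.0f);
    std::cout << "sum = " << sum << std::endl;
    return 0;
}
```

The STL-like iterator interface is the point: the same few lines express allocation, transfer, a custom kernel launch and an optimised reduction that would each take dozens of lines in low-level CUDA C.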
After completing two Master's degrees, in Scientific Computing and in Mathematics, Gilles joined the R&D team of Électricité de France in February 1998 to develop and maintain nuclear power plant simulation codes. In 2001 he became a support scientist at CEA/CCRT, one of the largest HPC centres in Europe, where he was involved in installing, developing, debugging and optimising codes across many scientific fields. Gilles then joined Bull's HPC benchmarking team in 2004, where he contributed to the design and deployment of HPC systems of all scales, including some of the most powerful machines in the Top500. He joined ICHEC in June 2008, where his first role was to manage all activities in support of users on ICHEC's IBM BlueGene/P machine. In May 2009, Gilles was appointed Head of the newly created Capability Computing and Novel Architectures group, with an extended remit covering the management of ICHEC's rapidly growing GPU computing activities and a technology watch report monitoring cutting-edge hardware and software on the HPC market. As such, Gilles is also Principal Investigator of the NVIDIA CUDA Research Center, a title awarded in June 2010.
In his present role, Gilles is particularly involved in all aspects of novel architectures and their programming languages, such as CUDA, OpenCL, HMPP and OpenACC.