GPU technology key to exascale computing
Disruptive technologies like the GPU are important steps on the path to exascale computing. With supercomputing already an essential tool in modern science, the industry's work in this space is vitally important to society and to the advancement of culture. Disruptive technologies are typically not deemed valuable at first by the markets they eventually serve, and so they face an uphill battle for acceptance. GPUs are just such a disruptive technology, with roots as graphics engines invented for teenage gamers. As the market grew, however, GPUs found utility first in workstations and more recently as accelerators in some of the world's fastest supercomputers. Accelerating existing systems is helpful, but the industry needs GPUs to push even further into the future, especially as power constraints become an imperative: supercomputing is now power-limited, just like the notebook, the tablet and the cellphone.
CPUs waste inordinate amounts of energy scheduling instructions and moving data across the chip, while GPUs are simpler, with minimal overhead. Graphics processors are not optimized for single-threaded performance, but they do offer IEEE-compliant floating point, which led many researchers to wish they could describe all of their problems as triangles. Today, NVIDIA's family of add-in graphics cards for the personal computer (PC) features powerful GPUs designed to improve 3D graphics and video performance, and there have even been hints of a project in the works that would enable NVIDIA's CUDA technology on AMD GPUs.

While both NVIDIA and AMD have announced support for open GPGPU standards such as OpenCL and Microsoft DirectCompute, both companies also have their own GPGPU technologies. Between CUDA and AMD's Stream, the former appears to be the stronger: CUDA is by far the more productive programming environment, it has gained far more traction among people writing parallel applications, and it is an easier language to use. Most of these programmers consider C with CUDA extensions the most convenient way to write GPU applications.
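To give a sense of what "C with CUDA extensions" looks like in practice, here is a minimal sketch of a kernel that scales a vector on the GPU. The kernel name, array size and launch configuration are arbitrary choices for illustration, not anything prescribed by CUDA.

    #include <cuda_runtime.h>

    /* Device code: each thread scales one element. The __global__
       qualifier is the CUDA extension that marks a GPU kernel. */
    __global__ void scale(float *x, float a, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] = a * x[i];
    }

    int main(void) {
        const int n = 1 << 20;
        float *d_x;
        cudaMalloc((void **)&d_x, n * sizeof(float));  /* allocate on the GPU */
        cudaMemset(d_x, 0, n * sizeof(float));

        /* Launch enough 256-thread blocks to cover n elements. */
        int blocks = (n + 255) / 256;
        scale<<<blocks, 256>>>(d_x, 2.0f, n);
        cudaDeviceSynchronize();

        cudaFree(d_x);
        return 0;
    }

The device code is ordinary C annotated with __global__ and launched with the <<<blocks, threads>>> syntax; NVCC compiles it ahead of time alongside the host code.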
OpenCL, by contrast, is really a driver interface: an API and a set of calls. To run a kernel, you essentially make an API call that passes the code for that kernel as a string, and the compilation happens on the fly in the driver. Being able to write in C for CUDA, run NVCC and pre-compile your kernels seems the more efficient way of operating (a sketch of the OpenCL flow follows below). Reaching exascale, however, presents something of an innovator's dilemma. GPGPU and cloud computing have been hot topics for the last several years, and Intel has shown off several designs, such as the Single-chip Cloud Computer, in the past.
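To make the contrast concrete, the sketch below shows the OpenCL host-side flow described above, in which the kernel source is handed to the driver as a string and built at run time. It is a minimal example under simplifying assumptions: the kernel name scale is made up for illustration, and error checking, buffer setup and kernel launch are omitted.

    #include <stdio.h>
    #include <CL/cl.h>

    /* The kernel exists only as a string inside the host program. */
    static const char *kernel_src =
        "__kernel void scale(__global float *x, float a) {"
        "    int i = get_global_id(0);"
        "    x[i] = a * x[i];"
        "}";

    int main(void) {
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);

        /* Hand the source string to the driver... */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
        /* ...which compiles it on the fly, at run time. */
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);

        cl_kernel kernel = clCreateKernel(prog, "scale", NULL);
        printf("kernel built: %p\n", (void *)kernel);

        clReleaseKernel(kernel);
        clReleaseProgram(prog);
        clReleaseContext(ctx);
        return 0;
    }

With C for CUDA, by comparison, the kernel is compiled by NVCC before the program ever runs, which is part of why that route is described above as the more efficient way of operating.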