Electronic Design

MathWorks MATLAB GPU Q&A


GPUs are ubiquitous and fast. They have transformed graphics, rendering virtual 3D worlds in real time, and they are becoming major workhorses in supercomputing. The MathWorks MATLAB GPU support is a major step forward for developers who want to take advantage of this hardware, especially since most MATLAB applications tend toward matrix calculations, where GPUs excel.

To get the lowdown on MATLAB's GPU enhancements, I spoke with Silvina Grad-Freilich, Manager of Parallel-Computing Marketing at MathWorks.

Wong: Why is GPU support important to the parallel computing industry?

Grad-Freilich: The need for computational power for scientific and engineering applications continues to increase at an accelerated pace. Engineering, financial, and scientific users want to get their answers faster and are solving larger problems than they previously were able to. As a result, demand for computational power continues to grow quickly. By adding GPUs, users can pack significant computing power (potentially comparable to a small cluster) into a single high-end workstation without having to invest in additional computer systems. With the development of programming tools like NVIDIA CUDA, GPUs are gaining acceptance in the technical computing industry, whereas previously they were the domain of expert gaming programmers only.

Wong: What benefits does the use of GPUs provide users?

Grad-Freilich: GPUs let users accelerate certain operations significantly compared to a multicore CPU. Used in combination with multicore CPUs and computer clusters, GPUs hold the promise of significant application speedups, provided the applications are GPU-enabled.

Wong: GPU accelerators are not new, so why is the industry just showing interest now?

Grad-Freilich: In the past, programming GPUs was the domain of programmers in the gaming industry who were looking to provide highly realistic gaming experiences to their customers. With the development of technologies such as NVIDIA CUDA, GPU designers have expanded the audience to include the technical and scientific computing world as well.

In addition, GPUs now include support for higher-precision arithmetic and IEEE-compliant operations, which is an absolute must for mainstream technical and scientific computing projects. There is a growing ecosystem around CUDA technology: for example, the BLAS and LAPACK libraries have been implemented for it. The result is this gradual "mainstreaming" of GPU technology. Note that although CUDA technology has significantly reduced the programming effort for GPUs, it is still hard for regular engineers and scientists (those who are not computer scientists) to program using the technology.


Wong: What industries are adopting GPUs, and how are they being used?

Grad-Freilich: During the beta customer program we ran in early 2010 to test support for GPUs from MATLAB, users cited a wide variety of applications for which they planned to use MATLAB GPU computing functionality, including financial analysis; signal and image processing in domains like remote sensing, medical imaging, and radar; Monte Carlo simulations; seismic data processing; and bioinformatics. In general, there was a spread of applications across all the industries in which MATLAB is used.

Wong: How is MathWorks incorporating GPU support into its products? For example, what do developers need to do to take advantage of a GPU? What new syntax or functions can developers use to utilize GPU computing features, and what libraries take advantage of a GPU?

Grad-Freilich: MathWorks Parallel Computing Toolbox lets MATLAB users take advantage of multicore processors, GPUs, and computer clusters without low-level C or Fortran programming. The toolbox provides a special array type, GPUArray, which lets users hold data in GPU memory. Regular MATLAB functions, when they detect that their inputs are GPUArrays, execute on the GPU. Thus, users don't have to spend time changing their existing algorithms to access GPU computing. We have already adapted many MATLAB functions to make use of GPUs, and this number will continue to grow as we add more functionality.

More advanced users who have already written their own CUDA code can invoke CUDA kernels directly from MATLAB without going through the process of writing glue code between MATLAB and the CUDA kernels. This enables them to access hand-tuned, custom CUDA kernels right from MATLAB to supplement the functionality already available via our products. For example, consider a simple MATLAB function such as the following:

function [maxVal, checkVal] = myfunction(A, b)
  x = A\b;                 % Solves the system of linear equations Ax = b
  maxVal = max(x(:));
  checkVal = max(A*x - b);
end

In a regular MATLAB session, a MATLAB user can invoke the function as follows:
>> A = rand(1000); b = rand(1000, 1);
>> [maxV, checkV] = myfunction(A, b);

To take advantage of GPUs for this function, a MATLAB user can create GPUArrays by transferring A and b to the GPU and then call the same function. Note that the function itself is not changed at all.

>> gA = gpuArray(A); gb = gpuArray(b);
>> [gmaxV, gcheckV] = myfunction(gA, gb);

Thus, by simply changing the type of the inputs, MATLAB users can take advantage of the GPU for their application.* Parallel Computing Toolbox supports CUDA-enabled devices with a compute capability of 1.3 or higher because of their double-precision floating-point support, IEEE compliance, and cross-platform support, which are a must for MATLAB users such as engineers, scientists, and quantitative analysts.

* Note: Not all MATLAB functions are GPU-enabled at this time.
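For readers who do have their own CUDA code, a minimal sketch of the kernel-invocation workflow described above might look as follows. The file names, kernel name, and launch configuration here are hypothetical; this assumes a PTX file compiled from a CUDA kernel that adds two vectors.

```matlab
% Sketch: calling a hand-written CUDA kernel from MATLAB.
% Assumes "addVectors.ptx" was compiled from "addVectors.cu",
% which defines a kernel named addVectors (hypothetical names).
k = parallel.gpu.CUDAKernel('addVectors.ptx', 'addVectors.cu');
k.ThreadBlockSize = 256;             % threads per block
k.GridSize = ceil(1e6 / 256);        % enough blocks to cover 1e6 elements

gA = gpuArray(rand(1e6, 1));         % inputs already in GPU memory
gB = gpuArray(rand(1e6, 1));
gC = gpuArray(zeros(1e6, 1));        % preallocated output

gC = feval(k, gC, gA, gB);           % launch the kernel on the GPU
C  = gather(gC);                     % copy the result back to the host
```

Because the inputs and output are GPUArrays, the kernel launch involves no extra data transfer; `gather` is only needed when the result must come back to host memory.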

Wong: Can MATLAB take advantage of multiple GPUs?

Grad-Freilich: Yes. Parallel Computing Toolbox provides mechanisms to run multiple MATLAB workers on a desktop machine, each of which can communicate with a GPU. Users can also use MATLAB Distributed Computing Server, a MathWorks product targeted at larger resources such as clusters and cloud computing, to run their GPU code on GPU-equipped clusters and cloud resources without further code changes. In fact, at SC10 in November, we were using the latest GPU-equipped Amazon EC2 Compute Cluster instances in our booth for demos.
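A minimal sketch of the one-worker-per-GPU pattern described above, assuming a pool opened with as many workers as there are GPUs (pool syntax and matrix size here are illustrative):

```matlab
% Sketch: bind each MATLAB worker to its own GPU, then compute in parallel.
matlabpool('open', gpuDeviceCount);  % one worker per available GPU
spmd
    gpuDevice(labindex);             % worker i selects GPU #i
    gA = gpuArray(rand(2000));       % each worker holds its own data slice
    gF = fft(gA);                    % executes on that worker's GPU
    F  = gather(gF);                 % bring each worker's result to its host memory
end
matlabpool('close');
```

The same spmd block runs unchanged on a GPU-equipped cluster when the pool is backed by MATLAB Distributed Computing Server.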

Wong: What kinds of speed ups can developers get by using a GPU?

Grad-Freilich: In our GPU benchmarks, we have seen results similar to those NVIDIA obtains using the CUDA interface directly. However, it is impossible to give a generic answer for user-developed applications, given that speedup is highly dependent on the application itself and the hardware configuration being used. For example, variations in the CPU+GPU combination can yield widely different performance results. In addition, correct benchmarks should factor in data transfer time from CPU to GPU memory and vice versa.

There are two examples on our website that show speedups with GPUs (1, 2) using double-precision full-matrix computations for small to moderately sized problems.
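The point above about transfer time can be made concrete with a simple timing sketch. The operation and matrix size are arbitrary; the key detail is that the timed region includes both the host-to-GPU and GPU-to-host copies (`gather` also forces the GPU computation to finish before `toc`):

```matlab
% Sketch: benchmark a GPU operation fairly, including data transfers.
A = rand(4000);

tic;
gA = gpuArray(A);     % host -> GPU transfer (counted in the timing)
gB = fft(gA);         % computation on the GPU
B  = gather(gB);      % GPU -> host transfer; also synchronizes
gpuTime = toc;

tic;
Bcpu = fft(A);        % same computation on the CPU for comparison
cpuTime = toc;

fprintf('GPU: %.3f s   CPU: %.3f s\n', gpuTime, cpuTime);
```

Timing only the `fft(gA)` line would overstate the speedup for workloads where data does not stay resident on the GPU between operations.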

Wong: Does MATLAB's GPU support provide integration with OpenCL or CUDA and what GPU platforms does MATLAB support?

Grad-Freilich: At the moment, GPU support in MATLAB is available for NVIDIA's CUDA-enabled GPUs that have double precision support (compute capability version 1.3 and above). These are supported on all platforms that MATLAB supports - Windows, Linux and Mac.

Wong: What happens if a GPU is not available on a system?

Grad-Freilich: Transferring the data to be processed into GPU memory is the first step a user must take to use GPU computing functionality, and it is not possible without a GPU present. Provided there are no explicit calls to GPU functionality, such as a call to transfer data to the GPU, a MATLAB function will still accept regular inputs and work normally.
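In practice, a script can guard that first transfer step so the same code runs with or without a GPU. A minimal sketch, assuming `gpuDeviceCount` returns 0 on a machine with no supported GPU:

```matlab
% Sketch: fall back to CPU execution when no GPU is present.
useGPU = gpuDeviceCount > 0;

A = rand(1000);
b = rand(1000, 1);
if useGPU
    A = gpuArray(A);   % only transfer to GPU memory when a GPU exists
    b = gpuArray(b);
end

x = A \ b;             % same code path executes on GPU or CPU
x = gather(x);         % gather returns regular arrays unchanged
```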
