CUDA (Compute Unified Device Architecture)

What is CUDA?

NVIDIA CUDA is a parallel computing platform and programming model developed by NVIDIA for general computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs. CUDA provides a development environment for creating high-performance GPU-accelerated applications, including libraries, tools, compilers, and runtime support. It supports a range of architectures, languages, and platforms, and enables developers to optimize and deploy applications on GPU-accelerated embedded systems, desktop workstations, enterprise data centers, cloud platforms, and HPC supercomputers.
Snippet from Wikipedia: CUDA

CUDA (Compute Unified Device Architecture) is a proprietary and closed-source parallel computing platform and application programming interface (API) that allows software to use certain types of graphics processing units (GPUs) for general-purpose processing, an approach called general-purpose computing on GPUs (GPGPU). CUDA is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements for the execution of compute kernels.

CUDA is designed to work with programming languages such as C, C++, and Fortran. This accessibility makes it easier for specialists in parallel programming to use GPU resources, in contrast to prior APIs like Direct3D and OpenGL, which required advanced skills in graphics programming. CUDA-powered GPUs also support programming frameworks such as OpenMP, OpenACC and OpenCL; and HIP by compiling such code to CUDA.

CUDA was created by Nvidia. When it was first introduced, the name was an acronym for Compute Unified Device Architecture, but Nvidia later dropped the common use of the acronym.
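
The compute-kernel model described in the snippet above can be illustrated with a minimal sketch in CUDA C++. This is an assumed example, not taken from the page: a vector-addition kernel launched across many parallel threads, using unified (managed) memory. It requires the NVIDIA CUDA toolkit and a CUDA-capable GPU, and is compiled with `nvcc`:

```cuda
// Minimal CUDA sketch (hypothetical file vector_add.cu).
// Build: nvcc vector_add.cu -o vector_add
#include <cstdio>
#include <cuda_runtime.h>

// __global__ marks a compute kernel: code that runs on the GPU,
// one instance per thread.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Managed memory is accessible from both the CPU and the GPU.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);

    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);  // 1.0 + 2.0 = 3.0

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The `<<<blocks, threads>>>` launch syntax is the CUDA extension to C++ that the host compilers (e.g. for plain C or Fortran via separate toolchains) do not understand; it is what distinguishes a kernel launch from an ordinary function call.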

External links:

  • kb/cuda.txt

Last modified: 2023/04/08 08:11 by Henrik Yllemo