
CLBlast: The tuned OpenCL BLAS library

Continuous integration (build-status badges on the project page):

             Build status   Tests on Intel CPU   Tests on NVIDIA GPU   Tests on AMD GPU
  Windows    yes            N/A                  N/A                   N/A
  Linux      yes            yes                  yes                   yes
  OS X       yes            yes                  N/A                   N/A

CLBlast is a modern, lightweight, performant and tunable OpenCL BLAS library written in C++11. It is designed to leverage the full performance potential of a wide variety of OpenCL devices from different vendors, including desktop and laptop GPUs, embedded GPUs, and other accelerators. CLBlast implements BLAS routines: basic linear algebra subprograms operating on vectors and matrices. See the CLBlast website for performance reports on various devices as well as the latest CLBlast news.

The library is not tuned for all possible OpenCL devices: if out-of-the-box performance is poor, please run the tuners first. See below for a list of already tuned devices and instructions on how to run the tuning yourself and contribute the results to future releases of the CLBlast library. See also the CLBlast feature roadmap to get an indication of the future of CLBlast.

Why CLBlast and not clBLAS or cuBLAS?

Use CLBlast instead of clBLAS:

  • When you care about achieving maximum performance.
  • When you want to be able to inspect the BLAS kernels or easily customize them to your needs.
  • When you run on exotic OpenCL devices that you need to tune yourself.
  • When you are still running on OpenCL 1.1 hardware.
  • When you prefer a C++ API over a C API (C API also available in CLBlast).
  • When you value an organized and modern C++ codebase.
  • When you target Intel CPUs and GPUs or embedded devices.
  • When you can benefit from the increased performance of half-precision fp16 data-types.

Use CLBlast instead of cuBLAS:

  • When you want your code to run on devices other than NVIDIA CUDA-enabled GPUs.
  • When you want to tune for a specific configuration (e.g. rectangular matrix-sizes).
  • When you sleep better if you know that the library you use is open-source.
  • When you are using OpenCL rather than CUDA.

When not to use CLBlast:

  • When you run on NVIDIA's CUDA-enabled GPUs only and can benefit from cuBLAS's assembly-level tuned kernels.

Getting started

CLBlast can be compiled with minimal dependencies (apart from OpenCL) in the usual CMake-way, e.g.:

mkdir build && cd build
cmake ..
make

Detailed instructions for various platforms can be found here.

Like clBLAS and cuBLAS, CLBlast requires OpenCL device buffers as arguments to its routines. This means you'll have full control over the OpenCL buffers and the host-device memory transfers. CLBlast's API is designed to resemble clBLAS's C API as much as possible, requiring little integration effort if clBLAS was used previously. Using CLBlast starts by including the C++ header:

#include <clblast.h>

Or alternatively the plain C version:

#include <clblast_c.h>

Afterwards, any of CLBlast's routines can be called directly: there is no need to initialize the library. The available routines and the required arguments are described in the above-mentioned include files and the included API documentation. The API is kept as close as possible to the Netlib BLAS and the cuBLAS/clBLAS APIs. For an overview of the supported routines, see here.
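As an illustration of the API, the sketch below multiplies two single-precision matrices with clblast::Gemm. The OpenCL boilerplate (first platform and first device, the matrix sizes, and names such as device_a) is only an example and not prescribed by CLBlast; most error checking is omitted for brevity.

// Minimal SGEMM sketch using CLBlast's C++ API (clblast::Gemm).
#define CL_USE_DEPRECATED_OPENCL_1_2_APIS  // for clCreateCommandQueue on OpenCL 2.x headers
#include <CL/cl.h>
#include <clblast.h>
#include <cstdio>
#include <vector>

int main() {
  // Pick the first OpenCL platform and device (illustrative only)
  cl_platform_id platform;
  clGetPlatformIDs(1, &platform, nullptr);
  cl_device_id device;
  clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, nullptr);
  cl_context context = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
  cl_command_queue queue = clCreateCommandQueue(context, device, 0, nullptr);

  // C (m x n) = alpha * A (m x k) * B (k x n) + beta * C, row-major storage
  const size_t m = 128, n = 64, k = 256;
  std::vector<float> host_a(m * k, 1.0f);
  std::vector<float> host_b(k * n, 2.0f);
  std::vector<float> host_c(m * n, 0.0f);

  // CLBlast operates on plain OpenCL device buffers; transfers are up to the caller
  cl_mem device_a = clCreateBuffer(context, CL_MEM_READ_WRITE, host_a.size() * sizeof(float), nullptr, nullptr);
  cl_mem device_b = clCreateBuffer(context, CL_MEM_READ_WRITE, host_b.size() * sizeof(float), nullptr, nullptr);
  cl_mem device_c = clCreateBuffer(context, CL_MEM_READ_WRITE, host_c.size() * sizeof(float), nullptr, nullptr);
  clEnqueueWriteBuffer(queue, device_a, CL_TRUE, 0, host_a.size() * sizeof(float), host_a.data(), 0, nullptr, nullptr);
  clEnqueueWriteBuffer(queue, device_b, CL_TRUE, 0, host_b.size() * sizeof(float), host_b.data(), 0, nullptr, nullptr);
  clEnqueueWriteBuffer(queue, device_c, CL_TRUE, 0, host_c.size() * sizeof(float), host_c.data(), 0, nullptr, nullptr);

  // Call the SGEMM routine; no library initialization is needed
  cl_event event = nullptr;
  const auto status = clblast::Gemm(clblast::Layout::kRowMajor,
                                    clblast::Transpose::kNo, clblast::Transpose::kNo,
                                    m, n, k,
                                    1.0f,
                                    device_a, 0, k,   // lda = k for row-major A
                                    device_b, 0, n,   // ldb = n for row-major B
                                    0.0f,
                                    device_c, 0, n,   // ldc = n for row-major C
                                    &queue, &event);

  // Wait for completion and copy the result back to the host
  if (status == clblast::StatusCode::kSuccess) {
    clWaitForEvents(1, &event);
    clReleaseEvent(event);
  }
  clEnqueueReadBuffer(queue, device_c, CL_TRUE, 0, host_c.size() * sizeof(float), host_c.data(), 0, nullptr, nullptr);
  std::printf("SGEMM status: %d, C[0] = %.1f\n", static_cast<int>(status), host_c[0]);  // expect 512.0 (= 2 * k)

  // Clean up
  clReleaseMemObject(device_a);
  clReleaseMemObject(device_b);
  clReleaseMemObject(device_c);
  clReleaseCommandQueue(queue);
  clReleaseContext(context);
  return 0;
}

Building such a program typically only requires linking against the CLBlast and OpenCL libraries (e.g. -lclblast -lOpenCL); the samples subfolder mentioned below contains complete, error-checked programs following the same pattern.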

To get started quickly, a couple of stand-alone example programs are included in the samples subfolder. They can optionally be compiled using the CMake infrastructure of CLBlast by providing the -DSAMPLES=ON flag, for example as follows:

cmake -DSAMPLES=ON ..

Afterwards, you can optionally read more about running proper benchmarks and tuning the library.

Full documentation

More detailed documentation is available in separate files:

Known issues

Known performance related issues:

  • Severe performance issues with Beignet v1.3.0 due to missing support for local memory. Please downgrade to v1.2.1 or upgrade to v1.3.1 or newer.

  • Performance issues on Qualcomm Adreno GPUs.

Other known issues:

  • Routines returning an integer are currently not properly tested for half-precision FP16: IHAMAX/IHAMIN/IHMAX/IHMIN

  • Half-precision FP16 tests might sometimes fail due to the order of multiplication (floating-point arithmetic is not associative), i.e. (a * b) * c != (c * b) * a

  • The AMD APP SDK has a bug causing a conflict with libstdc++, resulting in a segfault when initialising static variables. This has been reported to occur with the CLBlast tuners.

  • The AMD run-time compiler has a bug causing it to get stuck in an infinite loop. This is reported to happen occasionally when tuning the CLBlast GEMM routine.

Contributing

Contributions are welcome in the form of tuning results for OpenCL devices previously untested or pull requests. See the contributing guidelines for more details.

The main contributing authors (code, pull requests, testing) are:

Tuning and testing on a variety of OpenCL devices was made possible by:

Hardware/software for this project was contributed by:

  • ArrayFire for setting up and supporting Jenkins CI correctness tests on 7 platforms
  • JetBrains for supplying a free CLion IDE license for CLBlast developers
  • Travis CI and AppVeyor for free automated build tests for open-source projects

More information

Further information on CLBlast is available through the following links:

How to cite this work:

C. Nugteren. CLBlast: A Tuned OpenCL BLAS Library. ArXiv pre-print 1705.05249, 2017.

Support us

This project started in March 2015 as a free-time evenings-and-weekends project for Cedric Nugteren, next to a full-time job. If you are in the position to support the project by OpenCL-hardware donations or otherwise, please find contact information on the website of the main author.

