CUDA version: 8.0.50
OS: Ubuntu 16.04
Hardware: DRIVE PX 2, on dGPU (GP106)
Calling cublasGemmEx with M=N=K=lda=ldb=ldc=4096, alpha=1, beta=0 (both int32_t on the host), Atype=Btype=CUDA_R_8I, and Ctype=computeType=CUDA_R_32I always returns CUBLAS_STATUS_NOT_SUPPORTED, no matter which algorithm I choose (CUBLAS_GEMM_DFALT or CUBLAS_GEMM_ALGO0 through ALGO7).
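For reference, here is a minimal sketch of the call I am making (buffer initialization and error checking are trimmed; the leading dimensions and data types match the description above):

```cuda
#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    const int n = 4096;                       // M = N = K = lda = ldb = ldc
    int8_t  *dA, *dB;
    int32_t *dC;
    cudaMalloc(&dA, (size_t)n * n * sizeof(int8_t));
    cudaMalloc(&dB, (size_t)n * n * sizeof(int8_t));
    cudaMalloc(&dC, (size_t)n * n * sizeof(int32_t));

    cublasHandle_t handle;
    cublasCreate(&handle);

    // Scalars are int32_t on the host (default CUBLAS_POINTER_MODE_HOST).
    const int32_t alpha = 1, beta = 0;

    cublasStatus_t st = cublasGemmEx(
        handle, CUBLAS_OP_N, CUBLAS_OP_N,
        n, n, n,
        &alpha,
        dA, CUDA_R_8I, n,                     // Atype = CUDA_R_8I
        dB, CUDA_R_8I, n,                     // Btype = CUDA_R_8I
        &beta,
        dC, CUDA_R_32I, n,                    // Ctype = CUDA_R_32I
        CUDA_R_32I,                           // computeType = CUDA_R_32I
        CUBLAS_GEMM_DFALT);                   // same result with ALGO0..ALGO7

    // On this setup the status printed is 15, i.e. CUBLAS_STATUS_NOT_SUPPORTED.
    printf("cublasGemmEx status: %d\n", (int)st);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```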
I noticed that the CUDA 8 Performance Overview (released in November 2016, page 22) includes a benchmark of INT8 GEMM on a Tesla P40, achieving 32 TFLOPS of throughput.
cuBLAS's main page (https://developer.nvidia.com/cublas, in the Key Features section) also states that cuBLAS supports integer (INT8) matrix multiplication operations.