I was trying to use the DLA core on my GPU. I am using the container nvcr.io/nvidia/tensorrt:19.09-py2. I tried running sample_mnist (in tensorrt/bin) as follows:
bin/sample_mnist --useDLACore=0 --fp16
and I get the following error:
&&&& RUNNING TensorRT.sample_mnist # bin/sample_mnist --useDLACore=0 --fp16
[09/17/2019-03:13:54] [I] Building and running a GPU inference engine for MNIST
Trying to use DLA core 0 on a platform that doesn’t have any DLA cores
sample_mnist: …/common/common.h:584: void samplesCommon::enableDLA(nvinfer1::IBuilder*, nvinfer1::IBuilderConfig*, int, bool): Assertion `"Error: use DLA core on a platfrom that doesn't have any DLA cores" && false' failed.
Aborted (core dumped)
May I know what I am doing wrong? Are there any additional steps needed to enable the DLA core? Here is the output of nvidia-smi:
Thu Oct 17 03:14:50 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.50       Driver Version: 430.50       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro GV100        Off  | 00000000:03:00.0 Off |                  Off |
| 34%   47C    P2    28W / 250W |      1MiB / 32508MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|   GPU       PID   Type   Process name                             Usage     |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
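Incidentally, here is a minimal sketch of how the available DLA core count can be queried from the TensorRT C++ API before attempting to enable DLA. This is my own illustration based on the IBuilder interface shipped with TensorRT 6 (the version in this container); it is not taken from the sample, and I have not verified it on this exact setup:

```cpp
#include <iostream>
#include "NvInfer.h"

// Minimal logger implementation required to create a builder.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    if (!builder)
    {
        std::cerr << "Failed to create TensorRT builder" << std::endl;
        return 1;
    }

    // getNbDLACores() reports how many DLA cores the platform exposes;
    // on a GPU without DLA hardware this should return 0, which is the
    // condition that trips the assertion in enableDLA().
    std::cout << "DLA cores available: " << builder->getNbDLACores() << std::endl;

    builder->destroy();
    return 0;
}
```

If this prints 0, the --useDLACore=0 flag would be expected to fail exactly as in the log above.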