DLA core is not enabled for sample_mnist

I was trying to use the DLA core on my GPU. I am using the Docker container nvcr.io/nvidia/tensorrt:19.09-py2. I tried running sample_mnist (in tensorrt/bin) as follows:

bin/sample_mnist --useDLACore=0 --fp16

and I get the following error:

&&&& RUNNING TensorRT.sample_mnist # bin/sample_mnist --useDLACore=0 --fp16
[09/17/2019-03:13:54] [I] Building and running a GPU inference engine for MNIST
Trying to use DLA core 0 on a platform that doesn't have any DLA cores
sample_mnist: …/common/common.h:584: void samplesCommon::enableDLA(nvinfer1::IBuilder*, nvinfer1::IBuilderConfig*, int, bool): Assertion `"Error: use DLA core on a platfrom that doesn't have any DLA cores" && false' failed.
Aborted (core dumped)

May I know what I am doing wrong? Should I take any additional steps to enable the DLA core? Here is the output of nvidia-smi:

Thu Oct 17 03:14:50 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.50       Driver Version: 430.50       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro GV100        Off  | 00000000:03:00.0 Off |                  Off |
| 34%   47C    P2    28W / 250W |      1MiB / 32508MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Currently, DLA cores are available only on the mobile Xavier platform. The GV100 is not equipped with NVDLA cores. For more information, please see http://nvdla.org/
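If you want your code to degrade gracefully instead of hitting the sample's assertion, you can query the DLA core count at build time. Here is a minimal sketch (assuming the TensorRT 6 C++ API shipped in the 19.09 container; the function name selectDLACoreIfAvailable is mine, not from the samples):

#include "NvInfer.h"
#include <iostream>

// Request a DLA core only if the platform actually has one; otherwise
// fall back to running the whole network on the GPU.
void selectDLACoreIfAvailable(nvinfer1::IBuilder* builder,
                              nvinfer1::IBuilderConfig* config, int dlaCore)
{
    if (builder->getNbDLACores() > dlaCore)
    {
        config->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);
        config->setDLACore(dlaCore);
        // Let layers unsupported by DLA run on the GPU instead of failing.
        config->setFlag(nvinfer1::BuilderFlag::kGPU_FALLBACK);
    }
    else
    {
        std::cout << "No DLA core " << dlaCore
                  << " on this platform; running on GPU instead." << std::endl;
    }
}

On a GV100, getNbDLACores() returns 0, so this takes the GPU path rather than aborting.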

Thanks. How about Tensor Cores? The spec says that it has 640 Tensor Cores: https://www.pny.com/nvidia-quadro-gv100.
How does TensorRT utilize those?

Yes, the GV100 is equipped with Tensor Cores. Tensor Cores are used automatically when a model is configured in INT8 or FP16 mode; TensorRT will choose the most performant kernel for inference. For more information, please see:

https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#mixed_precision_c
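As a concrete example, here is a minimal sketch of enabling FP16 mode so Tensor Core kernels become eligible during the builder's kernel selection (assuming the TensorRT 6 C++ API from the 19.09 container; the helper name enableFP16 is mine):

#include "NvInfer.h"

// Enabling FP16 lets the builder consider Tensor Core kernels; TensorRT
// still benchmarks candidates and picks the fastest one per layer.
void enableFP16(nvinfer1::IBuilder* builder, nvinfer1::IBuilderConfig* config)
{
    if (builder->platformHasFastFp16())  // true on Volta-class GPUs like GV100
    {
        config->setFlag(nvinfer1::BuilderFlag::kFP16);
    }
}

Note that setting the flag only permits FP16 kernels; TensorRT is free to keep a layer in FP32 if that happens to be faster.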

Sorry, but in our own TensorRT code we didn't see Tensor Cores being used. Are there any conditions that must be met to enable Tensor Cores? We are doing a simple convolution + ReLU.