Running tf_to_trt_image_classification for Xavier with NVDLA support

Hello all,

I have followed the modifications linked below, and I can successfully convert the InceptionV1 .pb to a plan file and run it on Xavier:

https://devtalk.nvidia.com/default/topic/1043619/jetson-agx-xavier/jetson-tf_to_trt_image_classification/post/5295278/#5295278

Now, to take this further, I want to run it on the NVDLA.

Inspired by the TensorRT 5 examples that run on the DLA, I added the following line to uff_to_plan.cpp right after “builder->setMaxWorkspaceSize(maxWorkspaceSize);” and rebuilt the project:

if (gDLA > 0) enableDLA(builder, gDLA);

where gDLA is set to 2 and enableDLA is:

inline void enableDLA(IBuilder* b, int dlaID)
{
    b->allowGPUFallback(true);  // let unsupported layers fall back to the GPU
    b->setFp16Mode(true);       // the DLA requires reduced precision
    // Map dlaID onto the TensorRT 5.0 DeviceType enum to select a DLA core.
    b->setDefaultDeviceType(static_cast<DeviceType>(dlaID));
}
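
For context, here is roughly where the call lands in the build flow of uff_to_plan.cpp (a sketch only; variable names follow the repository, and the UFF parsing code is elided):

IBuilder* builder = createInferBuilder(gLogger);
INetworkDefinition* network = builder->createNetwork();

/* ... UFF parser registers inputs/outputs and parses into network ... */

builder->setMaxBatchSize(maxBatchSize);
builder->setMaxWorkspaceSize(maxWorkspaceSize);

/* added: schedule layers onto the DLA before building the engine */
if (gDLA > 0)
    enableDLA(builder, gDLA);

ICudaEngine* engine = builder->buildCudaEngine(*network);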

However, the output I get is:

./build/examples/classify_image/classify_image data/images/gordon_setter.jpg data/plans/inception_v1.plan data/imagenet_labels_1001.txt input InceptionV1/Logits/SpatialSqueeze inception
Loading TensorRT engine from plan file...
Preprocessing input...
Executing inference engine...

The top-5 indices are: 215 235 166 214 213 
Which corresponds to class labels: 
0. Gordon setter
1. Rottweiler
2. black-and-tan coonhound
3. Irish setter, red setter
4. English setter
dla/eglUtils.cpp (121) - EGL Error in validateEglStream: 12289
terminate called after throwing an instance of 'nvinfer1::EglError'
  what():  std::exception
Aborted (core dumped)

I also changed the data type from float to half for scripts/convert_plan.py, and that didn’t help either. I should add that the output of this command shows that many layers are scheduled to run on the DLA.
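
For reference, the half-precision conversion I tried was along these lines (argument order as in the repository README, so treat this as approximate):

python scripts/convert_plan.py data/frozen_graphs/inception_v1.pb data/plans/inception_v1.plan input 224 224 InceptionV1/Logits/SpatialSqueeze 1 0 half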

Does anyone know what the problem is here?

Hi,

AFAIK, TensorFlow-TRT hasn’t enabled DLA support yet.
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/tensorrt/convert

Currently, you will need to use the native C++ TensorRT samples to run a task on the DLA.
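
For example, with the TensorRT 5 GA builder API the DLA-related calls look roughly like this (a minimal sketch, assuming the GA API where setDLACore() replaced the per-core DeviceType values used above):

builder->setFp16Mode(true);                       // the DLA needs reduced precision
builder->setDefaultDeviceType(DeviceType::kDLA);  // prefer the DLA for all layers
builder->setDLACore(0);                           // select DLA core 0 or 1
builder->allowGPUFallback(true);                  // GPU fallback for unsupported layers
ICudaEngine* engine = builder->buildCudaEngine(*network);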

Thanks.