Error when enabling DLA for DetectNet (Solved)

In the following snippet of code:

builder->setMaxBatchSize(maxBatchSize);
builder->setMaxWorkspaceSize(16 << 20);

// allow layers that cannot run on DLA to fall back to the GPU
builder->allowGPUFallback(true);
builder->setFp16Mode(true);

// select the default device type by its raw enum value (intended to be DLA)
builder->setDefaultDeviceType(static_cast<nvinfer1::DeviceType>(2));

// set up the network for paired-fp16 format
if(mEnableFP16)
	builder->setHalf2Mode(true);

nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);

The line builder->setDefaultDeviceType(static_cast<nvinfer1::DeviceType>(2)); is the one generating the error.

LOG:

[GIE]  TensorRT version 5.0, build 0
[GIE]  attempting to open cache file networks/1280x720/snapshot_iter_8938250.caffemodel.2.tensorcache
[GIE]  cache file not found, profiling network model
[GIE]  platform has FP16 support.
[GIE]  loading networks/1280x720/deploy.prototxt networks/1280x720/snapshot_iter_8938250.caffemodel
[GIE]  retrieved output tensor 'coverage'
[GIE]  retrieved output tensor 'bboxes'
[GIE]  configuring CUDA engine
[GIE]  building CUDA engine
[GIE]  ../builder/cudnnBuilder2.cpp (689) - Misc Error in buildSingleLayer: 1 (Unable to process layer.)
[GIE]  ../builder/cudnnBuilder2.cpp (689) - Misc Error in buildSingleLayer: 1 (Unable to process layer.)
[GIE]  failed to build CUDA engine
failed to load networks/1280x720/snapshot_iter_8938250.caffemodel
detectNet -- failed to initialize.
detectnet-pipeline:   failed to initialize imageNet
nvbuf_utils: dmabuf_fd 1105 mapped entry NOT found
Segmentation fault (core dumped)

How do I fix this issue?

It looks like you have GPU fallback enabled, but TensorRT is hitting an error while building the network.
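
For reference, here is a minimal sketch of the same configuration written with named enum values instead of a raw static_cast. It assumes the GA TensorRT 5 IBuilder API (nvinfer1::DeviceType::kDLA plus setDLACore); the early-access release may expose the DLA enum differently, so check NvInfer.h for the values available in your build:

// sketch only -- assumes builder and network were created as in the snippet above
builder->setMaxBatchSize(1);                                // small batch while debugging
builder->setMaxWorkspaceSize(16 << 20);
builder->setFp16Mode(true);                                 // DLA needs reduced precision (FP16 or INT8)
builder->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);  // named enum instead of static_cast
builder->setDLACore(0);                                     // pick which DLA core to use
builder->allowGPUFallback(true);                            // run unsupported layers on the GPU

nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);
if( !engine )
	printf("failed to build CUDA engine\n");

Using the named enum avoids accidentally selecting a device type value that doesn't exist in your TensorRT build.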

From the console log, it appears that you are using a fork of jetson-inference. I have been adding DLA support to the dev branch here: https://github.com/dusty-nv/jetson-inference/tree/dev

You may want to try a maxBatchSize of 1, since I notice your model is 1280x720. You will also want to try the trtexec program that comes with the TensorRT samples, just to make sure it isn't a user error.

Note that official DLA support is limited to AlexNet, GoogLeNet, and ResNet in the early-access JetPack. I will have to try running this later as well.
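
If you want to see up front which layers of the DetectNet model the builder would refuse to place on DLA, a loop like the one below can help. This is only a sketch: it assumes the TensorRT 5 IBuilder::canRunOnDLA() query and that it is called after the FP16/DLA settings above have been applied to the builder:

// sketch -- iterate over the parsed network and report layers that must fall back
for( int i = 0; i < network->getNbLayers(); i++ )
{
	nvinfer1::ILayer* layer = network->getLayer(i);

	if( !builder->canRunOnDLA(layer) )
		printf("layer '%s' is not supported on DLA and will need GPU fallback\n", layer->getName());
}

Layers reported here are the ones that allowGPUFallback(true) is meant to cover.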

My model is 1200x1920, and I am also seeing:

../builder/cudnnBuilder2.cpp (728) - Misc Error in buildSingleLayer: 1 (Unable to process layer.)
../builder/cudnnBuilder2.cpp (728) - Misc Error in buildSingleLayer: 1 (Unable to process layer.)

Is this because my input is too large?

I have tried another model with an input size of 640x640, and it works fine.