DLA optimization, pure virtual method called error

Hi,

Running the latest TensorRT 5.1.6 on Xavier, optimization for float16 and int8 works well. When I activate DLA using:

setDefaultDeviceType(nvinfer1::DeviceType::kDLA);
setDLACore(0);
allowGPUFallback(true);

an error about a pure virtual method being called occurs with any float16 or int8 configuration and seems to originate from libnvdla_compiler.so. I use an ONNX model parsed with a parser created via the provided createParser factory method:

IParser* createParser(nvinfer1::INetworkDefinition* network, nvinfer1::ILogger* logger)
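
For context, here is a simplified sketch of our build path (gLogger, kModelPath, and the batch/workspace sizes are placeholders, not the exact values from our application):

#include <NvInfer.h>
#include <NvOnnxParser.h>

// gLogger stands in for our nvinfer1::ILogger implementation.
nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
nvinfer1::INetworkDefinition* network = builder->createNetwork();

// Parse the ONNX model into the network definition.
nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, gLogger);
parser->parseFromFile(kModelPath, static_cast<int>(nvinfer1::ILogger::Severity::kWARNING));

builder->setMaxBatchSize(1);
builder->setMaxWorkspaceSize(1 << 28);
builder->setFp16Mode(true);  // the int8 configuration uses setInt8Mode(true) plus a calibrator instead

// Without the next three calls the build succeeds; with them it aborts.
builder->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);
builder->setDLACore(0);
builder->allowGPUFallback(true);

// "pure virtual method called" is raised inside buildCudaEngine (see backtrace below).
nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);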

When DLA is enabled, do I need to override a pure virtual method that is not required when DLA is disabled?

Logs and backtrace:

[2019-09-11 16:44:33.887] [demo_app] [debug] Applying generic optimizations to the graph for inference.
[2019-09-11 16:44:33.887] [demo_app] [debug] Original: 246 layers
[2019-09-11 16:44:33.888] [demo_app] [debug] After dead-layer removal: 246 layers
pure virtual method called
terminate called without an active exception

Thread 6 “kg_demo” received signal SIGABRT, Aborted.
[Switching to Thread 0x7f61e5a470 (LWP 25445)]
__GI_raise (sig=sig@entry=6) at …/sysdeps/unix/sysv/linux/raise.c:51
51 …/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 0x0000007f8c0e94d8 in __GI_raise (sig=sig@entry=6) at …/sysdeps/unix/sysv/linux/raise.c:51
#1 0x0000007f8c0ea8b4 in __GI_abort () at abort.c:79
#2 0x0000007f8c3ad034 in __gnu_cxx::__verbose_terminate_handler() () at /usr/lib/aarch64-linux-gnu/libstdc++.so.6
#3 0x0000007f8c3aac34 in () at /usr/lib/aarch64-linux-gnu/libstdc++.so.6
#4 0x0000007f8c3aac80 in () at /usr/lib/aarch64-linux-gnu/libstdc++.so.6
#5 0x0000007f8c3abb70 in __cxa_deleted_virtual () at /usr/lib/aarch64-linux-gnu/libstdc++.so.6
#6 0x0000007f7f23bd30 in () at /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so
#7 0x0000007f7f2640d8 in () at /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so
#8 0x0000007f82be9914 in nvinfer1::builder::dla::addConvolutionNode(nvdla::INetwork&, nvinfer1::builder::ConvolutionNode&, std::map<std::string, nvdla::ITensor*, std::less<std::string>, std::allocator<std::pair<std::string const, nvdla::ITensor*> > >&) () at /usr/lib/aarch64-linux-gnu/libnvinfer.so.5
#9 0x0000007f82be3124 in nvinfer1::builder::dla::validateGraphNode(std::unique_ptr<nvinfer1::builder::Node, std::default_delete<nvinfer1::builder::Node> > const&) ()
at /usr/lib/aarch64-linux-gnu/libnvinfer.so.5
#10 0x0000007f82b50e78 in nvinfer1::builder::validateForeignGraphNode(std::unique_ptr<nvinfer1::builder::Node, std::default_delete<nvinfer1::builder::Node> > const&, bool, bool) ()
at /usr/lib/aarch64-linux-gnu/libnvinfer.so.5
#11 0x0000007f82b549ac in nvinfer1::builder::createForeignNodes(nvinfer1::builder::Graph&, nvinfer1::builder::ForeignNode* (*)(nvinfer1::Backend, std::string const&), nvinfer1::CudaEngineBuildConfig const&) ()
at /usr/lib/aarch64-linux-gnu/libnvinfer.so.5
#12 0x0000007f82b23ba0 in nvinfer1::builder::applyGenericOptimizations(nvinfer1::builder::Graph&, nvinfer1::CudaEngineBuildConfig const&) () at /usr/lib/aarch64-linux-gnu/libnvinfer.so.5
#13 0x0000007f82b79c34 in nvinfer1::builder::buildEngine(nvinfer1::CudaEngineBuildConfig&, nvinfer1::rt::HardwareContext const&, nvinfer1::Network const&) () at /usr/lib/aarch64-linux-gnu/libnvinfer.so.5
#14 0x0000007f82bd24f0 in nvinfer1::builder::Builder::buildCudaEngine(nvinfer1::INetworkDefinition&) () at /usr/lib/aarch64-linux-gnu/libnvinfer.so.5
#15 0x0000007fb7ea1098 in kogtrack::rt::Network::Network(std::shared_ptr<kogtrack::rt::NetworkBuilder>, std::shared_ptr<kogtrack::rt::NetworkDefinition>) () at /usr/local/lib/libkogtrack.so.1

Hi,

To use DLA, you need to bind the TensorRT engine to DLA in both the build and inference processes.

// Build
builder->setFp16Mode(true);
builder->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);
builder->setDLACore(0);

// Inference
infer->setDLACore(0);
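
For example, when you deserialize a serialized engine, select the DLA core on the runtime before deserialization. A minimal sketch (gLogger, blobData, and blobSize are placeholders):

nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
runtime->setDLACore(0);  // select the DLA core before deserializing the engine
nvinfer1::ICudaEngine* engine = runtime->deserializeCudaEngine(blobData, blobSize, nullptr);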

Thanks.