Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other
Target Operating System
Linux
QNX
other
Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure its number)
other
SDK Manager Version
1.9.1.10844
other
Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other
After I converted resnet18.onnx to a TensorRT engine targeting the DLA, I wanted to instantiate 3 objects that each do something using the converted engine file. But the moment I instantiate the third object, I get the error "[trt] 1: [nvdlaUtils.cpp::deserialize :: 154] Error code 1: DLA (NvMediaDlaInit : Init failed.)". I don't know what's going on, please help.
I use C++ code for this experiment. My code is something like:
Model model1("test.engine"); // init model1
Model model2("test.engine"); // init model2
Model model3("test.engine"); // init model3 fails with the error described above
The strange thing is that when I call model1.~Model() after Model model1(…), Model model3(…) can be initialized successfully:
Model model1("test.engine"); // init model1
model1.~Model();             // explicitly destroy model1
Model model2("test.engine"); // init model2
Model model3("test.engine"); // init model3 now succeeds
Dear @liuhaomin1,
It looks like you are generating the TRT models for the same DLA. Could you share your repro C++ code? How about generating the 3rd model for a different DLA and testing? Is it OK?
Does "different DLA" mean that when converting an ONNX model to a TRT engine, the --useDLACore={num} argument should be set to a different value for each model? And how do I find out how many DLA cores I have on the Tegra Orin device?
Dear @liuhaomin1,
It has two DLAs. So you can generate two DLA models, one for DLA 0 and one for DLA 1, and check running them in parallel. Running two models in parallel on the same DLA may not be possible.
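As a side note on the question about the number of DLA cores: a minimal sketch of querying it programmatically via the TensorRT C++ API, assuming TensorRT headers and libraries are available on the target (the Logger stub here is a hypothetical minimal implementation, not part of TensorRT):

```cpp
#include <iostream>
#include "NvInfer.h"

// Minimal logger stub required to create a TensorRT builder.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
};

int main() {
    Logger logger;
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(logger);
    // getNbDLACores() reports how many DLA engines the device exposes
    // (2 on DRIVE AGX Orin).
    std::cout << "DLA cores: " << builder->getNbDLACores() << std::endl;
    delete builder;
    return 0;
}
```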
You can quickly verify inferencing with two DLA in parallel using trtexec and let us know if you see any issues.
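A sketch of what that verification could look like (the engine file names are illustrative; trtexec typically ships with TensorRT under /usr/src/tensorrt/bin on the target):

```
# Build one engine per DLA core from the same ONNX model
trtexec --onnx=test.onnx --useDLACore=0 --allowGPUFallback --saveEngine=test_dla0.engine
trtexec --onnx=test.onnx --useDLACore=1 --allowGPUFallback --saveEngine=test_dla1.engine

# Run inference on both DLAs in parallel
trtexec --loadEngine=test_dla0.engine --useDLACore=0 &
trtexec --loadEngine=test_dla1.engine --useDLACore=1 &
wait
```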
OK, could you please give me an example of how to run 2 models in parallel using trtexec? For example, if I have a model test.onnx, should the command be like this:
There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
Dear @liuhaomin1,
Does this mean that when you use --useDLACore=0 it works, and you see the issue with --useDLACore=1? I notice from TensorRT model use too much memory on DriveOrin - #6 by liuhaomin1 that you seem to have the same issue with other models as well.
Can you restart the DRIVE AGX Orin Devkit and check again?