Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.10.0
[1] DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other
Target Operating System
[1] Linux
QNX
other
Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
[1] DRIVE AGX Orin Developer Kit (not sure of its part number)
other
SDK Manager Version
2.1.0
[1] other
Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other
Issue Description
Hi team, I want to modify the sample DNN plugin for my own MNIST model, which was trained in PyTorch, and then convert it to a TensorRT model using the TensorRT optimization tool.
But I came across the following errors.
Kindly take a look at my errors and help me accordingly!
A successful model conversion looks like this. Check the input and output shapes from my ONNX model (they are static; a negative dimension would imply the shapes are dynamic).
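To make the static-vs-dynamic distinction above concrete: in ONNX/TensorRT shape dumps, a dynamic dimension is typically reported as -1 (or a symbolic name). A minimal, self-contained sketch of that check (the `is_static` helper and the example shapes are my own illustration, not output from the tool):

```python
def is_static(dims):
    """Return True if every dimension is a concrete positive size.

    A dynamic dimension shows up as -1 (or a non-integer symbol);
    any such entry means the tensor shape is dynamic.
    """
    return all(isinstance(d, int) and d > 0 for d in dims)

# Example shapes for an MNIST-style model (illustrative values):
static_input = [1, 1, 28, 28]    # fixed batch of 1 -> static
dynamic_input = [-1, 1, 28, 28]  # -1 batch dim -> dynamic

print(is_static(static_input))   # True
print(is_static(dynamic_input))  # False
```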
Dear @ashwin.nanda
I had a .pt file of the MNIST model, which I converted to ONNX. Using trtexec, I converted this ONNX file to a .engine file. Can I use this .engine file for inference?
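For reference, the ONNX-to-engine step described here is typically a trtexec invocation along these lines (file names are placeholders; exact flags can vary between TensorRT versions, so check `trtexec --help`):

```shell
# Convert an ONNX model to a serialized TensorRT engine.
# mnist.onnx / mnist.engine are placeholder file names.
trtexec --onnx=mnist.onnx --saveEngine=mnist.engine

# Optionally enable FP16 if the target GPU supports it:
# trtexec --onnx=mnist.onnx --saveEngine=mnist.engine --fp16
```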
Yes, you can, but with the TensorRT APIs in C++/Python, not in DriveWorks DNN, unless you want to bridge some custom TensorRT plugins. To use the DriveWorks DNN APIs, you need to run the TensorRT optimization tool on the ONNX file as I showed above.
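For completeness, an invocation of the DriveWorks TensorRT optimization tool usually looks something like the following (the install path, flag names, and file names here are assumptions that should be checked against your DRIVE OS release):

```shell
# Generate a DriveWorks-loadable optimized model from ONNX.
# Tool path and flags may differ per release -- verify with --help.
/usr/local/driveworks/tools/dnn/tensorRT_optimization \
    --modelType=onnx \
    --onnxFile=mnist.onnx \
    --out=mnist.bin
```

The resulting .bin can then be loaded through the DriveWorks DNN APIs, whereas a trtexec-built .engine file cannot.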
Dear @ashwin.nanda Thank you so much for this quick reply.
I want to try this .engine file which I created with trtexec. Could you please suggest a sample where I can load this .engine file and run inference?
Dear @akshay.tupkar, I see that you have a topic open where you are already testing the sample. If that sample is from the host's local TensorRT installation, then you can load the engine and test it. You might also need to look at the processing steps before and after inference.
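If it helps, deserializing a trtexec-built engine with the TensorRT Python API generally follows this pattern (a sketch assuming TensorRT 8.x; the engine file name is a placeholder, and device buffer allocation plus the pre/post-processing mentioned above are elided):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine file produced by trtexec ("mnist.engine" is a
# placeholder path).
with open("mnist.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# Create an execution context; input/output device buffers must then be
# allocated (e.g. with cuda-python or pycuda) and bound before executing
# inference on the context.
context = engine.create_execution_context()
```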