Driveworks_tensorrt_optimization tool

Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
Linux
QNX
other

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure its number)
other

SDK Manager Version
1.9.3.10904
other

Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

Hi,

I have a model that I converted to ONNX using ATEN_FALLBACK.

  1. I compiled it using the DriveWorks tensorrt_optimization tool. The binary gets generated, but with some errors. DRIVE OS 6.0.8 uses TensorRT 8.6.11 as per the release notes.

  2. I tried trtexec and saw different errors. TensorRT version: 8.6.1.6-1+cuda12.0.

  3. I used polygraphy.
    a. polygraphy surgeon sanitize with --fold-constants, and many nodes got optimized.
    b. Next, I used polygraphy run --onnxrt to create an ONNX Runtime inference session and run the ONNX graph. It throws the same error as the DriveWorks TensorRT tool.

Now my question is: since TopK with INT32 has been available in TensorRT since version 8.5, why is it failing in my case? And why does DriveWorks not throw an error for TopK, but gives a different error instead?

Could you please provide the ONNX model, specific command and the corresponding output?

Is it possible to run the trtexec command on the devkit?

Dear @VickNV

Is it possible to run the trtexec command on the devkit?

Yes, I ran it using trtexec and now I get a similar error, even though TopK has been present in TensorRT since 8.5.
folded (copy).txt (51.2 MB)

Dear @VickNV

I tried to compile the ONNX graph with trtexec first, but I face a strange issue with Reshape. Using polygraphy surgeon sanitize and onnx-simplifier didn't help.
The code works perfectly in PyTorch, but TensorRT compilation fails.

Here is the onnx graph
permute_geom_feats_int32_ranks_features_sorts_fixed.txt (51.5 MB)

Is the model generated using the tensorrt_optimization tool working? Could you share the model and the command used to reproduce the issue?

Dear @SivaRamaKrishnaNV

No, I used trtexec to compile the model. The model doesn't get generated by trtexec due to the Reshape error, but it somehow gets generated by the dw_optimization_tool.

For trtexec, the command was: ./trtexec --onnx= --saveEngine=<path.engine>

Is this your ONNX model?