DriveWorks tensorrt_optimization tool

Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
Linux
QNX
other

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure its number)
other

SDK Manager Version
1.9.3.10904
other

Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

Hi,

I have a model that I converted to ONNX using ATEN_FALLBACK.
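For reference, an ATen-fallback export typically looks like the minimal sketch below; the model, input shape, opset version, and output path are placeholders, not the actual network from this post.

```python
# Minimal sketch of an ONNX export with ATen fallback.
# The model, input shape, opset version, and output path are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval()  # stand-in model
dummy = torch.randn(1, 3, 224, 224)                          # assumed input shape

torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    opset_version=17,
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
    input_names=["input"],
    output_names=["output"],
)
```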

  1. I compiled it using the DriveWorks tensorrt_optimization tool. The binary gets generated, but with some errors. DRIVE OS 6.0.8 uses TensorRT 8.6.11 per the release notes.

  2. I tried trtexec and saw different errors. TensorRT version: 8.6.1.6-1+cuda12.0.

  3. I used Polygraphy.
    a. polygraphy surgeon sanitize with --fold-constants; many nodes got optimized.
    b. Next I used polygraphy run --onnxrt to create an ONNX Runtime inference session and run the ONNX graph (see the sketch after this list). It throws the same error as the DriveWorks TensorRT tool.
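Conceptually, step 3b boils down to the sketch below: open an ONNX Runtime session on the sanitized graph and run it once with dummy inputs. The file name folded.onnx matches the attachment later in this thread; the input-filling logic is a simplified assumption (Polygraphy handles input generation more carefully).

```python
# Rough hand-rolled equivalent of `polygraphy run folded.onnx --onnxrt`:
# create an ONNX Runtime session and run the graph once with dummy inputs.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("folded.onnx", providers=["CPUExecutionProvider"])

feeds = {}
for inp in sess.get_inputs():
    # Replace symbolic/dynamic dims with 1; dtype handling is simplified and
    # random values may be unsuitable for index-like integer inputs.
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    dtype = np.int32 if "int32" in inp.type else np.float32
    feeds[inp.name] = np.random.rand(*shape).astype(dtype)

outputs = sess.run(None, feeds)
print([o.shape for o in outputs])
```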

Now my question is: since TopK with INT32 inputs has been supported in TensorRT since version 8.5, why is it failing in my case? And why does DriveWorks not throw an error for TopK, but a different error instead?
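To narrow this down, here is a minimal inspection sketch (assuming the attached model is saved as folded.onnx) that lists the TopK nodes and the element types feeding them, to confirm whether the INT32 TopK path is really what the graph exercises:

```python
# Hedged diagnostic: report each TopK node and the dtypes of its inputs.
import onnx
from onnx import shape_inference

model = shape_inference.infer_shapes(onnx.load("folded.onnx"))
graph = model.graph

# Map tensor name -> ONNX element type from value_info, graph I/O, initializers.
elem_type = {}
for vi in list(graph.value_info) + list(graph.input) + list(graph.output):
    elem_type[vi.name] = vi.type.tensor_type.elem_type
for init in graph.initializer:
    elem_type[init.name] = init.data_type

for node in graph.node:
    if node.op_type == "TopK":
        dtypes = [onnx.TensorProto.DataType.Name(elem_type.get(name, 0))
                  for name in node.input]
        print(node.name, list(node.input), dtypes)
```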

Could you please provide the ONNX model, the specific command, and the corresponding output?

Is it possible to run the trtexec command on the devkit?

Dear @VickNV

Is it possible to run the trtexec command on the devkit?

Yes, I ran it using trtexec and now I get a similar error, even though TopK is already present in TensorRT 8.5.
folded (copy).txt (51.2 MB)

Dear @VickNV

I tried to compile the ONNX graph with trtexec first, but I face a strange issue with Reshape. I used polygraphy surgeon sanitize and onnx-simplifier, but they didn't help.
The code works perfectly in PyTorch, but TensorRT compilation fails.

Here is the ONNX graph:
permute_geom_feats_int32_ranks_features_sorts_fixed.txt (51.5 MB)

Is the model generated using the tensorrt_optimization tool working? Could you share the model and the command used to repro the issue?

Dear @SivaRamaKrishnaNV

No, I used trtexec to compile the model. The model doesn't get generated by trtexec due to the Reshape error, but it somehow gets generated by the DW optimization tool.

For trtexec, the command was ./trtexec --onnx= --saveEngine=<path.engine>

Is this your ONNX model?

Yes. I have just renamed it to .txt.

Dear @arkos,
I ran trtexec with the folded.onnx model on DRIVE OS 6.0.8.1 and noticed the messages below in the log. Was this model working before you attempted the ONNX → TRT conversion?

```
[02/27/2024-08:21:00] [W] [TRT] /Reshape_10: IShuffleLayer with zeroIsPlaceHolder=true has reshape dimension at position 2 that might or might not be zero. TensorRT resolves it at runtime, but this may cause excessive memory consumption and is usually a sign of a bug in the network.


Parsing node: /Reshape_10 [Reshape]
[02/27/2024-08:21:00] [V] [TRT] Searching for input: /GatherND_3_output_0
[02/27/2024-08:21:00] [V] [TRT] Searching for input: /Concat_5_output_0
[02/27/2024-08:21:00] [V] [TRT] /Reshape_10 [Reshape] inputs: [/GatherND_3_output_0 -> (-1, 4)[INT32]], [/Concat_5_output_0 -> (4)[INT32]],
[02/27/2024-08:21:00] [V] [TRT] Registering layer: /Reshape_10 for ONNX node: /Reshape_10
[02/27/2024-08:21:00] [V] [TRT] Registering tensor: /Reshape_10_output_0 for ONNX tensor: /Reshape_10_output_0
[02/27/2024-08:21:00] [V] [TRT] /Reshape_10 [Reshape] outputs: [/Reshape_10_output_0 -> (1, 1, -1, 64)[INT32]],



[02/27/2024-08:24:32] [I] Output binding for output with dimensions 1x2x104x104 is created.
[02/27/2024-08:24:32] [I] Starting inference
[02/27/2024-08:24:32] [E] Error[7]: [shapeMachine.cpp::executeContinuation::864] Error Code 7: Internal Error (IShuffleLayer /Reshape_10: reshaping failed for tensor: /GatherND_3_output_0 reshape would change volume 1516 to 24256 Instruction: RESHAPE_ZERO_IS_PLACEHOLDER{379 4} {1 1 379 64}.)
[02/27/2024-08:24:32] [E] Error occurred during inference
&&&& FAILED TensorRT.trtexec [TensorRT v8611] # /usr/src/tensorrt/bin/trtexec --onnx=./folded.onnx --saveEngine=./folded-new.trt --verbose

```
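For what it's worth, the failure message itself narrows things down: /Reshape_10 receives /GatherND_3_output_0 with runtime shape {379 4} (volume 379 × 4 = 1516), while the shape tensor /Concat_5_output_0 requests {1 1 379 64} (volume 24256), which suggests the hard-coded 64 in the target shape does not match the 4-element last dimension the data actually has. Below is a minimal sketch to locate that Reshape and the producer of its shape input in the ONNX graph (node names are taken from the log above; folded.onnx is assumed to be the attached model):

```python
# Hedged diagnostic: find /Reshape_10 and the node producing its shape input.
import onnx

model = onnx.load("folded.onnx")
producers = {out: node for node in model.graph.node for out in node.output}

reshape = next(n for n in model.graph.node
               if n.op_type == "Reshape" and n.name == "/Reshape_10")
data_name, shape_name = reshape.input[0], reshape.input[1]
print("data tensor:", data_name)

shape_producer = producers.get(shape_name)
if shape_producer is not None:
    print("shape tensor:", shape_name, "produced by",
          shape_producer.op_type, shape_producer.name)
    print("shape producer inputs:", list(shape_producer.input))
else:
    print("shape tensor:", shape_name, "comes from an initializer or graph input")
```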

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.