Actually, the discrepancy between trtexec's output and what Netron/ONNX show starts one step earlier in the graph:
In the Flatten op that feeds the Gather, trtexec reports the same shape for its input and output tensors:
[06/15/2022-23:45:05] [V] [TRT] Flatten_1172 [Flatten] inputs: [2337 → (1, 81840)[FLOAT]],
[06/15/2022-23:45:05] [V] [TRT] Registering tensor: 2453 for ONNX tensor: 2453
[06/15/2022-23:45:05] [V] [TRT] Flatten_1172 [Flatten] outputs: [2453 → (1, 81840)[FLOAT]],
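One detail worth noting about the numbers in the later Reshape error: the failing volume, 343,728,000, is exactly the Flatten output length (81840) times the Reshape target (4200). That is consistent with the Gather producing an (81840, 4200)-sized tensor rather than 4200 elements. A quick arithmetic check (plain Python, just verifying the figures from the logs):

```python
from math import prod

flatten_len = 81840          # volume of Flatten_1172's output (1, 81840)
target_shape = (1, 4200)     # shape Reshape_1179 tries to produce
bad_volume = 343_728_000     # volume reported in the TensorRT error

# The reported volume is exactly flatten_len * 4200, consistent with the
# Gather emitting an (81840, 4200)-sized tensor instead of 4200 elements.
assert flatten_len * 4200 == bad_volume

# A reshape is only legal when input and output volumes match:
assert bad_volume != prod(target_shape)
```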
The trtexec --verbose log was attached in the original post.
The model is just torchvision's Faster R-CNN ResNet-50, exported to ONNX using torch.onnx.export. Shall I upload the ONNX file here?
I will attempt to use onnx-tensorrt, but note that trtexec is listed in your developer and quick-start guides as the method for ONNX-to-TensorRT conversion. Please update the docs if it is truly deprecated.
Parsing model
[2022-06-16 22:08:57 WARNING] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[2022-06-16 22:08:57 WARNING] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
(the warning above was repeated 14 more times)
[2022-06-16 22:08:57 ERROR] [graphShapeAnalyzer.cpp::analyzeShapes::1285] Error Code 4: Miscellaneous (IShuffleLayer Reshape_1179: reshape changes volume. Reshaping [343728000] to [1,4200].)
While parsing node number 357 [Reshape → "2462"]:
ERROR: ModelImporter.cpp:179 In function parseGraph:
[6] Invalid Node - Reshape_1179
[graphShapeAnalyzer.cpp::analyzeShapes::1285] Error Code 4: Miscellaneous (IShuffleLayer Reshape_1179: reshape changes volume. Reshaping [343728000] to [1,4200].)
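The parser rejects the node for the reason any reshape validator would: input and output volumes must match (with at most one output dimension inferred). A minimal sketch of that check in plain Python — this is illustrative only, not TensorRT's actual graphShapeAnalyzer code:

```python
from math import prod

def reshape_is_valid(in_shape, out_shape):
    """Illustrative volume check for a reshape, similar in spirit to what
    TensorRT's shape analyzer enforces (not its actual implementation).

    At most one output dimension may be -1 (inferred); otherwise the
    total element counts of input and output must be equal.
    """
    in_vol = prod(in_shape)
    if out_shape.count(-1) > 1:
        return False  # at most one dimension can be inferred
    if -1 in out_shape:
        known = prod(d for d in out_shape if d != -1)
        return known != 0 and in_vol % known == 0
    return in_vol == prod(out_shape)

print(reshape_is_valid((343728000,), (1, 4200)))  # False: volume changes, as in the error
print(reshape_is_valid((4200,), (1, 4200)))       # True: volumes match
```

Applied to the logged shapes, reshaping 343,728,000 elements to [1, 4200] fails this check, which matches the Error Code 4 message above.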
@spolisetty have you had a chance to look at this?
For wider context: we need an open-source object detection model (one we can modify) that converts to a TensorRT engine, so we can run it in DeepStream on a Xavier NX. We have tried many models and conversion tools without success.
We would appreciate any advice on this. If anyone knows of a decent PyTorch detection model that reliably converts to TRT without error (on L4T), please let me know.