Error converting onnx to TensorRT engine on Xavier

I have an SSD MobileNet V2 FPNLite 640x640 model, which I successfully converted to ONNX using the following command:
python -m tf2onnx.convert --opset 11 --fold_const --inputs input_tensor:0[1,-1,-1,3] --saved-model C:\Users\RT\Desktop\inference_graph\saved_model --output C:\Users\RT\Desktop\inference_graph\saved_model\model1.onnx

Now, on an NVIDIA Xavier with TensorRT 8.1, I'm trying to convert the ONNX model to a TensorRT engine using the command below:
trtexec --onnx=/home/RT/Desktop/model111.onnx --saveEngine=/home/RT/Desktop/model111.trt

I'm getting the following error:
[12/09/2021-16:53:41] [E] [TRT] ModelImporter.cpp:720: While parsing node number 3529 [TopK -> "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2:0"]:
[12/09/2021-16:53:41] [E] [TRT] ModelImporter.cpp:721: --- Begin node ---
[12/09/2021-16:53:41] [E] [TRT] ModelImporter.cpp:722: input: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/Concatenate/concat_1:0"
input: "Unsqueeze__4691:0"
output: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2:0"
output: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2:1"
name: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2"
op_type: "TopK"
attribute {
  name: "sorted"
  i: 1
  type: INT
}

[12/09/2021-16:53:41] [E] [TRT] ModelImporter.cpp:723: --- End node ---
[12/09/2021-16:53:41] [E] [TRT] ModelImporter.cpp:726: ERROR: builtin_op_importers.cpp:4292 In function importTopK:
[8] Assertion failed: ( && "This version of TensorRT only supports input K as an initializer."
[12/09/2021-16:53:41] [E] Failed to parse onnx file
[12/09/2021-16:53:41] [I] Finish parsing network model
[12/09/2021-16:53:41] [E] Parsing model failed
[12/09/2021-16:53:41] [E] Engine creation failed

I'd appreciate any help.
model111.onnx (12.6 MB)

I moved this topic to the TensorRT sub-category to help this question get more attention.

Could you please share the ONNX model and the conversion script, if not already shared, so that we can assist you better?
Meanwhile, you can try a few things:

  1. Validate your model with the snippet below:

import onnx
filename = "yourONNXmodel"  # path to your .onnx file
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command again. If you are still facing the issue, please share the trtexec --verbose log for further debugging.