I'm currently working on an object detection problem: detecting garbage bags with a Jetson Nano and a webcam. I previously trained a custom MobileNet-V2 320x320 model and exported it in both SavedModel and TFLite formats. However, I haven't been able to load it with detectNet, even after converting it to ONNX with the tf2onnx repository.
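For reference, this is roughly how I ran the conversion (a sketch of the tf2onnx CLI; the exact paths and opset number here are assumptions, not my literal command):

```shell
# Convert the exported TensorFlow SavedModel to ONNX with tf2onnx.
# The SavedModel path and --opset value are placeholders.
python -m tf2onnx.convert \
    --saved-model /home/colbits/model/saved_model \
    --output /home/colbits/model/onnx_model/model.onnx \
    --opset 13
```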
When I pass the model path to detectNet, I get this error:
[TRT] ModelImporter.cpp:773: While parsing node number 265 [TopK -> "TopK__875:0"]:
[TRT] ModelImporter.cpp:774: --- Begin node ---
[TRT] ModelImporter.cpp:775: input: "GatherND__867:0"
[TRT] ModelImporter.cpp:776: --- End node ---
[TRT] ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:4519 In function importTopK:
Assertion failed: (inputs.at(1).is_weights()) && "This version of TensorRT only supports input K as an initializer."
[TRT] failed to parse ONNX model '/home/colbits/model/onnx_model/model.onnx'
[TRT] device GPU, failed to load /home/colbits/model/onnx_model/model.onnx
[TRT] detectNet -- failed to initialize.
I tried this solution, but running create_onnx.py then fails with a different error:
AttributeError: 'NoneType' object has no attribute 'op'
Can you help me with this?