Please provide the following info (tick the boxes after creating this topic):
Software Version
[*] DRIVE OS 6.0.10.0
DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other
Target Operating System
Linux
QNX
other
Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
[*] DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure of its number)
other
SDK Manager Version
2.1.0
other
Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
[*] native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other
Issue Description
We have retrained our YOLO model on three new classes and removed the existing COCO classes, as they are no longer relevant for our application. After converting the model to TensorRT format and running the sample_object_detector_tracker app, it is not able to interpret the objects correctly. Specifically, when a trained object is present in the video, the interpreted output box.class is a non-zero value, while for the remaining frames it is always zero. So we assume the object is being detected, but there is an issue with the interpretation.
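To make the symptom clearer, this is roughly how we understand the interpretation step for an anchor-based YOLO head (a simplified standalone sketch, not the actual sample_object_detector_tracker code; DetectedBox, kNumClasses and interpretOutput are placeholder names of our own):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Simplified sketch of a YOLO-style output interpretation step.
// Each detection row is assumed to be laid out as:
//   [x, y, w, h, objectness, classScore0 .. classScoreN-1]
// kNumClasses must match the retrained network (3 in our case, not 80 COCO classes).
static constexpr uint32_t kNumClasses  = 3;
static constexpr uint32_t kRowStride   = 5 + kNumClasses; // values per detection
static constexpr float    kScoreThresh = 0.25f;

struct DetectedBox
{
    float x, y, w, h;
    float confidence;
    uint32_t classIndex; // this is the "box.class" value we print
};

std::vector<DetectedBox> interpretOutput(const float* out, uint32_t numDetections)
{
    std::vector<DetectedBox> boxes;
    for (uint32_t i = 0; i < numDetections; ++i)
    {
        const float* row  = out + i * kRowStride;
        float objectness  = row[4];

        // Pick the best class score for this detection.
        uint32_t bestClass = 0;
        float bestScore    = row[5];
        for (uint32_t c = 1; c < kNumClasses; ++c)
        {
            if (row[5 + c] > bestScore)
            {
                bestScore = row[5 + c];
                bestClass = c;
            }
        }

        float confidence = objectness * bestScore;
        if (confidence < kScoreThresh)
            continue;

        boxes.push_back({row[0], row[1], row[2], row[3], confidence, bestClass});
        printf("box.class = %u, confidence = %f\n", bestClass, confidence);
    }
    return boxes;
}

int main()
{
    // One fake detection row for a 3-class model: x, y, w, h, objectness, 3 class scores.
    float fakeOutput[kRowStride] = {0.5f, 0.5f, 0.2f, 0.2f, 0.9f, 0.1f, 0.8f, 0.1f};
    interpretOutput(fakeOutput, 1);
    return 0;
}
```

If anything equivalent to kNumClasses or kRowStride in the actual interpretation code still assumed the 80 COCO classes, the class scores would be read from the wrong offsets, which could explain the behaviour we see.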
We have added printf statements to the code in question (see the attachment), and the changes made in the code for the new classes are also included. Since we are not able to attach more than one image, we are adding a zip covering all three points mentioned above:
Sample_object_detector_tracker_query.zip (438.8 KB)
So we wanted to understand whether we have to make any additional changes in the application code to add our classes so that the detections are interpreted correctly.
Note: we have checked the ONNX conversion and it is working fine, and we are reusing the same TensorRT.bin.json since the preprocessing parameters are unchanged.
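If it helps, we can also dump the output binding dimensions of the generated engine with a small standalone check like the one below (plain TensorRT C++ API, assuming the optimized model file is a standard serialized TensorRT engine; otherwise the same information can be obtained from an engine built directly from the ONNX with trtexec). For our 3-class model we would expect the class-score dimension of the output to shrink accordingly (e.g. 3 + 5 instead of 80 + 5 per anchor):

```cpp
#include <NvInfer.h>

#include <fstream>
#include <iostream>
#include <vector>

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
};

int main(int argc, char** argv)
{
    if (argc < 2)
    {
        std::cerr << "usage: " << argv[0] << " <serialized_engine_file>" << std::endl;
        return 1;
    }

    // Read the serialized engine from disk.
    std::ifstream file(argv[1], std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    Logger logger;
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size());
    if (!engine)
    {
        std::cerr << "failed to deserialize engine" << std::endl;
        return 1;
    }

    // Print every binding with its dimensions (classic binding API used for brevity).
    for (int i = 0; i < engine->getNbBindings(); ++i)
    {
        nvinfer1::Dims dims = engine->getBindingDimensions(i);
        std::cout << (engine->bindingIsInput(i) ? "input  " : "output ")
                  << engine->getBindingName(i) << ": ";
        for (int j = 0; j < dims.nbDims; ++j)
            std::cout << dims.d[j] << (j + 1 < dims.nbDims ? "x" : "\n");
    }

    delete engine;
    delete runtime;
    return 0;
}
```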