Wanted to understand image input for Sample_ObjectDetectorTracker sample application

Please provide the following info (tick the boxes after creating this topic):
Software Version
[*] DRIVE OS 6.0.10.0
DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
[*] Linux
QNX
other

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
[*] DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure its number)
other

SDK Manager Version
2.1.0
other

Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
[*] native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

Issue Description
Based on our understanding, Sample_ObjectDetectorTracker uses RGBA images as input for the YOLO model used inside the application. However, when I feed the same RGBA frames to my YOLOv5 model, I do not get the same detection results. So I wanted to understand: in Sample_ObjectDetectorTracker, are any operations (e.g. noise removal, filtering) performed on the RGBA images before they are sent to object detection?

Dear @hardik.jalela,
The preprocessing operations are taken care of by the data conditioner module API (DriveWorks SDK Reference: DataConditioner Interface). The output of the data conditioner is fed as input to the DNN. Please check the tensorRT_model.bin.json file for the preprocessing parameters, and verify that the same preprocessing steps are present in your YOLOv5 pipeline.
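For reference, the preprocessing described by those JSON parameters is typically per-channel mean subtraction, scaling, and conversion from interleaved RGBA into a planar float tensor, with no denoising or filtering. Below is a minimal CPU sketch of those steps, illustrative only and not the DriveWorks implementation; the function name prepareData and the meanValue/scaleCoefficient values are placeholders, so substitute the actual values from tensorRT_model.bin.json.

```cpp
// Minimal CPU sketch of data-conditioner-style preprocessing:
// per-channel mean subtraction, scaling, and interleaved RGBA (uint8)
// to planar float (RGB) conversion. meanValue and scaleCoefficient are
// placeholders -- use the values found in tensorRT_model.bin.json.
#include <cstdint>
#include <vector>

std::vector<float> prepareData(const uint8_t* rgba, int width, int height)
{
    const float meanValue[3]     = {0.0f, 0.0f, 0.0f}; // placeholder, see JSON
    const float scaleCoefficient = 1.0f / 255.0f;      // placeholder, see JSON

    std::vector<float> planar(static_cast<size_t>(3) * width * height);
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            const uint8_t* px = &rgba[4 * (y * width + x)]; // alpha channel is dropped
            for (int c = 0; c < 3; ++c)
            {
                planar[c * width * height + y * width + x] =
                    (static_cast<float>(px[c]) - meanValue[c]) * scaleCoefficient;
            }
        }
    }
    return planar;
}
```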


Hello @SivaRamaKrishnaNV, thanks for the quick response. Two follow-up questions:
1. I am using the frames obtained from m_dnnInputDevice, but I see a lot of noise in them. I checked the mentioned JSON file, but I doubt those parameters are used for noise removal. How are these frames passed to the neural network (YOLO model), and can you point me to the exact function from which I can access the frames without noise?
2. For my reference, which YOLO version is used in Sample_ObjectDetectorTracker?

The DNN module (DriveWorks SDK Reference: DNN Interface) takes the input buffer from the data conditioner and produces an output buffer, which then needs further post-processing.
The sample uses a YOLOv3 model.
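To illustrate what "further post-processing" typically means for a detector like this, here is a minimal C++ sketch of confidence thresholding plus greedy non-maximum suppression applied to decoded detections. The Detection struct and the threshold values are assumptions for illustration only; the sample's own output interpreter differs in detail.

```cpp
// Illustrative post-processing sketch: threshold raw detections and apply
// non-maximum suppression. Detection layout and thresholds are assumptions,
// not the sample's actual output format.
#include <algorithm>
#include <vector>

struct Detection
{
    float x, y, w, h;  // box in image coordinates
    float confidence;  // objectness * class score
};

static float iou(const Detection& a, const Detection& b)
{
    const float x1 = std::max(a.x, b.x);
    const float y1 = std::max(a.y, b.y);
    const float x2 = std::min(a.x + a.w, b.x + b.w);
    const float y2 = std::min(a.y + a.h, b.y + b.h);
    const float inter = std::max(0.0f, x2 - x1) * std::max(0.0f, y2 - y1);
    const float uni   = a.w * a.h + b.w * b.h - inter;
    return uni > 0.0f ? inter / uni : 0.0f;
}

std::vector<Detection> postProcess(std::vector<Detection> dets,
                                   float confThreshold = 0.5f,
                                   float iouThreshold  = 0.45f)
{
    // Drop low-confidence boxes.
    dets.erase(std::remove_if(dets.begin(), dets.end(),
                              [&](const Detection& d) { return d.confidence < confThreshold; }),
               dets.end());

    // Greedy NMS, highest confidence first.
    std::sort(dets.begin(), dets.end(),
              [](const Detection& a, const Detection& b) { return a.confidence > b.confidence; });

    std::vector<Detection> kept;
    for (const Detection& d : dets)
    {
        bool suppressed = false;
        for (const Detection& k : kept)
        {
            if (iou(d, k) > iouThreshold)
            {
                suppressed = true;
                break;
            }
        }
        if (!suppressed)
            kept.push_back(d);
    }
    return kept;
}
```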