Integrate YOLOv8 in Sample Object Detector

Please provide the following info (tick the boxes after creating this topic):
Software Version
[*] DRIVE OS 6.0.10.0
DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
[*] Linux
QNX
other

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
[*] DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure its number)
other

SDK Manager Version
2.1.0
other

Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
[*] native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

Issue Description
I am using the yolov8m.pt file, converted it to ONNX on my PC, and then converted it to tensorRT_model.bin using trtexec, but I am seeing only false detections with the sample_object_detector_tracker application.
I verified the .pt and ONNX files and observed that they work properly; only after conversion to tensorRT_model.bin do the false detections occur. The classes used are the same 80 classes declared in the sample_object_detector_tracker application (so no change is required in the for loop, as highlighted in this link - Modification: Sample_Object_Detector_tracker- application), but false detections still happen and no bounding boxes are drawn.

Dear @PA_GN ,
What preprocessing steps are needed before feeding the image into the network? You can determine this from your PyTorch or ONNX sample code, which serves as the reference.
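For anyone following along: the YOLOv8 Python pipeline typically letterboxes the frame to 640x640 (aspect-preserving resize plus padding), converts BGR to RGB, scales pixels to [0, 1], and transposes HWC to CHW. Below is a stdlib-only sketch of the letterbox geometry; the 640 default and rounding behaviour are assumptions based on the public ultralytics code, so verify them against your own export settings.

```python
# Sketch of the usual YOLOv8 letterbox geometry (defaults assumed:
# 640x640 input, aspect-preserving resize, centred padding).

def letterbox_params(src_w, src_h, dst=640):
    """Compute the resize size and padding used by a YOLOv8-style letterbox.

    Returns (new_w, new_h, pad_left, pad_top): the frame is resized to
    (new_w, new_h) keeping aspect ratio, then padded out to dst x dst.
    """
    scale = min(dst / src_w, dst / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_left = (dst - new_w) // 2
    pad_top = (dst - new_h) // 2
    return new_w, new_h, pad_left, pad_top

# Example: a 1920x1080 camera frame maps to a 640x360 image,
# centred vertically with 140 px of padding top and bottom.
print(letterbox_params(1920, 1080))  # (640, 360, 0, 140)
```

If the DriveWorks sample instead stretches the frame to the network resolution (no letterbox) or keeps BGR ordering, the tensor fed to the TensorRT engine will not match what the model was trained on, which is enough to produce only false detections.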

Hi @SivaRamaKrishnaNV,

Have shared details in DM.

@SivaRamaKrishnaNV,

Please share your inputs.

Dear @PA_GN,
The code snippet you shared does not contain any information about the preprocessing steps performed on the input image.

Hi @SivaRamaKrishnaNV,

I have executed the following steps so far:
a) Downloaded the YOLOv8 code from the link below:

b) Using the script from the link below, I verified detection with yolov8m.pt and yolov8n.pt; the default bus.jpg image available in the above repo works.

c) Using the pt_to_onnx.py script (attached for reference), I converted yolov8n.pt to yolov8n.onnx.
pt_to_onnx.txt (227 Bytes)
Note: As I was unable to upload a Python script, I renamed it to .txt format.
d) Detection on the bus.jpg image also works with yolov8n.onnx.
e) I then converted yolov8n.onnx to tensorRT_model.bin using the command below:
/usr/src/tensorrt/bin/trtexec --onnx=yolov8n_github.onnx --saveEngine=tensorRT_model_yolov8n_github.bin --fp16
I did not see any errors during conversion (logs attached for reference).
trtexec_build_logs_yolov8n_onnx_tensort_model_bin.txt (27.2 KB)

Now, if I use this tensorRT_model.bin with sample_object_detector_tracker, the camera stream does not work due to continuous NOTIF_WARN_ICP_FRAME_DROP errors.
Can you please guide me on how to make the YOLOv8 model work with sample_object_detector_tracker? Does anything else need to be taken care of before converting to ONNX or TensorRT?

Dear @PA_GN ,
Can you take a sample frame image from the default input video, feed it as input to the model, and check whether objects get detected in the object detection Python code?

Also, you need to use DriveWorks SDK Reference: TensorRT Optimizer Tool

Hey @SivaRamaKrishnaNV, just to be clear: when exporting PyTorch models to run on the target, are we supposed to use the DriveWorks tensorRT_optimization tool rather than the target’s local trtexec?

Dear @ashwin.nanda,
Yes. You need to use the tensorRT_optimization tool on the target instead of trtexec.
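For reference, a typical invocation on the target looks like the following. This is a sketch from memory, not an authoritative command line: the tool path and flag names can differ between DRIVE OS releases, so confirm them with the tool’s --help output and the DriveWorks documentation first.

```shell
# Hypothetical tensorRT_optimization invocation (verify path and flags
# against your DRIVE OS release before use)
cd /usr/local/driveworks/tools/dnn
./tensorRT_optimization --modelType=onnx \
                        --onnxFile=yolov8n.onnx \
                        --out=tensorRT_model.bin \
                        --half2=1   # FP16 build, analogous to trtexec --fp16
```

The engine produced by this tool embeds DriveWorks-specific metadata, which is why a plain trtexec engine is not interchangeable with it in the samples.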


I tried both the camera stream and the input video feed with a tensorRT_model.bin (from yolov8n.pt) converted using the tensorRT_optimization tool, and I see detections happening, but all detections are false and no bounding boxes are drawn.

Dear @PA_GN,
I asked you to try taking screenshots from the input video and feeding them as input to your PyTorch model (the original Python code). This confirms the model is good enough to detect. If not, please try changing the threshold parameters.

If it works, then the PyTorch → TRT → DW integration can be tested.
To debug this, we need to confirm that the input buffer to your PyTorch model and the one to the DW model are the same. I believe there is a mismatch in preprocessing in most such cases.

Hi @SivaRamaKrishnaNV,

I took screenshots from the input video feed, used these images with the Python-based YOLOv8 code, and the detections happen as expected. I used yolov8s.pt, available at the yolov8 link, and converted it to ONNX.
After converting this ONNX to tensorRT_model.bin, detection prints are observed, but they are all false and no bounding boxes are drawn.
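A note for readers hitting the same symptom: one common cause of all-false detections with YOLOv8 is the output layout. The standard ultralytics export emits a (1, 84, 8400) tensor (4 box coordinates plus 80 class scores per anchor, with no separate objectness term), while older YOLO-style samples often expect a different layout that includes objectness. A stdlib-only sketch of decoding the YOLOv8 layout; the channel-major flattening is an assumption based on the standard export, so check it against your engine's actual output bindings.

```python
def decode_yolov8(output, num_classes=80, conf_thresh=0.25):
    """Decode a flattened YOLOv8 head output.

    `output` is a flat list of length (4 + num_classes) * num_anchors,
    laid out channel-major: all cx values, then all cy, w, h, then one
    plane of scores per class. Returns (cx, cy, w, h, class_id, score)
    tuples for anchors whose best class score clears the threshold.
    """
    rows = 4 + num_classes
    n = len(output) // rows          # number of anchors
    detections = []
    for i in range(n):
        # Best class score for this anchor; note there is no objectness
        # term to multiply in, unlike YOLOv3/v5-style heads.
        scores = [output[(4 + c) * n + i] for c in range(num_classes)]
        best = max(range(num_classes), key=lambda c: scores[c])
        if scores[best] >= conf_thresh:
            cx, cy = output[0 * n + i], output[1 * n + i]
            w, h = output[2 * n + i], output[3 * n + i]
            detections.append((cx, cy, w, h, best, scores[best]))
    return detections
```

If the sample's C++ postprocessing indexes the tensor as if an objectness score were present, every decoded box and confidence will be garbage, which matches the "prints appear but all false, no boxes drawn" behaviour.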

Thanks for sharing the model via PM. I quickly replaced the original model with yours in the object detection sample and noticed similar behaviour. Also, I don’t see much of the preprocessing from the Python script happening in the sample.
Let me dig into it and update you.

Hi @SivaRamaKrishnaNV,

Thanks for the update.
I will wait for further updates from your end.