TensorRT detection issue

I developed YOLOv8 object detection code and converted the trained model to a TensorRT engine file (best.trt). When I run detection with the engine I get no detections at all, but the original YOLOv8 model works fine. What could be the issue?

Hi,

Do you feed the same input source to the model?
How do you test it? Do you use the Ultralytics source?

Thanks.

I feed the input from a CSI camera connected to the Jetson Orin Nano; it can also be fed an image from the system.

What’s the command you used for export and inference?

1. From .pt file to .onnx file:
yolo export model=best1.pt format=onnx opset=12 imgsz=640 dynamic=false

2. From .onnx to .trt file:
/usr/src/tensorrt/bin/trtexec --onnx=best.onnx --saveEngine=best1.trt --fp16=false --int8=false

I would like suggestions on how to move forward and resolve this issue.

Hi,

After you convert the model into TensorRT, how do you feed the CSI camera input to the engine?
Usually, this issue is caused by different preprocessing (e.g., RGB vs. BGR, or NCHW vs. NHWC) when switching to TensorRT.
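For illustration, the preprocessing expected by the default Ultralytics YOLOv8 export looks roughly like the sketch below (the 640x640 size, RGB/NCHW layout, and [0, 1] scaling match the default export but should be verified against your model; Ultralytics also letterboxes rather than plain-resizing):

import cv2
import numpy as np

def preprocess(frame_bgr):
    # The default YOLOv8 ONNX/TensorRT export expects RGB, NCHW,
    # float32 in [0, 1]. Feeding BGR or NHWC data instead is a common
    # cause of zero detections after switching to TensorRT.
    img = cv2.resize(frame_bgr, (640, 640))      # plain resize; Ultralytics letterboxes
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # OpenCV gives BGR, the model wants RGB
    img = img.astype(np.float32) / 255.0         # uint8 -> float32 in [0, 1]
    img = np.transpose(img, (2, 0, 1))           # HWC -> CHW
    return np.expand_dims(img, 0)                # add batch dim -> 1x3x640x640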

Could you share more info about how you feed the input into the TensorRT model?
Thanks.

The output from the CSI camera is in NV12 format. I used nvvidconv and videoconvert in a GStreamer pipeline to convert the NV12 frames to BGRx and then to BGR.
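For reference, a pipeline along those lines usually looks like the sketch below (the sensor-id, resolution, and framerate here are assumptions):

import cv2

# NV12 from the CSI sensor -> BGRx (nvvidconv) -> BGR (videoconvert),
# matching the conversion chain described above.
pipeline = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1, format=NV12 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! "
    "appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()  # frame is BGR, HWC, uint8 if the pipeline works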

Hi,
Can I get some help regarding this topic?

Hi,

It’s recommended to dump the BGR input to check whether there is any issue with it.
Also, could you test your ONNX model with ONNX Runtime on the exact same input, to verify the model itself first?
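For instance, a minimal version of that check could look like the sketch below (the input name, the 640x640 size, and the test image path are assumptions; confirm them against your model):

import cv2
import numpy as np
import onnxruntime as ort

frame = cv2.imread("test.jpg")          # use the exact BGR frame you feed TensorRT
cv2.imwrite("dump_bgr.png", frame)      # dump the input for visual inspection

sess = ort.InferenceSession("best.onnx")
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)              # e.g. images, [1, 3, 640, 640]

# Same preprocessing as above: resize, BGR -> RGB, scale, HWC -> NCHW.
img = cv2.cvtColor(cv2.resize(frame, (640, 640)), cv2.COLOR_BGR2RGB)
img = np.transpose(img.astype(np.float32) / 255.0, (2, 0, 1))[None]
outputs = sess.run(None, {inp.name: img})
print(outputs[0].shape)                 # raw predictions; compare with the TensorRT output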

You can find an example in the below link:

Also, just to confirm: you are using a Nano with JetPack 4, is that correct?
Thanks.

No, I use a Jetson Orin Nano with JetPack 6.

Can you provide the built-in commands for converting a .pt file to .onnx and then to an .engine file? I have already done this and just want to check whether there is any difference in the commands.

Hi,

Please use trtexec to convert an ONNX model into TensorRT.

There are several options for saving a PyTorch model in ONNX format.
In general, you can use the built-in function below:
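As a sketch (the best1.pt path is an assumption): the generic PyTorch built-in is torch.onnx.export, and for YOLOv8 the Ultralytics Python API below wraps it, equivalent to the yolo export CLI command used earlier.

from ultralytics import YOLO

# Equivalent to: yolo export model=best1.pt format=onnx opset=12 imgsz=640 dynamic=false
# (Ultralytics calls PyTorch's built-in torch.onnx.export under the hood.)
model = YOLO("best1.pt")  # path is an assumption
model.export(format="onnx", opset=12, imgsz=640, dynamic=False)

# Then build the TensorRT engine with trtexec, as recommended above:
#   /usr/src/tensorrt/bin/trtexec --onnx=best1.onnx --saveEngine=best1.engine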

Thanks.
