I’m facing a problem using a YOLOv8 model that I trained myself with DeepStream.
I have a .pt model that I converted to .onnx, and I noticed the following behavior:
When I use only one camera, the model works normally and detects correctly.
When I increase the number of cameras in the pipeline, the model stops producing detections.
If I set batch-size=1 in nvstreammux, the model works for all cameras, but performance is very poor.
When I increase the nvstreammux batch-size (for example, to match the number of cameras), processing is fast, but nothing is detected in the videos.
On the other hand, when I do the same test with a pre-trained person detection model, everything works perfectly, even with multiple cameras and a high batch-size.
Therefore, I suspect the problem lies in how the .onnx was generated from my trained YOLOv8 model.
Has anyone experienced this, or can anyone tell me whether there is a specific setting in the .onnx export or in the pgie config that could cause this problem?
I can share the .pt, .onnx and the configs, if it helps in the analysis.
Apparently this command from the repo worked for multiple cameras:
command:
python3 export_yoloV8.py -w /home/verissimo/DS7.1/ds_analytics/models/yolov8/yolo_best.pt --simplify --dynamic
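After re-exporting with --dynamic, the pgie config also has to match, and any stale TensorRT engine built from the static ONNX must be removed so it gets regenerated. A minimal sketch of the relevant nvinfer [property] keys, assuming 4 cameras (file names here are examples, not from the original post):

```
[property]
onnx-file=yolo_best.onnx
# If this engine was built from the old static-batch ONNX, delete it so
# nvinfer rebuilds it with the new batch size.
model-engine-file=yolo_best.onnx_b4_gpu0_fp16.engine
# Must be >= the nvstreammux batch-size you intend to use.
batch-size=4
```

The usual symptom of a mismatch is exactly what was described above: a static-batch engine silently drops or garbles frames beyond its exported batch size instead of erroring out.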