How can I use a custom YOLOv9 model on JetPack 5.1.1 (DeepStream 6.2) with GStreamer to detect persons?

  1. I tried using WongKinYiu/yolov9/export.py to convert .pt to .onnx with this command:
python export.py --weights best.pt --imgsz 640 --batch 4 --device 0 --include onnx --simplify

And the output is:
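Separately from that log, a quick way to verify the exported model's input/output names and whether its batch dimension is static or dynamic is to read the shapes out of the ONNX file; a minimal sketch, assuming the onnx Python package is available:

python3 - <<'EOF'
# Print every graph input/output with its shape; a fixed leading number
# means a static batch, a named dim (e.g. "batch") means a dynamic batch.
import onnx
m = onnx.load("best.onnx")
for t in list(m.graph.input) + list(m.graph.output):
    dims = [d.dim_param or d.dim_value for d in t.type.tensor_type.shape.dim]
    print(t.name, dims)
EOF

If the leading dimension prints as a fixed number rather than a name, an engine built from this file will only accept that batch size.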

  2. Then I built the .engine file with this command:
    /usr/src/tensorrt/bin/trtexec --onnx=best.onnx --saveEngine=best.engine --verbose
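To check what batch size and shapes a serialized engine actually accepts, it can be reloaded with trtexec, which logs the engine's I/O bindings as it sets up inference; a quick check:

    /usr/src/tensorrt/bin/trtexec --loadEngine=best.engine --verbose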

  3. Built libnvdsinfer_custom_impl_Yolo.so with this command:
    make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
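As an aside: if this is the custom parser from marcoslucianops/DeepStream-Yolo (an assumption, since the repo is not named above), its Makefile needs the CUDA version exported first; on JetPack 5.1.1 that is CUDA 11.4:

    # CUDA_VER is required by the DeepStream-Yolo Makefile (assumption about the repo used)
    export CUDA_VER=11.4
    make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo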

  4. Created label.txt with the classes person and truck.
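For reference, a DeepStream label file is plain text with one class name per line, ordered by class index, so label.txt here would be:

    person
    truck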

  5. Modified the ultralytics config_infer_primary_yoloV8.txt into this file:
    config_pgie_yolo_det.txt (119 Bytes)
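For comparison, a minimal nvinfer config for a custom YOLO parser usually looks like the sketch below. The parser function name and library path follow the DeepStream-Yolo conventions and are assumptions, not the contents of the attached file:

    [property]
    gpu-id=0
    net-scale-factor=0.0039215697906911373
    model-engine-file=best.engine
    labelfile-path=label.txt
    batch-size=1
    network-mode=2
    num-detected-classes=2
    gie-unique-id=1
    # These two keys assume the DeepStream-Yolo custom parser:
    parse-bbox-func-name=NvDsInferParseYolo
    custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so

    [class-attrs-all]
    pre-cluster-threshold=0.25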

  6. Ran the GStreamer pipeline with this command:

gst-launch-1.0 \
    nvstreammux width=1920 height=1080 batch-size=1 live-source=1 name=mux ! \
    nvinfer config-file-path=config_pgie_yolo_det.txt ! \
    nvtracker tracker-width=640 tracker-height=480 gpu-id=0 ll-config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! \
    nvmultistreamtiler rows=2 columns=2 width=1920 height=1080 name=tiler ! \
    nvdsosd ! \
    nvvideoconvert ! \
    autovideosink sync=false \
    v4l2src device=/dev/video0 ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! mux.sink_0

And the output is:

Setting pipeline to PAUSED ...
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
0:00:04.313013120 28993 0xaaaaccc52330 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/home/ubuntu/workspaces/deep_stream_view/DeepSteam-Custom-Yolo-Test/configs/best.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT images          3x640x640       
1   OUTPUT kFLOAT output0         6x8400          

0:00:04.482527328 28993 0xaaaaccc52330 WARN                 nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1841> [UID = 1]: Backend has maxBatchSize 1 whereas 4 has been requested
0:00:04.482579744 28993 0xaaaaccc52330 WARN                 nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2018> [UID = 1]: deserialized backend context :/home/ubuntu/workspaces/deep_stream_view/DeepSteam-Custom-Yolo-Test/configs/best.engine failed to match config params, trying rebuild
0:00:04.503584192 28993 0xaaaaccc52330 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU
WARNING: [TRT]: Using PreviewFeature::kFASTER_DYNAMIC_SHAPES_0805 can help improve performance and resolve potential functional issues.
WARNING: [TRT]: Using PreviewFeature::kFASTER_DYNAMIC_SHAPES_0805 can help improve performance and resolve potential functional issues.

Then no window is displayed, the pipeline does not exit, and there are no further messages.
How can I debug this?
Thanks!
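One generic first step for a pipeline that reaches PAUSED and then hangs is to re-run it with GStreamer's own debug output enabled, and to confirm the camera delivers frames in isolation; a sketch:

    # Re-run the same pipeline with a higher log level (3 = errors, warnings, FIXMEs):
    GST_DEBUG=3 gst-launch-1.0 ...    # same pipeline as in step 6

    # Check that the camera alone produces video:
    gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! autovideosink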

It seems config_pgie_yolo_det.txt is not correct, because it is not an nvinfer configuration file.
From the log, the app failed to load the engine because the engine's batch size (1) is inconsistent with the batch size (4) in the config, so nvinfer tried to rebuild the engine from the ONNX. Please refer to this code for how to generate a batch-size-4 engine.
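The code linked above is not reproduced here; as a sketch of the same idea, assuming export.py supports the yolov5-style --dynamic flag, one way is to export the ONNX with a dynamic batch axis and then give trtexec an optimization profile whose opt/max batch is 4:

    # Re-export with a dynamic batch dimension (--dynamic is assumed to exist,
    # as in the yolov5-style export scripts):
    python export.py --weights best.pt --imgsz 640 --batch 4 --device 0 --include onnx --simplify --dynamic

    # Build an engine whose optimization profile allows batch 4:
    /usr/src/tensorrt/bin/trtexec --onnx=best.onnx \
        --minShapes=images:1x3x640x640 \
        --optShapes=images:4x3x640x640 \
        --maxShapes=images:4x3x640x640 \
        --saveEngine=best.engine

Alternatively, setting batch-size=1 in the nvinfer config would match the engine you already have.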

