YOLOv7 model gets stuck in DeepStream with 640×640 input on Orin NX

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Orin NX
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only): 6.0
• TensorRT Version: 8.6.2.3
• NVIDIA GPU Driver Version (valid for GPU only): GPU
• Issue Type (questions, new requirements, bugs): question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or for which sample application, and the function description.)

I use a YOLOv7 engine with DeepStream on an Orin NX board.
I use TensorRT to generate the engine from an ONNX file.
I tested two input sizes, 512×512 and 640×640.
I find that with the 640×640 input my DeepStream pipeline does not work well and the video gets stuck, but with the 512×512 input it works fine.
I used jtop to check the GPU and memory; neither is at full load.
Here is my test pipeline:

gst-launch-1.0 rtspsrc latency=200 location=rtsp://*******:*****@192.168.230.11:554/h265/ch1/main/av_stream drop-on-latency=1 ! rtph265depay ! nvv4l2decoder enable-max-performance=1
! videorate ! "video/x-raw(memory:NVMM), framerate=(fraction)25/1"
! mux.sink_0 nvstreammux name=mux batch_size=1 width=1920 height=1080 batched-push-timeout=400 live-source=1 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-7.0/sources/objectDetector_Yolo/config_infer_primary_TX_Person.txt batch-size=8 unique-id=1 ! nvtracker name=tracker1 ll-config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so tracker-width=640 tracker-height=384 user-meta-pool-size=512
! nvstreamdemux name=demux demux.src_0
! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=RGBA' ! nvdsosd process-mode=2 ! nv3dsink

My YOLOv7 model was exported with this command:

python export.py --weights best.pt --grid --end2end --simplify --device cpu --batch-size 8 --img-size 640 640
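
For reference, the exported ONNX is then built into a TensorRT engine with trtexec, roughly like this (a sketch only: the file names are placeholders and --fp16 is optional, not necessarily the exact command used):

/usr/src/tensorrt/bin/trtexec --onnx=best.onnx --saveEngine=best_640_b8_fp16.engine --fp16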

Can somebody explain this phenomenon, or share any experimental results about running YOLOv7 on Orin NX with a DeepStream pipeline?

This is likely a problem with your export script.

I tested this repository, and it works fine.

python3 export_yoloV7.py -w ../yolov7.pt --dynamic --size 640 640

I recommend using the official YOLOv7.

I have already found the reason: it is because I use process-mode=2 in the nvdsosd plugin.
If I set it to 0 or 1, it works well.
I also found that if I use nvvideoconvert ! 'video/x-raw(memory:NVMM), format=NV12' before nvdsosd to convert the format to NV12, and set process-mode=1 on nvdsosd, the pipeline performs badly and drops frames.
Do you have any suggestions about using the nvdsosd plugin?

You can get this information from gst-inspect-1.0 nvdsosd:

  process-mode        : Rect and text draw process mode, CPU_MODE only support RGBA format
                        flags: readable, writable, changeable only in NULL or READY state
                        Enum "GstNvDsOsdMode" Default: 1, "MODE_GPU"
                           (0): MODE_CPU         - CPU_MODE
                           (1): MODE_GPU         - GPU_MODE
                           (2): MODE_NONE        - Invalid mode. Falls back to GPU

GPU mode should be more efficient, but if you feed it NV12 input, nvdsosd will convert it to RGB internally, which adds some GPU consumption. If the GPU load is high, this may cause frame drops.
It is recommended to use GPU mode and use nvvideoconvert to convert to RGBA format.
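
For example, the tail of the pipeline in your question would then look roughly like this (same elements as before, just with process-mode=1 on nvdsosd):

! nvstreamdemux name=demux demux.src_0 ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=RGBA' ! nvdsosd process-mode=1 ! nv3dsink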

Could you tell me in what situations process-mode=2 is meant to be used?

It is only used for initialization and is not actually used.
