Problems with Gst-Launch with custom YOLO nvinfer

Hello,

I am trying to launch the custom YOLO implementation via gst-launch-1.0, so I can develop a PyQt GUI that receives frames from the pipeline. Before beginning on this quest, I need a good pipeline to start out with.

The gstreamer launch is:

gst-launch-1.0 v4l2src device=/dev/video0 ! image/jpeg, width=1280, height=720, framerate=60/1 ! jpegparse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 gpu-id=0 nvbuf-memory-type=0 ! nvinfer config-file-path=/correct_path_to_config/config_infer_primary_yoloV3_tiny.txt batch-size=1 unique-id=1 ! queue ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink

The gst-launch-1.0 pipeline above runs, but it prints the warning below and the framerate is terrible (not even 1 fps):

gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:eglglessink0: There may be a timestamping problem, or this computer is too slow

I know that the nvinfer model works, as I’ve run it in the DeepStream SDK example for custom YOLO models; the frame rate is fine in that example application (24 fps average).

How can I make the gst-launch pipeline run at the same frame rate as the DeepStream SDK application?

The DeepStream SDK config file, from my understanding, builds the GStreamer pipeline for you. Here is the config file that runs at a nice 24 fps on the Nano:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=1280
camera-height=720
camera-fps-n=60
camera-v4l2-dev-node=0
#uri=file://…/…/samples/streams/sample_1080p_h264.mp4
num-sources=1
gpu-id=0

#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000

## Set muxer output width and height
width=1920
height=1080
#enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.

[primary-gie]
enable=1
gpu-id=0
#model-engine-file=model_b1_fp32.engine
labelfile-path=labels.txt
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV3_tiny.txt

[tests]
file-loop=0

Thanks in advance for any help,

Chance

Hi,
Please set sync=false in nveglglessink. There is a synchronization mechanism in the GStreamer framework: if the deep learning inference in nvinfer takes too long per frame, the sink prints this warning. Please disable sync and try again.
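Applied to your pipeline above, that would look like the following (same caps, element properties, and config path assumed; only the sync property on the sink changes):

$ gst-launch-1.0 v4l2src device=/dev/video0 ! image/jpeg, width=1280, height=720, framerate=60/1 ! jpegparse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 gpu-id=0 nvbuf-memory-type=0 ! nvinfer config-file-path=/correct_path_to_config/config_infer_primary_yoloV3_tiny.txt batch-size=1 unique-id=1 ! queue ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink sync=false

With sync=false the sink renders buffers as soon as they arrive instead of dropping or stalling on late timestamps, which is why the warning disappears.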

Hello @DaneLLL,

That does allow it to play without the error; however, there is significant lag that I don't see when running the DeepStream application. Do you have any proposed optimizations for the GStreamer pipeline above?

I have briefly looked over the “Accelerated Gstreamer User Guide”, would the necessary optimizations be in there?

Thank you for the help,
Chance

Hi,
Please set the property in nvv4l2decoder:

  enable-max-performance: Set to enable max performance
                        flags: readable, writable
                        Boolean. Default: false

And you may break down the pipeline with fpsdisplaysink to see where the bottleneck is:

$ gst-launch-1.0 v4l2src device=/dev/video0 ! image/jpeg, width=1280, height=720, framerate=60/1 ! jpegparse ! nvv4l2decoder enable-max-performance=1 ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 -v
$ gst-launch-1.0 v4l2src device=/dev/video0 ! image/jpeg, width=1280, height=720, framerate=60/1 ! jpegparse ! nvv4l2decoder enable-max-performance=1 ! nvinfer config-file-path=/correct_path_to_config/config_infer_primary_yoloV3_tiny.txt batch-size=1 unique-id=1 ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 -v