FPS dropped to 2 when disabling X11

Hello,

I'm using a Jetson Xavier NX without the X11 libraries, together with the Python SDK for DeepStream. When running just a uridecodebin parse to get video from an RTSP source and using nvoverlaysink to display it on the connected monitor, I can see the video, but only at 2-3 FPS. When using the X11-based EGL sink I get near real-time processing of the images.

I was trying to use nv3dsink, but it seems to be linked to X11 as well, so I cannot find any other hardware-accelerated sink that gets the video working in headless mode on the Jetson.

Does anyone have an idea how to get better performance here?

Thanks for the help

• Jetson Xavier NX
• DeepStream 5.0
• Latest JetPack Version
• Issue type: Low FPS when disabling X11
• How to reproduce the issue? Disable X11 and use local TTY sessions
• RTSP camera source to NVOverlaySink

Can you give your pipeline and property settings here? How do you measure the FPS?

Hello,

Creating Pipeline
Creating source bin
create_elem nvstreammux: props={'live-source': 0, 'width': 640, 'height': 480, 'batch-size': 1, 'batched-push-timeout': 4000000, 'enable-padding': 0, 'nvbuf-memory-type': 0}
create_elem nvinfer: props={'config-file-path': './EPIs/config_infer_custom_masks.txt'}
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
create_elem nvvideoconvert: props=None
create_elem nvdsosd: props=None
create_elem nvoverlaysink: props=None
create_elem nvvideoconvert: props=None
create_elem capsfilter: props={'caps': None}
add_comp 2-gst-Bin linkto: None
add_comp 4-gst-GstNvStreamMux linkto: 2-gst-Bin
add_comp 6-gst-GstNvInfer linkto: 4-gst-GstNvStreamMux
add_comp 8-gst-Gstnvvideoconvert linkto: 6-gst-GstNvInfer
add_comp 9-gst-GstNvDsOsd linkto: 8-gst-Gstnvvideoconvert
add_comp 11-gst-Gstnvvideoconvert linkto: 9-gst-GstNvDsOsd
add_comp 12-gst-GstCapsFilter linkto: 11-gst-Gstnvvideoconvert
add_comp 13-gst-GstNvOverlaySink-nvoverlaysink linkto: 12-gst-GstCapsFilter
link 2-gst-Bin → 4-gst-GstNvStreamMux
link 4-gst-GstNvStreamMux → 6-gst-GstNvInfer
link 6-gst-GstNvInfer → 8-gst-Gstnvvideoconvert
link 8-gst-Gstnvvideoconvert → 9-gst-GstNvDsOsd
link 9-gst-GstNvDsOsd → 11-gst-Gstnvvideoconvert
link 11-gst-Gstnvvideoconvert → 12-gst-GstCapsFilter
link 12-gst-GstCapsFilter → 13-gst-GstNvOverlaySink-nvoverlaysink
Starting pipeline

This is the full pipeline before starting to run inference on the frames. To measure the FPS, I have a counter in the OSD that tells me which frame I am on, as sketched below.
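
For illustration, a minimal sketch of what such a frame counter can look like with the Python bindings (the probe name and the one-second reporting window are my assumptions, not the actual code from the app):

import time
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

frame_count = 0
last_report = time.time()

def fps_probe(pad, info, user_data):
    # Count every buffer that reaches the OSD sink pad and print the
    # average FPS once per second.
    global frame_count, last_report
    frame_count += 1
    now = time.time()
    if now - last_report >= 1.0:
        print("FPS: {:.1f}".format(frame_count / (now - last_report)))
        frame_count = 0
        last_report = now
    return Gst.PadProbeReturn.OK

# Attached to the nvdsosd sink pad, as in the snippet further below:
# osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, fps_probe, 0)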

Which settings do you need exactly? The config for the inference?

Are you using deepstream-app or your own app? If it is deepstream-app, please give us the deepstream-app config file. If it is your own app, please give us the whole pipeline, as in the attached picture.

Also, which properties have you set on nvoverlaysink and nveglglessink?

I'm using the Python bindings, so everything is written in Python. The only thing I can give you is the configuration for the inference. The rest of the elements in the pipeline use the default values given by the system (as described in the SDK documentation).

The Python bindings are OK. You can also dump the pipeline with the GStreamer tools, and it is important for us to know your settings on the plugins. Aren't there any calls like "xxx.set_property()" in your Python script?

I have some data here. For the rest of the pipeline, the uridecodebin element creates some other plugins and capsfilters that are not visible in the code; they are created on demand when the pipeline starts (a sketch of that dynamic linking follows the snippet below).

# Creation of nvstreammux element
streammux = GstElementFactory.element("nvstreammux",
    {
        "live-source": live,
        "width": 1280,
        "height": 720,
        "batch-size": 1,
        "batched-push-timeout": 4000000,
        "enable-padding": 0,
        "nvbuf-memory-type": 0
    }
)

# Creation of nvoverlaysink element
overlay = GstElementFactory.element("nvoverlaysink")
overlay.set_property("sync", False)  # do not sync rendering on buffer timestamps

nvvidconv = GstElementFactory.element("nvvideoconvert")
capsFilter = GstElementFactory.capsFilter(
    "video/x-raw(memory:NVMM), format=I420, width=1280, height=720, framerate=30/1")

# Creation of nvosd element (a second nvvideoconvert feeds the OSD;
# a separate variable avoids overwriting the first one)
nvvidconv2 = GstElementFactory.element("nvvideoconvert")
nvosd = GstElementFactory.element("nvdsosd")
osdsinkpad = nvosd.get_static_pad("sink")
osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, callBack, 0)
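
For context, a hedged sketch of the dynamic linking mentioned above: uridecodebin only exposes its source pads once the stream has been parsed, so the link to nvstreammux has to happen in a pad-added callback. The element and pad names here are illustrative, not taken from the actual script.

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def on_pad_added(decodebin, pad, streammux):
    # Link only the decoded video pad to the muxer's request pad.
    caps = pad.get_current_caps()
    name = caps.get_structure(0).get_name()
    if name.startswith("video"):
        sinkpad = streammux.get_request_pad("sink_0")
        pad.link(sinkpad)

# uridecodebin.connect("pad-added", on_pad_added, streammux)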

Can I use GST_DEBUG=4 GST_DEBUG_DUMP_DOT_DIR with the Python app to check the pipeline? Where should I set those variables, inside the Python code or before executing it?

Hello, I'm trying to debug the pipeline as you mentioned in the first comment, but I cannot generate the .dot file with the Python app. Could you tell me what I should do to create the file? I'm using export GST_DEBUG_DUMP_DOT_DIR=/tmp, but that alone is not enough.

Thanks
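
For reference, the environment variable alone is usually not enough: GStreamer only writes the .dot file when the application explicitly asks for it, and the variable has to be visible before Gst.init() runs. A minimal sketch with the Python bindings (the videotestsrc pipeline is a stand-in for the real one):

import os
os.environ.setdefault("GST_DEBUG_DUMP_DOT_DIR", "/tmp")  # must be set before Gst.init()

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
Gst.init(None)

pipeline = Gst.parse_launch("videotestsrc ! fakesink")  # stand-in for the real pipeline
Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, "pipeline")

The resulting /tmp/pipeline.dot can then be rendered with, for example, dot -Tpng /tmp/pipeline.dot -o pipeline.png.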

I've tried the following pipeline, which is the same as your graph except that the source is a local video file. The display FPS is normal.

gst-launch-1.0 uridecodebin uri=file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4 ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=I420' ! nvoverlaysink

So the problem is not related to nvoverlaysink.
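
One way to narrow this down from Python is to run the same description through Gst.parse_launch and compare the FPS against the hand-built pipeline; if the parsed version also runs at full speed, the slowdown likely comes from the application side rather than from the sink. A sketch under that assumption, reusing the pipeline above:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib
Gst.init(None)

desc = ("uridecodebin uri=file:///opt/nvidia/deepstream/deepstream-5.0/"
        "samples/streams/sample_720p.mp4 ! m.sink_0 "
        "nvstreammux name=m batch-size=1 width=1280 height=720 ! "
        "nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/"
        "samples/configs/deepstream-app/config_infer_primary.txt ! "
        "nvvideoconvert ! nvdsosd ! nvvideoconvert ! "
        "video/x-raw(memory:NVMM),format=I420 ! nvoverlaysink")

pipeline = Gst.parse_launch(desc)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()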

Can you upload your python script?