Platform used: AGX Xavier
Jetpack: 5.0.2
L4T: R35 (release), REVISION: 1.0, GCID: 31346300, BOARD: t186ref, EABI: aarch64, DATE: Thu Aug 25 18:41:45 UTC 2022
Base container I’m using: nvcr.io/nvidia/deepstream-l4t:6.1.1-base
Problem:
I am trying to run a pipeline on a Xavier that will not have a native display attached to it (currently I’m using SSH with X session forwarding for development).
I have a modified version of ds-example-1 that sends the output to a fakesink, but while I’m working on it I would still like to see the output via ssh -X.
However, I am having problems getting the output to come through over the SSH link.
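The sink part of the modification is nothing fancy; a minimal sketch of what I mean by “sends the output to a fakesink” (element/variable names are just what I happen to use):

print("Creating fakesink \n")
sink = Gst.ElementFactory.make("fakesink", "fakesink")
if not sink:
    sys.stderr.write(" Unable to create fakesink \n")
# fakesink simply discards buffers; sync=False so it doesn't throttle
# to a clock the way a real display sink would
sink.set_property("sync", False)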
To understand what is happening, I turned the output off (by using a fakesink) but kept the DISPLAY variable pointing at my remote display. (X forwarding itself is working fine; I’ve tested it with xeyes and other GStreamer pipelines, and everything also works fine when I set the DISPLAY variable to the value that points to the EGL display on the Jetson.)
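The “working” case above just amounts to pointing DISPLAY at the local X server instead of the forwarded one, e.g. something like this at the top of the script (a rough sketch; “:0” is an assumption for the on-board X server):

import os

# Assumption: ":0" is the Jetson's own X server; ssh -X exports something
# like "localhost:10.0" instead, which is what seems to upset EGL here.
os.environ["DISPLAY"] = ":0"   # or os.environ.pop("DISPLAY", None) to drop it entirely

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)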
With the display output off I still see:
No EGL Display
nvbufsurftransform: Could not get EGL display connection
This probably means that some other element relies on the DISPLAY variable, or on the Jetson having a display natively attached.
I did a very basic test by adding some print statements to get a hint of where it happens. My output is:
Creating Pipeline
Creating Source
Creating H264Parser
Creating Decoder
No EGL Display
nvbufsurftransform: Could not get EGL display connection
Creating StreamMux
Creating nvinfer
Creating nvvideoconvert
Creating nvdsosd
Creating queue
Creating fakesink
Playing file ../../samples/streams/sample_720p.h264
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline
nvbuf_utils: Could not get EGL display connection
nvbuf_utils: Could not get EGL display connection
Opening in BLOCKING MODE
0:00:00.394424741 103453 0x20f89210 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:03.455796829 103453 0x20f89210 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/root/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:03.493828920 103453 0x20f89210 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /root/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
nvbufsurface: eglGetDisplay failed with error 0x3000
nvbufsurface: Can't get EGL display
0:00:03.510444331 103453 0x20f89210 WARN nvinfer gstnvinfer.cpp:943:gst_nvinfer_start:<primary-inference> error: Failed to set buffer pool to active
Error: gst-resource-error-quark: Failed to set buffer pool to active (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(943): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference
The same two lines also show up when I simply inspect nvstreammux inside the container:

root@nvidia-desktop:~/jetson-deepstream/scripts# gst-inspect-1.0 nvstreammux
No EGL Display
nvbufsurftransform: Could not get EGL display connection
I think the lines (not super confident though)
No EGL Display
nvbufsurftransform: Could not get EGL display connection
are due to the code below (I’ve attached the full script too):
# Use nvdec_h264 for hardware accelerated decode on GPU
print("Creating Decoder \n")
decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
if not decoder:
    sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

print("Creating StreamMux \n")
# Create nvstreammux instance to form batches from one or more sources.
streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
if not streammux:
    sys.stderr.write(" Unable to create NvStreamMux \n")
full script (10.0 KB)
I’m guessing the problem is caused by either nvv4l2decoder or nvstreammux (unless the printed output and the actual execution order are out of step).
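For reference, the overall link order in the script is the usual deepstream-test1 layout with the sink swapped; schematically (error handling and property setup omitted, variable names as in the sample):

# source -> h264parse -> nvv4l2decoder -> nvstreammux -> nvinfer
#        -> nvvideoconvert -> nvdsosd -> queue -> fakesink
source.link(h264parser)
h264parser.link(decoder)

# the decoder feeds the muxer through a request pad, as in the sample
sinkpad = streammux.get_request_pad("sink_0")
srcpad = decoder.get_static_pad("src")
srcpad.link(sinkpad)

streammux.link(pgie)
pgie.link(nvvidconv)
nvvidconv.link(nvosd)
nvosd.link(queue)
queue.link(sink)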
My main questions are:
- Are there elements other than nveglglessink that require an EGL display connection?
- Is there a way to work around this (i.e. keep the NN bits running on the Jetson’s GPU/DLA while the rest of the pipeline stays away from the display hardware and anything EGL-related)?
Please let me know if you need further clarification on anything.

Cheers,
Ganindu.