Error with gst-resource-error-quark

• Hardware Platform: Jetson Xavier NX Module
• DeepStream Version: 5.0
• JetPack Version: 4.4 [L4T 32.4.3]
• TensorRT Version: 7.1.3.0

Error while executing the Python deepstream-imagedata-multistream sample with 20 different streams (RTSP IP cameras). The program runs well for anywhere from 20 minutes up to 2 hours, and then crashes with this error:

Warning: gst-resource-error-quark: Could not read from resource. (9): gstrtspsrc.c(5293): gst_rtspsrc_loop_udp (): /GstPipeline:pipeline0/GstBin:source-bin-03/GstURIDecodeBin:uri-decode-bin/GstRTSPSrc:source:

Unhandled return value -7.

Error: gst-resource-error-quark: Could not read from resource. (9): gstrtspsrc.c(5361): gst_rtspsrc_loop_udp (): /GstPipeline:pipeline0/GstBin:source-bin-03/GstURIDecodeBin:uri-decode-bin/GstRTSPSrc:source:

Could not receive message. (System error)

Exiting app

This is my config file:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=20
process-mode=1
model-color-format=0
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#0=Group Rectangles, 1=DBSCAN, 2=NMS, 3 = None(No clustering)
cluster-mode=1

[class-attrs-all]
threshold=0.2
eps=0.7
minBoxes=1

#Use the config params below for dbscan clustering mode
[class-attrs-all]
detected-min-w=4
detected-min-h=4
minBoxes=3


[class-attrs-0]
pre-cluster-threshold=0.05
eps=0.7
dbscan-min-score=0.95

[class-attrs-1]
pre-cluster-threshold=0.05
eps=0.7
dbscan-min-score=0.5

[class-attrs-2]
pre-cluster-threshold=0.1
eps=0.6
dbscan-min-score=0.95

...

Edit 2: After 30 minutes of testing with 16 streams and changing batch-size to 16, I realized that the video I see is 10 minutes in the past.

  1. The first error is an RTSP connection error. It has nothing to do with DeepStream. You need to debug the RTSP connection.
  2. What do you mean by “the video I see is 10 minutes in the past”?
  1. We are experiencing micro-cuts of 1 to 4 seconds in the connection with the IP cameras. Could this make the app crash?

  2. How could I debug the RTSP connection?

  3. After 20 minutes with the app running, I connected to an RTSP camera and watched a person pass by the camera; 10 minutes later I saw the same person pass by in the DeepStream app.
    I think this is what is causing the gst-resource-error-quark error.

To add more information, I am using these cameras:
HikVision
Models: DS-2CD2023G0-I and DS-2CD2125FWD-I

I saw a post with a similar problem, but there wasn't a concrete answer for the Python app. We both use HikVision cameras, but my cuts are shorter.


This sample is only a client that receives the RTSP stream from the “IP camera”. This error means the RTSP client requested a response from the server but the server did not respond, so it is better to debug the RTSP server (the IP camera) to find out why there is no response.
If you want to dig into the root cause, it is better to analyze the packets between server and client according to the protocol RFC 2326: Real Time Streaming Protocol (RTSP). This issue has nothing to do with DeepStream and cannot be identified from the client side alone. If you are using the rtspsrc plugin to get the stream, one way is to set the “debug” property of the rtspsrc plugin to TRUE (rtspsrc); you can then analyze the logged messages according to RFC 2326: Real Time Streaming Protocol (RTSP). For more details, please refer to documentation of the RTSP protocol on the internet.
If you are not using the rtspsrc plugin, there are also many open-source RTSP analysis tools, such as Wireshark (Wireshark · Download), which can help to analyze the RTSP requests and responses.
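
For reference, here is a minimal sketch (an assumption, not taken from the posts above) of how that “debug” property could be enabled from the decodebin child-added callback that the deepstream-imagedata-multistream sample already registers; the callback signature and the "source" element name follow that sample:

def decodebin_child_added(child_proxy, Object, name, user_data):
    # uridecodebin names its RTSP source element "source"; turning on the
    # rtspsrc "debug" property dumps every RTSP request/response to the log.
    if name.find("source") != -1:
        if Object.find_property("debug") is not None:
            Object.set_property("debug", True)
    if name.find("decodebin") != -1:
        # keep recursing so nested decodebins are also handled, as in the sample
        Object.connect("child-added", decodebin_child_added, user_data)

With that in place, the logged RTSP messages can be compared against RFC 2326 to see which request the camera stops answering.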

Thanks for the reply.

OK, I'm going to explain my setup, because maybe that is the problem.

I have 20 IP cameras connected to the internet, each with a different dynamic IP provided by the ISP. Every 12 hours there is a forced IP change.
On my local network I have a VPN that connects the 20 cameras, each with a local static IP.
The Xavier is connected to the 20 cameras via the local network (using the VPN).

We suspect that each time our ISP forces an IP change on the cameras, the app crashes. But this does not make much sense, because from the app's point of view it should just look like a micro-cut of at most 5 seconds.

Edit: After more testing we are sure that the error occurs when the camera IP changes. I have created a new topic.

If someone has the problem that the cameras are not showing real time, we fixed it with this:
FPS = 12
BATCHED_PUSH_TIMEOUT = 1 / FPS * 1000 * 1000
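
For context, a minimal sketch of where that value is applied, assuming the variable names used in the deepstream-imagedata-multistream sample (streammux and number_sources); the nvstreammux batched-push-timeout property is in microseconds and set_property expects an integer:

FPS = 12
# one frame interval at 12 fps is ~83333 us, so the muxer never waits
# longer than one frame before pushing a (possibly incomplete) batch
BATCHED_PUSH_TIMEOUT = int(1 / FPS * 1000 * 1000)
streammux.set_property("batched-push-timeout", BATCHED_PUSH_TIMEOUT)
streammux.set_property("batch-size", number_sources)  # e.g. 16 or 20 streams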

I am also getting the gst-stream-error-quark error.

WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 12x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 3x34x60
0:00:51.733156863 1357 0x5b38830 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:deepstream_config.txt sucessfully
Decodebin child added: source
Warning: gst-resource-error-quark: Could not read from resource. (9): gstrtspsrc.c(5427): gst_rtspsrc_reconnect (): /GstPipeline:pipeline0/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin/GstRTSPSrc:source:
Could not receive any UDP packets for 5.0000 seconds, maybe your firewall is blocking it. Retrying using a tcp connection.
Decodebin child added: decodebin0
Decodebin child added: rtph265depay0
Decodebin child added: h265parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
In cb_newpad
gstname= video/x-raw
Error: gst-stream-error-quark: memory type configured and i/p buffer mismatch ip_surf 0 muxer 3 (1): gstnvstreammux.c(467): gst_nvstreammux_chain (): /GstPipeline:pipeline0/GstNvStreamMux:Stream-muxer
Exiting app

Hi ImSaur,

Please open a new topic if this is still an issue.

Thanks