Network stops responding when I run an RTSP output application on Jetson Nano

When I run “/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out# python3 deepstream_test1_rtsp_in_rtsp_out.py -i /my/path”, the system freezes and stops responding until I stop the pipeline.
The last message on the terminal is "NVMEDIA_ENC: bBlitMode is set to TRUE", but it seems to have nothing to do with the issue.
What could be the cause of this issue, and how can I fix it?

• Hardware Platform Jetson
• DeepStream Version 6.0.1
• JetPack Version 4.6.1

This message is harmless; you can refer to this topic for the explanation: NvENC bBlitmode set to True on Jetson Xavier NX - Jetson & Embedded Systems / Jetson Xavier NX - NVIDIA Developer Forums

Can you share the full log from running the program? There will be an RTSP stream URL printed when the program starts running:

*** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

I have also uploaded a log for your reference (DeepStream 6.1.1/dGPU/docker): deepstream-rtsp-in-rtsp-out.log (2.9 KB)

I directly copy-pasted the terminal log from after the program starts running below:

*** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

Starting pipeline

Opening in BLOCKING MODE
0:00:00.374891354 504 0x1cb526f0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:02.829053906 504 0x1cb526f0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:02.830178124 504 0x1cb526f0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:02.830229479 504 0x1cb526f0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
0:01:12.698975233 504 0x1cb526f0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:01:12.757117785 504 0x1cb526f0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Decodebin child added: source

sys:1: Warning: g_object_get_is_valid_property: object class ‘GstUDPSrc’ has no property named ‘pt’
Decodebin child added: decodebin0

Decodebin child added: rtph264depay0

Decodebin child added: h264parse0

Decodebin child added: capsfilter0

Decodebin child added: nvv4l2decoder0

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7f8ab78a08 (GstCapsFeatures at 0x7ed80c5b80)>
NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
H264: Profile = 66, Level = 0
NVMEDIA_ENC: bBlitMode is set to TRUE

Could you help narrow down this problem?
1. You can try adding the -g nvinfer parameter to your command line.
2. You can change some plugins to fakesink to verify which plugin has the problem (see the sketch below).
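For point 2, a minimal sketch of what swapping the output for a fakesink could look like in deepstream_test1_rtsp_in_rtsp_out.py is shown below. The element names ("encoder", "rtppay", "udpsink", "convertor_postosd") are assumptions based on the sample and may need to be adjusted to match your copy of the script; this is a debugging sketch, not the sample's own code.

```python
# Sketch (assumed element names): remove the encoder -> rtppay -> udpsink
# chain from the sample pipeline and terminate it with a fakesink instead.
# If the freeze disappears, the problem is in the encode/RTSP output stage
# rather than in decode/inference.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst


def replace_output_with_fakesink(pipeline):
    # These names come from the deepstream-rtsp-in-rtsp-out sample; verify
    # them against your script before use (they are assumptions here).
    for name in ("udpsink", "rtppay", "encoder"):
        elem = pipeline.get_by_name(name)
        if elem is not None:
            elem.set_state(Gst.State.NULL)
            pipeline.remove(elem)

    fakesink = Gst.ElementFactory.make("fakesink", "fakesink")
    fakesink.set_property("sync", False)
    pipeline.add(fakesink)

    # Link the last remaining element (post-OSD converter in the sample)
    # directly to the fakesink. If your script has a caps filter between
    # the converter and the encoder, remove it as well before linking.
    upstream = pipeline.get_by_name("convertor_postosd")
    if upstream is not None:
        upstream.link(fakesink)
```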

As I understand it, the pipeline works fine, but when the RTSP output server runs, it makes the network unresponsive.

There has been no update from you for a period, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

You need to confirm whether it has actually been sending data to the server, rather than just appearing unresponsive.
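To check this from the client side, a small sketch like the one below can be run in a second terminal (or from another machine on the network) while the app is up; it simply tries to pull frames from the URL printed at startup. It assumes OpenCV with FFmpeg/GStreamer RTSP support is installed.

```python
# Quick client-side check: try to pull frames from the RTSP URL the sample
# prints at startup. Assumes OpenCV built with FFmpeg/GStreamer support.
import cv2

URL = "rtsp://localhost:8554/ds-test"  # URL printed by the sample app

cap = cv2.VideoCapture(URL)
if not cap.isOpened():
    print("Could not open the RTSP stream - the server is not serving data")
else:
    received = 0
    while received < 100:
        ok, _ = cap.read()
        if not ok:
            print(f"Stream opened but stopped after {received} frames")
            break
        received += 1
    else:
        print(f"Received {received} frames - the server is sending data")
    cap.release()
```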

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.