Deepstream-rtsp-in-rtsp-out : AttributeError: 'NoneType' object has no attribute 'set_property'

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) nano
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only) 6.0
• TensorRT Version 8.6.2.3-1+cuda12.2
• NVIDIA GPU Driver Version (valid for GPU only) NVIDIA-SMI 540.2.0
• Issue Type (questions, new requirements, bugs)

Getting error: AttributeError: 'NoneType' object has no attribute 'set_property'

• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file content, the command line used, and other details for reproducing.)

Follow: deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub :

Then run:

python3 deepstream_test1_rtsp_in_rtsp_out.py -i file:///home/ubuntu/Desktop/nus-2024-vision-main/video/busan_video.h264 file:///home/ubuntu/Desktop/nus-2024-vision-main/video/smoke_video.h264 -g nvinferserver

This error comes up:

Creating Pipeline

Creating streamux

Creating source_bin 0

Creating source bin
source-bin-00
Creating source_bin 1

Creating source bin
source-bin-01
Creating Pgie

Unable to create pgie
Creating tiler

Creating nvvidconv

Creating nvosd

Creating H264 Encoder
Unable to create encoder
Traceback (most recent call last):
File "/home/paymentinapp/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/deepstream_test1_rtsp_in_rtsp_out.py", line 410, in <module>
sys.exit(main(stream_path))
File "/home/paymentinapp/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/deepstream_test1_rtsp_in_rtsp_out.py", line 255, in main
encoder.set_property("bitrate", bitrate)
AttributeError: 'NoneType' object has no attribute 'set_property'
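Note that the traceback follows directly from the "Unable to create pgie" / "Unable to create encoder" lines above: Gst.ElementFactory.make() returned None for those plugins, the sample only prints a warning, and the script then crashes at encoder.set_property(). A minimal, hypothetical guard (variable names follow the sample) that fails fast instead would look like this:

import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# make() returns None when the plugin is missing or unsupported on this board.
encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder")
if not encoder:
    # Exit immediately instead of continuing with a None element,
    # which would otherwise crash later at encoder.set_property(...).
    sys.stderr.write("Unable to create encoder\n")
    sys.exit(1)

encoder.set_property("bitrate", 4000000)  # nvv4l2h264enc bitrate is in bit/s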

How do I solve this error?

What is the device model? The Nano does not support hardware encoding; you can use the software encoding plugin x264enc instead.
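For example, in deepstream_test1_rtsp_in_rtsp_out.py the hardware encoder line can be swapped for x264enc; a minimal sketch, assuming the sample's encoder variable (note that x264enc takes its bitrate property in kbit/s, while nvv4l2h264enc uses bit/s):

# Software H.264 encoder instead of the hardware nvv4l2h264enc.
encoder = Gst.ElementFactory.make("x264enc", "encoder")
if not encoder:
    sys.stderr.write(" Unable to create encoder\n")
# x264enc expects kbit/s, so use e.g. 4000 instead of 4000000.
encoder.set_property("bitrate", 4000)
# Optional low-latency tuning for live streaming; set via the string nick.
Gst.util_set_object_arg(encoder, "speed-preset", "ultrafast")
Gst.util_set_object_arg(encoder, "tune", "zerolatency")
# Note: the capsfilter feeding the encoder must use system memory
# ("video/x-raw") rather than NVMM caps; see later in this thread.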

Software encoding is working.

Device model: resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine

When I run:

@ubuntu:~/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out$ python3 deepstream_test1_rtsp_in_rtsp_out.py -i file:///home/ubuntu/Desktop/nus-2024-vision-main/video/1.mp4 file:///home/ubuntu/Desktop/nus-2024-vision-main/video/2.mp4

Main error: NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2129> [UID = 1]: build engine file failed
free(): double free detected in tcache 2
Aborted (core dumped)

It returns this error:
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: Deserialize engine failed because file path: /home/paymentinapp/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine open error
0:00:08.680420509 10201 0xaaaaea621400 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2083> [UID = 1]: deserialize engine from file :/home/paymentinapp/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine failed
0:00:10.152619768 10201 0xaaaaea621400 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2188> [UID = 1]: deserialize backend context from engine from file :/home/paymentinapp/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine failed, try rebuild
0:00:10.160009823 10201 0xaaaaea621400 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2109> [UID = 1]: Trying to create engine from model files
WARNING: INT8 calibration file not specified/accessible. INT8 calibration can be done through setDynamicRange API in 'NvDsInferCreateNetwork' implementation
NvDsInferCudaEngineGetFromTltModel: Failed to open TLT encoded model file /home/paymentinapp/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:22.596495955 10201 0xaaaaea621400 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2129> [UID = 1]: build engine file failed
free(): double free detected in tcache 2
Aborted (core dumped)

This is the fatal error. Could you check whether resnet18_trafficcamnet.etlt and cal_trt.bin exist in /home/paymentinapp/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/../../../../samples/models/Primary_Detector?

Yes, they are present:
/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector$ ls
cal_trt.bin labels.txt resnet18_trafficcamnet.etlt

tlt-encoded-model=../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt
model-engine-file=../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin

Checked: resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine is absent.


  1. Without any configuration or code modification, can you reproduce this issue? I can't reproduce it; here is my log: log-0903.txt (52.8 KB).
  2. If you modified the configuration file, please share the modifications.

Hello Sir, thank you very much for your response. I also reinstalled DeepStream 7.0, but the error remains the same.
Why is this error occurring? What is a possible solution? Please find the configuration file here:

dstest1_pgie_config.txt (3.0 KB)

Please find the log file here:
deepstream_log.txt (2.4 KB)

I didn't make any modifications to the file.

I just executed: python3 deepstream_test1_rtsp_in_rtsp_out.py -i file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 -g nvinfer > deepstream_log.txt 2>&1

Error: free(): double free detected in tcache 2
Aborted (core dumped)

During install, I changed code in /deepstream_python_apps/apps/common:

platform_info.py.txt (3.2 KB)

This is the code file:
deepstream_test1_rtsp_in_rtsp_out .py.txt (14.6 KB)

NvDsInferCudaEngineGetFromTltModel: Failed to open TLT encoded model file
From the log, the app failed to read the model. Could you share the result of "md5sum /home/paymentinapp/Desktop/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt"? Are you testing in Docker? Is there any permission issue reading the model file?
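(md5sum is a standard Linux command-line tool, run as "md5sum <path-to-file>" in a terminal. If preferred, the same digest can also be computed with a short illustrative Python snippet; the path below is the model location from this thread:)

import hashlib

# Location of the TLT-encoded model used by the sample config.
model_path = "/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt"

md5 = hashlib.md5()
with open(model_path, "rb") as f:
    # Read in 1 MB chunks so large model files are not loaded at once.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        md5.update(chunk)

print(md5.hexdigest(), model_path)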

Sorry, how do I check the md5sum of the file?

No, I am not testing in Docker.

Original:

tlt-encoded-model=../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt
model-engine-file=../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin

Replaced after your comments:
tlt-encoded-model=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt
model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine
labelfile-path=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/labels.txt
int8-calib-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/cal_trt.bin

Error:

Error: gst-stream-error-quark: Internal data stream error. (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2420): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-linked (-1)
Frame Number= 1
Frame Number= 2

Log:
deepstream_log.txt (53.3 KB)

From the new log, the engine was generated, but there is a new issue: the app fails to run because x264enc does not support NVMM hardware memory. Please change the following code
caps.set_property(
"caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420")
)
to
caps.set_property(
"caps", Gst.Caps.from_string("video/x-raw, format=I420")
)
Please refer to this working pipeline:

gst-launch-1.0  filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder  ! nvvideoconvert ! 'video/x-raw,format=I420' ! x264enc ! filesink location=test.264
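The equivalent change inside the Python app is sketched below (a fragment of the sample's pipeline construction; element and variable names follow the sample and are illustrative): nvvideoconvert copies the frames out of NVMM device memory, the capsfilter requests plain system-memory I420, and x264enc encodes from there.

# Convert out of NVMM device memory before the software encoder.
nvvidconv_postosd = Gst.ElementFactory.make("nvvideoconvert", "convertor_postosd")

caps = Gst.ElementFactory.make("capsfilter", "filter")
# System-memory caps: x264enc cannot consume video/x-raw(memory:NVMM).
caps.set_property("caps", Gst.Caps.from_string("video/x-raw, format=I420"))

encoder = Gst.ElementFactory.make("x264enc", "encoder")
encoder.set_property("bitrate", 4000)  # kbit/s for x264enc

for elem in (nvvidconv_postosd, caps, encoder):
    pipeline.add(elem)

# nvosd -> nvvideoconvert -> capsfilter (I420, system memory) -> x264enc,
# then on to rtph264pay and udpsink exactly as in the sample.
nvosd.link(nvvidconv_postosd)
nvvidconv_postosd.link(caps)
caps.link(encoder)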

The code is working. Thank you very much, excellent NVIDIA engineer @fanzh.

How do I visualize the content?

*** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

I entered that link "rtsp://localhost:8554/ds-test" in Open Network Stream in the VLC player, but it returned Connection failed:

VLC could not connect to "localhost:8554".

Your input can't be opened:

VLC is unable to open the MRL 'rtsp://localhost:8554/ds-test'. Check the log for details.

How do I see the output?

Could you check whether it is a network issue? Can you play the RTSP stream on the machine that is running DeepStream?

The network is working fine, because the Hikvision camera opens with its IP.

I played the RTSP stream on the same machine where DeepStream is running.

How do I solve this problem? Why is it happening?

Without seeing the output, how can I work on modifying it?

I changed to this:

gst-launch-1.0 rtspsrc location=rtsp://192.168.10.139:8554/ds-test ! decodebin ! autovideosink
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Progress: (open) Opening Stream
Pipeline is PREROLLED …
Prerolled, waiting for progress to finish…
Progress: (connect) Connecting to rtsp://192.168.10.139:8554/ds-test
ERROR: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0: Could not open resource for reading and writing.
Additional debug info:
../gst/rtsp/gstrtspsrc.c(8130): gst_rtspsrc_retrieve_sdp (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0:
Failed to connect. (Generic error)
ERROR: pipeline doesn’t want to preroll.
Setting pipeline to NULL …
Freeing pipeline …

When I run: vlc -vvv rtsp://192.168.10.138:8554/ds-test

it returns:

log-0903.txt (18.4 KB)

  1. Is the port taken? You can check with "netstat -tuln |grep 8554" before starting the application.
  2. If the port is taken, you can change to a new RTSP port in the Python code, as in the sketch below. Here is my test log: log-.txt (2.9 KB).
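A minimal sketch of where the RTSP port is set in deepstream_test1_rtsp_in_rtsp_out.py (variable names follow the sample and are meant as illustration); changing rtsp_port_num, for example to 8555, moves the stream to rtsp://<host>:8555/ds-test:

import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GstRtspServer

Gst.init(None)

rtsp_port_num = 8555      # pick a free port if 8554 is already taken
updsink_port_num = 5400   # UDP port the pipeline's udpsink sends to
codec = "H264"

# RTSP server that re-serves the RTP packets produced by the app's udpsink.
server = GstRtspServer.RTSPServer.new()
server.props.service = "%d" % rtsp_port_num
server.attach(None)  # the app's GLib main loop must be running to serve clients

factory = GstRtspServer.RTSPMediaFactory.new()
factory.set_launch(
    '( udpsrc name=pay0 port=%d buffer-size=524288 '
    'caps="application/x-rtp, media=video, clock-rate=90000, '
    'encoding-name=(string)%s, payload=96 " )' % (updsink_port_num, codec)
)
factory.set_shared(True)
server.get_mount_points().add_factory("/ds-test", factory)

print("Launched RTSP Streaming at rtsp://localhost:%d/ds-test" % rtsp_port_num)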