Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): nano
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only): 6.0
• TensorRT Version: 8.6.2.3-1+cuda12.2
• NVIDIA GPU Driver Version (valid for GPU only): NVIDIA-SMI 540.2.0
• Issue Type (questions, new requirements, bugs):
Getting error: AttributeError: 'NoneType' object has no attribute 'set_property'
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
Creating source bin
source-bin-00
Creating source_bin 1
Creating source bin
source-bin-01
Creating Pgie
Unable to create pgie
Creating tiler
Creating nvvidconv
Creating nvosd
Creating H264 Encoder
Unable to create encoder
Traceback (most recent call last):
File "/home/paymentinapp/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/deepstream_test1_rtsp_in_rtsp_out.py", line 410, in <module>
sys.exit(main(stream_path))
File "/home/paymentinapp/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/deepstream_test1_rtsp_in_rtsp_out.py", line 255, in main
encoder.set_property("bitrate", bitrate)
AttributeError: 'NoneType' object has no attribute 'set_property'
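For context, the traceback means the encoder object is None: the sample creates each element with Gst.ElementFactory.make() and only prints a warning when creation fails (the "Unable to create encoder" line just above), so the later set_property() call raises AttributeError. A minimal sketch of that pattern with a fail-fast check added (the element name and bitrate value here are illustrative):

```python
import sys

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Gst.ElementFactory.make() returns None when the requested plugin is not
# installed/registered (e.g. x264enc from gst-plugins-ugly). The sample only
# prints a warning in that case, so the later set_property() call fails on None.
encoder = Gst.ElementFactory.make("x264enc", "encoder")
if not encoder:
    sys.stderr.write("Unable to create encoder\n")
    sys.exit(1)

encoder.set_property("bitrate", 4000)  # illustrative value; x264enc takes kbit/s
```

Checking the element for None immediately after creation makes the missing-plugin problem visible at the point of failure instead of surfacing later as an AttributeError.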
It returns this error:
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: Deserialize engine failed because file path: /home/paymentinapp/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/…/…/…/…/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine open error
0:00:08.680420509 10201 0xaaaaea621400 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2083> [UID = 1]: deserialize engine from file :/home/paymentinapp/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/…/…/…/…/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine failed
0:00:10.152619768 10201 0xaaaaea621400 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2188> [UID = 1]: deserialize backend context from engine from file :/home/paymentinapp/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/…/…/…/…/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine failed, try rebuild
0:00:10.160009823 10201 0xaaaaea621400 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2109> [UID = 1]: Trying to create engine from model files
WARNING: INT8 calibration file not specified/accessible. INT8 calibration can be done through setDynamicRange API in 'NvDsInferCreateNetwork' implementation
NvDsInferCudaEngineGetFromTltModel: Failed to open TLT encoded model file /home/paymentinapp/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/…/…/…/…/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:22.596495955 10201 0xaaaaea621400 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2129> [UID = 1]: build engine file failed
free(): double free detected in tcache 2
Aborted (core dumped)
This is the fatal error. Could you check whether resnet18_trafficcamnet.etlt and cal_trt.bin exist in /home/paymentinapp/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/…/…/…/…/samples/models/Primary_Detector?
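A quick way to check is a small script that confirms the files exist and are readable from the directory the config points at (a minimal sketch; the directory below uses the standard DeepStream install path as an assumption, so substitute the path your config actually references):

```python
import os

# Assumed location; replace with the Primary_Detector directory from your config.
model_dir = "/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector"

for name in ("resnet18_trafficcamnet.etlt", "cal_trt.bin", "labels.txt"):
    path = os.path.join(model_dir, name)
    # os.access() with R_OK also catches permission problems, not just missing files.
    print(f"{path}: exists={os.path.isfile(path)} readable={os.access(path, os.R_OK)}")
```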
Hello Sir, thank you very much for your response. I also reinstalled DeepStream 7.0, but the error remains the same.
Why is this error occurring? What is a possible solution? Please find the configuration file here.
NvDsInferCudaEngineGetFromTltModel: Failed to open TLT encoded model file
From the log, the app failed to read the model. Could you share the result of "md5sum /home/paymentinapp/Desktop/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/…/…/…/…/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt"? Are you testing in Docker? Is there any permission issue reading the model file?
I replaced the paths as you suggested:
tlt-encoded-model=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt
model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine
labelfile-path=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/labels.txt
int8-calib-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/cal_trt.bin
From the new log, the engine was generated, but there is a new issue: the app fails to run because x264enc does not support NVMM hardware memory. Please change the following code
caps.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420")
)
to
caps.set_property(
    "caps", Gst.Caps.from_string("video/x-raw, format=I420")
)
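For context, that capsfilter sits between nvvideoconvert and the software encoder: dropping "(memory:NVMM)" makes nvvideoconvert copy the frames into system memory, which is the only layout x264enc accepts. A minimal sketch of that part of the pipeline setup (variable names are illustrative and element-creation checks are omitted for brevity):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.Pipeline.new("rtsp-out")

# nvvideoconvert copies frames out of NVMM device memory when the
# downstream caps request plain system memory.
nvvidconv_postosd = Gst.ElementFactory.make("nvvideoconvert", "convertor_postosd")
caps = Gst.ElementFactory.make("capsfilter", "filter")
# No "(memory:NVMM)" feature here: x264enc only handles system-memory video/x-raw.
caps.set_property("caps", Gst.Caps.from_string("video/x-raw, format=I420"))
encoder = Gst.ElementFactory.make("x264enc", "encoder")

for elem in (nvvidconv_postosd, caps, encoder):
    pipeline.add(elem)

# nvvideoconvert -> capsfilter -> x264enc
nvvidconv_postosd.link(caps)
caps.link(encoder)
```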
Please refer to this working pipeline:
gst-launch-1.0 rtspsrc location=rtsp://192.168.10.139:8554/ds-test ! decodebin ! autovideosink
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Progress: (open) Opening Stream
Pipeline is PREROLLED …
Prerolled, waiting for progress to finish…
Progress: (connect) Connecting to rtsp://192.168.10.139:8554/ds-test
ERROR: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0: Could not open resource for reading and writing.
Additional debug info:
…/gst/rtsp/gstrtspsrc.c(8130): gst_rtspsrc_retrieve_sdp (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0:
Failed to connect. (Generic error)
ERROR: pipeline doesn’t want to preroll.
Setting pipeline to NULL …
Freeing pipeline …
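"Failed to connect" from rtspsrc usually means nothing is listening at that address and port yet, i.e. the DeepStream app's RTSP output is not up or not reachable. A quick reachability check before re-running gst-launch can rule out a basic networking problem (a minimal sketch; host and port are taken from the URL above):

```python
import socket

host, port = "192.168.10.139", 8554  # from rtsp://192.168.10.139:8554/ds-test

try:
    # A plain TCP connect only tells us whether something is listening on the
    # RTSP port; it does not speak the RTSP protocol itself.
    with socket.create_connection((host, port), timeout=3):
        print("TCP connection to the RTSP port succeeded")
except OSError as exc:
    print("Cannot reach the RTSP server:", exc)
```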