AttributeError: 'NoneType' object has no attribute 'set_property'

I get the following error while trying to run the deepstream-test1 app:
python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-6.2/samples/streams/sample_1080p_h264.mp4

Playing file /opt/nvidia/deepstream/deepstream-6.2/samples/streams/sample_1080p_h264.mp4
Traceback (most recent call last):
File "deepstream_test_1.py", line 258, in <module>
sys.exit(main(sys.argv))
File "deepstream_test_1.py", line 201, in main
pgie.set_property('config-file-path', "dstest1_pgie_config.txt")
AttributeError: 'NoneType' object has no attribute 'set_property'

The config file "dstest1_pgie_config.txt" should be in the same directory as deepstream_test_1.py; that is the default layout after cloning deepstream_python_apps.
Also, test1 accepts an H.264 elementary stream, so use the file sample_1080p_h264.h264 instead of sample_1080p_h264.mp4.
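
Independent of the config file, the traceback itself says pgie is None, i.e. Gst.ElementFactory.make("nvinfer", ...) failed. Here is a minimal sketch (the make_element helper is my own, not from the sample) that fails fast instead of crashing later on set_property:

import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

def make_element(factory_name, element_name):
    # Gst.ElementFactory.make() returns None when the plugin providing
    # the factory (for nvinfer, the DeepStream gst-nvinfer plugin) is
    # not installed or not visible to GStreamer's registry.
    element = Gst.ElementFactory.make(factory_name, element_name)
    if element is None:
        sys.stderr.write("Unable to create %s (%s); check the DeepStream install\n"
                         % (element_name, factory_name))
        sys.exit(1)
    return element

pgie = make_element("nvinfer", "primary-inference")
pgie.set_property('config-file-path', "dstest1_pgie_config.txt")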

I get a similar error even though the dstest1_pgie_config.txt file is in the same folder:
root@MSI:/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_python_apps/apps/deepstream-test1# python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-6.2/samples/streams/sample_720p.h264
Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Unable to create pgie
Creating EGLSink

Playing file /opt/nvidia/deepstream/deepstream-6.2/samples/streams/sample_720p.h264
Traceback (most recent call last):
File "deepstream_test_1.py", line 258, in <module>
sys.exit(main(sys.argv))
File "deepstream_test_1.py", line 201, in main
pgie.set_property('config-file-path', "dstest1_pgie_config.txt")
AttributeError: 'NoneType' object has no attribute 'set_property'
root@MSI:/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_python_apps/apps/deepstream-test1# la
README deepstream_test_1.py dstest1_pgie_config.txt
root@MSI:/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_python_apps/apps/deepstream-test1#

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (For bugs: include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (For new requirements: include the module name, i.e. which plugin or which sample application, and the function description.)
• The pipeline being used


Are you running in Docker?
It seems to be a problem with your environment.
First, install DeepStream step by step by referring to the link below:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html#jetson-setup
Second, install the Python bindings step by step by following the deepstream_python_apps bindings guide.

Please make sure the DeepStream version matches when you install the bindings.
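
As a quick environment check (a sketch of my own, not part of the official setup steps), you can ask GStreamer's registry whether the DeepStream elements are visible at all; running gst-inspect-1.0 nvinfer on the command line answers the same question:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Each of these factories is provided by a DeepStream GStreamer plugin.
# find() returns None when the registry cannot see the plugin, which is
# the same condition that makes ElementFactory.make() return None.
for name in ("nvinfer", "nvstreammux", "nvvideoconvert", "nvdsosd", "nvv4l2decoder"):
    factory = Gst.ElementFactory.find(name)
    print("%-16s %s" % (name, "found" if factory else "MISSING"))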

Hardware platform: NVIDIA A2
DeepStream version: 6.2
TensorRT version: 8.5.2.2
Driver version: 525.105.17
CUDA version: 12.0

I have now reinstalled everything on Ubuntu 20.04 and am getting the following error:
alluvium@alluvium-ESC4000A-E10:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test1$ python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-6.2/samples/streams/sample_720p.h264
Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file /opt/nvidia/deepstream/deepstream-6.2/samples/streams/sample_720p.h264
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

libEGL warning: DRI3: Screen seems not DRI3 capable
libEGL warning: DRI2: failed to authenticate
0:00:00.320159341 46292 0x42ee0c0 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_python_apps/apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:02.899001023 46292 0x42ee0c0 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_python_apps/apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:03.012810418 46292 0x42ee0c0 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_python_apps/apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:03.012860765 46292 0x42ee0c0 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1459 Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine opened error
0:00:31.291998164 46292 0x42ee0c0 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1950> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:31.410980527 46292 0x42ee0c0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
cuGraphicsGLRegisterBuffer failed with error(219) gst_eglglessink_cuda_init texture = 1
Frame Number=0 Number of Objects=12 Vehicle_count=8 Person_count=4
0:00:31.739648733 46292 0x36df2a0 WARN nvinfer gstnvinfer.cpp:2369:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:31.739675334 46292 0x36df2a0 WARN nvinfer gstnvinfer.cpp:2369:gst_nvinfer_output_loop: error: streaming stopped, reason not-negotiated (-4)
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2369): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)
Frame Number=1 Number of Objects=11 Vehicle_count=8 Person_count=3
Frame Number=2 Number of Objects=11 Vehicle_count=7 Person_count=4
nvstreammux: Successfully handled EOS for source_id=0
Frame Number=3 Number of Objects=13 Vehicle_count=8 Person_count=5
Frame Number=4 Number of Objects=12 Vehicle_count=8 Person_count=4
Frame Number=5 Number of Objects=12 Vehicle_count=8 Person_count=4
Frame Number=6 Number of Objects=11 Vehicle_count=7 Person_count=4

Error number 219 means:

CUDA_ERROR_INVALID_GRAPHICS_CONTEXT = 219
This indicates an error with the OpenGL or DirectX context.

Do you have a monitor? nveglglessink may not work without a physical monitor.

If you can't plug in a monitor, use fakesink in place of nveglglessink.
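
For example, a minimal sketch of that swap inside deepstream_test_1.py (the element name "fake-renderer" is arbitrary); the rest of the pipeline stays unchanged:

# Replace the nveglglessink branch with fakesink for headless runs.
sink = Gst.ElementFactory.make("fakesink", "fake-renderer")
if not sink:
    sys.stderr.write(" Unable to create fakesink \n")
# Optional: pace buffers against the clock as a real sink would.
sink.set_property("sync", 1)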

There is a similar issue you can refer to.

Thanks


I have connected a monitor but it still produces the same error. Does a monitor connected via a VGA cable make a difference?
With fakesink the code ran successfully and all 1441 frames were processed, but I am not able to see the video stream output.
Besides fakesink I have tried ximagesink and autovideosink, but nothing worked.

Error:
cuGraphicsGLRegisterBuffer failed with error(219) gst_eglglessink_cuda_init texture = 1
Frame Number=0 Number of Objects=12 Vehicle_count=8 Person_count=4
0:00:31.732148984 20482 0x344b6a0 WARN nvinfer gstnvinfer.cpp:2369:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:31.732163823 20482 0x344b6a0 WARN nvinfer gstnvinfer.cpp:2369:gst_nvinfer_output_loop: error: streaming stopped, reason not-negotiated (-4)
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2369): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)
Frame Number=1 Number of Objects=11 Vehicle_count=8 Person_count=3
Frame Number=2 Number of Objects=11 Vehicle_count=7 Person_count=4
nvstreammux: Successfully handled EOS for source_id=0
Frame Number=3 Number of Objects=13 Vehicle_count=8 Person_count=5
Frame Number=4 Number of Objects=12 Vehicle_count=8 Person_count=4
Frame Number=5 Number of Objects=12 Vehicle_count=8 Person_count=4
Frame Number=6 Number of Objects=11 Vehicle_count=7 Person_count=4

1. Your GPU is a compute card, not meant for display.
You may follow this link for setting up a virtual display: https://elinux.org/Deepstream/FAQ

If the VGA output comes from the CPU (integrated graphics), it may not work.

2. ximagesink and autovideosink cannot be used for video output, because DeepStream uses NVIDIA hardware buffers that are incompatible with them.

As another option, if fakesink works well you can store the output as an MP4 file instead; a sketch of how that could look follows.
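
Here is a minimal sketch of that tail of the pipeline, assuming the pipeline and osd variables from the test app and the dGPU encoder nvv4l2h264enc (element names like enc_conv are mine); note that raw H.264 has to pass through a parser and a container muxer such as qtmux before a .mp4 file is playable:

# nvdsosd outputs RGBA buffers in NVMM memory while the encoder expects
# NV12, so an nvvideoconvert goes in between. h264parse plus qtmux wrap
# the raw H.264 into an MP4 container that ordinary players can open.
enc_conv = Gst.ElementFactory.make("nvvideoconvert", "enc-conv")
encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder")
parser = Gst.ElementFactory.make("h264parse", "enc-parser")
muxer = Gst.ElementFactory.make("qtmux", "muxer")
filesink = Gst.ElementFactory.make("filesink", "file-sink")
filesink.set_property("location", "out.mp4")

for elem in (enc_conv, encoder, parser, muxer, filesink):
    pipeline.add(elem)

osd.link(enc_conv)   # instead of osd.link(sink)
enc_conv.link(encoder)
encoder.link(parser)
parser.link(muxer)
muxer.link(filesink)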

Thanks

Here is the code where I am trying to write an output file, but it gets stuck after the first frame is processed:

import sys
sys.path.append('../')
import os
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call

import pyds

PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3

def osd_sink_pad_buffer_probe(pad, info, u_data):
    frame_number = 0
    num_rects = 0

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.NvDsFrameMeta.cast()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        # Initialize object counter with 0.
        obj_counter = {
            PGIE_CLASS_ID_VEHICLE: 0,
            PGIE_CLASS_ID_PERSON: 0,
            PGIE_CLASS_ID_BICYCLE: 0,
            PGIE_CLASS_ID_ROADSIGN: 0
        }
        frame_number = frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            obj_counter[obj_meta.class_id] += 1
            obj_meta.rect_params.border_color.set(0.0, 0.0, 1.0, 0.8)  # 0.8 is alpha (opacity)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}".format(
            frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE], obj_counter[PGIE_CLASS_ID_PERSON])

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font , font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        # Using pyds.get_string() to get display_text as string
        print(pyds.get_string(py_nvosd_text_params.display_text))
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK

def main(args):
    # Check input arguments
    if len(args) != 2:
        sys.stderr.write("usage: %s <media file or uri>\n" % args[0])
        sys.exit(1)

    # Standard GStreamer initialization
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the file
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    # Since the data format in the input file is elementary h264 stream,
    # we need a h264parser
    print("Creating H264Parser \n")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    # Use nvdec_h264 for hardware accelerated decode on GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Use nvinfer to run inferencing on decoder's output,
    # behaviour of inferencing is set through config file
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    # Use convertor to convert from NV12 to RGBA as required by nvosd
    print("Creating Convertor \n")
    convertor = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not convertor:
        sys.stderr.write(" Unable to create nvvideoconvert \n")

    # Create OSD to draw on the converted RGBA buffer
    print("Creating OSD \n")
    osd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    if not osd:
        sys.stderr.write(" Unable to create nvdsosd \n")

    # Create H.264 encoder
    print("Creating Encoder \n")
    encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder")
    if not encoder:
        sys.stderr.write(" Unable to create nvv4l2h264enc \n")

    # Create filesink to save the output video
    print("Creating File Sink \n")
    filesink = Gst.ElementFactory.make("filesink", "file-sink")
    if not filesink:
        sys.stderr.write(" Unable to create filesink \n")

    # Set properties of filesink
    filesink.set_property("location", "/home/file.mp4")

    # Finally render the osd output
    if is_aarch64():
        print("Creating nv3dsink \n")
        sink = Gst.ElementFactory.make("nv3dsink", "nv3d-sink")
        if not sink:
            sys.stderr.write(" Unable to create nv3dsink \n")
    else:
        print("Creating EGLSink \n")
        sink = Gst.ElementFactory.make("fakesink", "nvvideo-renderer")
        if not sink:
            sys.stderr.write(" Unable to create egl sink \n")

    print("Playing file %s " % args[1])
    source.set_property('location', args[1])
    if os.environ.get('USE_NEW_NVSTREAMMUX') != 'yes':  # Only set these properties if not using new gst-nvstreammux
        streammux.set_property('width', 1920)
        streammux.set_property('height', 1080)
        streammux.set_property('batched-push-timeout', 4000000)

    streammux.set_property('batch-size', 1)
    pgie.set_property('config-file-path', "dstest1_pgie_config.txt")

    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(convertor)
    pipeline.add(osd)
    pipeline.add(encoder)
    pipeline.add(filesink)
    pipeline.add(sink)

    # Link the elements together
    # file-source -> h264-parser -> nvh264-decoder ->
    # nvinfer -> nvvidconv -> nvosd -> encoder -> filesink
    print("Linking elements in the Pipeline \n")
    source.link(h264parser)
    h264parser.link(decoder)

    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad.link(sinkpad)
    streammux.link(pgie)
    pgie.link(convertor)
    convertor.link(osd)
    osd.link(encoder)
    encoder.link(filesink)
    osd.link(sink)

    # Create an event loop and feed GStreamer bus messages to it
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # Add a probe to get informed of the meta data generated. We add the probe to
    # the sink pad of the osd element, since by that time, the buffer would have
    # had all the metadata.
    osdsinkpad = osd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvdsosd \n")

    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    # Start playback and listen to events
    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # Cleanup
    pipeline.set_state(Gst.State.NULL)


if __name__ == '__main__':
    sys.exit(main(sys.argv))

You can try this patch. It runs successfully on my machine.

Thanks

diff --git a/apps/deepstream-test1/deepstream_test_1.py b/apps/deepstream-test1/deepstream_test_1.py
index a03d326..e84ede1 100755
--- a/apps/deepstream-test1/deepstream_test_1.py
+++ b/apps/deepstream-test1/deepstream_test_1.py
@@ -178,6 +178,30 @@ def main(args):
     if not nvosd:
         sys.stderr.write(" Unable to create nvosd \n")

+    # Create convert
+    nvpreencconv = Gst.ElementFactory.make("nvvideoconvert", "nvpreencconv")
+    if not nvpreencconv:
+        sys.stderr.write(" Unable to create nvpreencconv \n")
+
+    queue = Gst.ElementFactory.make("queue", "queue")
+    if not queue:
+        sys.stderr.write(" Unable to create queue \n")
+
+    # Create H.264 encoder
+    print("Creating Encoder \n")
+    encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder")
+    if not encoder:
+        sys.stderr.write(" Unable to create nvv4l2h264enc \n")
+
+    # Create filesink to save the output video
+    print("Creating File Sink \n")
+    filesink = Gst.ElementFactory.make("filesink", "file-sink")
+    if not filesink:
+        sys.stderr.write(" Unable to create filesink \n")
+
+    # Set properties of filesink
+    filesink.set_property("location", "/home/nvtse/file.h264")
+
     # Finally render the osd output
     if is_aarch64():
         print("Creating nv3dsink \n")
@@ -208,7 +232,11 @@ def main(args):
     pipeline.add(pgie)
     pipeline.add(nvvidconv)
     pipeline.add(nvosd)
-    pipeline.add(sink)
+    pipeline.add(nvpreencconv)
+    pipeline.add(queue)
+    pipeline.add(encoder)
+    pipeline.add(filesink)
+    # pipeline.add(sink)

     # we link the elements together
     # file-source -> h264-parser -> nvh264-decoder ->
@@ -227,7 +255,10 @@ def main(args):
     streammux.link(pgie)
     pgie.link(nvvidconv)
     nvvidconv.link(nvosd)
-    nvosd.link(sink)
+    nvosd.link(nvpreencconv)
+    nvpreencconv.link(queue)
+    queue.link(encoder)
+    encoder.link(filesink)

     # create an event loop and feed gstreamer bus mesages to it
     loop = GLib.MainLoop()
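
For context on why the patch appears to work: nvdsosd emits RGBA buffers in NVMM memory while nvv4l2h264enc expects NV12/I420, so the added nvvideoconvert performs that conversion and the queue decouples the OSD from the encoder. Dropping pipeline.add(sink) and nvosd.link(sink) also removes the unused render sink, leaving nvosd with a single downstream branch. Note the patch writes an elementary .h264 stream; to get a playable .mp4, add a parser and container muxer as sketched earlier in the thread.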

Worked like a charm, thank you!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.