Deepstream_test_1.py doesn't work

• Jetson AGX Xavier
• DeepStream 6.0.1
• JetPack 4.6.1

I'm trying to test the example, but it doesn't work…
I followed the GitHub document, matched the versions, and checked that pyds.so is in the lib folder… but I can't find what is wrong.

How can I solve this problem?

Can you run gst-inspect-1.0 nvstreammux?

No… it doesn't work…
Could you explain that?

This means that GStreamer is unable to find the plugin named nvstreammux. So when your Python code tries to create that element, the factory cannot find it and returns a null value (None) instead, and everything downstream fails.
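As a minimal sketch of that failure mode, a hypothetical `make_or_fail` helper can wrap the factory call so a missing plugin raises immediately instead of surfacing later as a confusing error (`make_or_fail` and its message are assumptions, not part of the DeepStream sample):

```python
def make_or_fail(make, factory_name, elem_name):
    """Wrap an element-factory call (e.g. Gst.ElementFactory.make) so a
    missing plugin raises right away instead of returning None silently.
    Hypothetical helper, not part of the DeepStream sample."""
    elem = make(factory_name, elem_name)
    if elem is None:
        raise RuntimeError(
            "Unable to create '%s'; run 'gst-inspect-1.0 %s' to check "
            "whether the plugin is installed" % (factory_name, factory_name))
    return elem

# In the sample this would be used as, for example:
#   streammux = make_or_fail(Gst.ElementFactory.make, "nvstreammux", "Stream-muxer")
```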

Can you check

  1. Run gst-inspect-1.0 -b (this should list any blacklisted plugins, which could be due to inconsistent dependencies)
  2. Clear the GStreamer cache with rm -r ~/.cache/gstreamer-1.0, then run gst-inspect-1.0 nvstreammux again and see if it works

It is also possible that your deepstream installation has some problem.
You can try to use docker instead.

To verify whether the Docker setup will work, you can try this command:

docker run --rm --runtime nvidia nvcr.io/nvidia/deepstream-l4t:6.0.1-samples gst-inspect-1.0 nvstreammux

NOTE: The command above will download a docker image of roughly 1–2 GB.
leehag1224@ubuntu:~$ gst-inspect-1.0 nvstreammux
Factory Details:
  Rank                     primary (256)
  Long-name                Stream multiplexer
  Klass                    Generic
  Description              N-to-1 pipe stream multiplexing
  Author                   NVIDIA Corporation. Post on Deepstream for Tesla forum for any queries @ https://devtalk.nvidia.com/default/board/209/

Plugin Details:
  Name                     nvdsgst_multistream
  Description              NVIDIA Multistream mux/demux plugin
  Filename                 /usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_multistream.so
  Version                  6.0.1
  License                  Proprietary
  Source module            nvmultistream
  Binary package           NVIDIA Multistream Plugins
  Origin URL               http://nvidia.com/

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstNvStreamMux

Pad Templates:
  SINK template: 'sink_%u'
    Availability: On request
    Capabilities:
      video/x-raw(memory:NVMM)
                 format: { (string)NV12, (string)RGBA, (string)I420 }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]
  
  SRC template: 'src'
    Availability: Always
    Capabilities:
      video/x-raw(memory:NVMM)
                 format: { (string)NV12, (string)RGBA, (string)I420 }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]

Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SRC: 'src'
    Pad Template: 'src'

Element Properties:
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "nvstreammux0"
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  batch-size          : Maximum number of buffers in a batch
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 1024 Default: 0 
  batched-push-timeout: Timeout in microseconds to wait after the first buffer is available
			to push the batch even if the complete batch is not formed.
			Set to -1 to wait infinitely
                        flags: readable, writable
                        Integer. Range: -1 - 2147483647 Default: -1 
  width               : Width of each frame in output batched buffer. This property MUST be set.
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0 
  height              : Height of each frame in output batched buffer. This property MUST be set.
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0 
  enable-padding      : Maintain input aspect ratio when scaling by padding with black bands.
                        flags: readable, writable
                        Boolean. Default: false
  gpu-id              : Set GPU Device ID
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0 
  live-source         : Boolean property to inform muxer that sources are live.
                        flags: readable, writable
                        Boolean. Default: false
  num-surfaces-per-frame: Max number of surfaces per frame
                        flags: readable, writable
                        Unsigned Integer. Range: 1 - 4 Default: 1 
  nvbuf-memory-type   : Type of NvBufSurface Memory to be allocated for output buffers
                        flags: readable, writable, changeable only in NULL or READY state
                        Enum "GstNvBufMemoryType" Default: 0, "nvbuf-mem-default"
                           (0): nvbuf-mem-default - Default memory allocated, specific to particular platform
                           (1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory
                           (2): nvbuf-mem-cuda-device - Allocate Device cuda memory
                           (3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory
                           (4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
  compute-hw          : Compute Scaling HW
                        flags: readable, writable, controllable
                        Enum "GstNvComputeHWType" Default: 0, "Default"
                           (0): Default          - Default, GPU for Tesla, VIC for Jetson
                           (1): GPU              - GPU
                           (2): VIC              - VIC
  interpolation-method: Set interpolation methods
                        flags: readable, writable, controllable
                        Enum "GstNvInterpolationMethod" Default: 1, "Bilinear"
                           (0): Nearest          - Nearest
                           (1): Bilinear         - Bilinear
                           (2): Algo-1           - GPU - Cubic, VIC - 5 Tap
                           (3): Algo-2           - GPU - Super, VIC - 10 Tap
                           (4): Algo-3           - GPU - LanzoS, VIC - Smart
                           (5): Algo-4           - GPU - Ignored, VIC - Nicest
                           (6): Default          - GPU - Nearest, VIC - Nearest
  buffer-pool-size    : Maximum number of buffers from muxer's output pool
                        flags: readable, writable
                        Unsigned Integer. Range: 4 - 1024 Default: 4 
  attach-sys-ts       : If set to TRUE, system timestamp will be attached as ntp timestamp.
			If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached.
                        flags: readable, writable
                        Boolean. Default: true
  sync-inputs         : Boolean property to force sychronization of input frames.
                        flags: readable, writable
                        Boolean. Default: false
  max-latency         : Additional latency in live mode to allow upstream to take longer to produce buffers for the current position (in nanoseconds)
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0 
  frame-num-reset-on-eos: Reset frame numbers to 0 for a source from which EOS is received (For debugging purpose only)
                        flags: readable, writable
                        Boolean. Default: false

oohh… it works! Then, is the problem solved?

If you see this, it means it's working. You should be able to run the code.

leehag1224@ubuntu:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test1$ python3 deepstream_test_1.py ../../../../samples/streams/sample_qHD.h264
Creating Pipeline 
 
Creating Source 
 
Creating H264Parser 

Creating Decoder 

Creating EGLSink 

Playing file ../../../../samples/streams/sample_qHD.h264 
Adding elements to Pipeline 

Traceback (most recent call last):
  File "deepstream_test_1.py", line 250, in <module>
    sys.exit(main(sys.argv))
  File "deepstream_test_1.py", line 208, in main
    pipeline.add(sinkpad = streammux.get_request_pad("sink_0"))
TypeError: Gst.Bin.add() got an unexpected keyword argument 'sinkpad'
leehag1224@ubuntu:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test1$ 

noooooooooooooooo…
That solved the 'unable to ~~' problem, but I got a new error…

This seems weird.
Can you upload your code here please?

You mean the "deepstream_test_1.py" code?

Yes. It doesn't seem to match the deepstream_test_1.py on their git: deepstream_python_apps/deepstream_test_1.py at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub

ok!

#!/usr/bin/env python3

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

import sys
sys.path.append('../')
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call

import pyds

PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3


def osd_sink_pad_buffer_probe(pad,info,u_data):
    frame_number=0
    #Initializing object counter with 0.
    obj_counter = {
        PGIE_CLASS_ID_VEHICLE:0,
        PGIE_CLASS_ID_PERSON:0,
        PGIE_CLASS_ID_BICYCLE:0,
        PGIE_CLASS_ID_ROADSIGN:0
    }
    num_rects=0

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.glist_get_nvds_frame_meta()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            #frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number=frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj=frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                #obj_meta=pyds.glist_get_nvds_object_meta(l_obj.data)
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            obj_counter[obj_meta.class_id] += 1
            obj_meta.rect_params.border_color.set(0.0, 0.0, 1.0, 0.0)
            try: 
                l_obj=l_obj.next
            except StopIteration:
                break

        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta=pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}".format(frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE], obj_counter[PGIE_CLASS_ID_PERSON])

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font , font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        # Using pyds.get_string() to get display_text as string
        print(pyds.get_string(py_nvosd_text_params.display_text))
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        try:
            l_frame=l_frame.next
        except StopIteration:
            break
			
    return Gst.PadProbeReturn.OK	


def main(args):
    # Check input arguments
    if len(args) != 2:
        sys.stderr.write("usage: %s <media file or uri>\n" % args[0])
        sys.exit(1)

    # Standard GStreamer initialization
    GObject.threads_init()
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the file
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    # Since the data format in the input file is elementary h264 stream,
    # we need a h264parser
    print("Creating H264Parser \n")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    # Use nvdec_h264 for hardware accelerated decode on GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Use nvinfer to run inferencing on decoder's output,
    # behaviour of inferencing is set through config file
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    # Use convertor to convert from NV12 to RGBA as required by nvosd
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")

    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    # Finally render the osd output
    if is_aarch64():
        transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")

    print("Creating EGLSink \n")
    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
    if not sink:
        sys.stderr.write(" Unable to create egl sink \n")

    print("Playing file %s " %args[1])
    source.set_property('location', args[1])
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)
    pgie.set_property('config-file-path', "dstest1_pgie_config.txt")

    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sinkpad = streammux.get_request_pad("sink_0"))
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad.link(sinkpad)
    streammux.link(pgie)
    pgie.link(nvvidconv)
    nvvidconv.link(nvosd)
    if is_aarch64():
        nvosd.link(transform)
        transform.link(sink)
    else:
        nvosd.link(sink)

    # create an event loop and feed gstreamer bus messages to it
    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)

    # Lets add probe to get informed of the meta data generated, we add probe to
    # the sink pad of the osd element, since by that time, the buffer would have
    # had got all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")

    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    # start play back and listen to events
    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    pipeline.set_state(Gst.State.NULL)

if __name__ == '__main__':
    sys.exit(main(sys.argv))

Here is the deepstream_test_1.py code.
I didn't modify it.

I see. Can you change your line no 208 from

pipeline.add(sinkpad = streammux.get_request_pad("sink_0"))

To

sinkpad = streammux.get_request_pad("sink_0")
leehag1224@ubuntu:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test1$ python3 deepstream_test_1.py ../../../../samples/streams/sample_720p.h264
Creating Pipeline 
 
Creating Source 
 
Creating H264Parser 

Creating Decoder 

Creating EGLSink 

Playing file ../../../../samples/streams/sample_720p.h264 
Adding elements to Pipeline 

Traceback (most recent call last):
  File "deepstream_test_1.py", line 251, in <module>
    sys.exit(main(sys.argv))
  File "deepstream_test_1.py", line 209, in main
    sinkpad = streammux.get_request__pad("sink_0")
AttributeError: 'GstNvStreamMux' object has no attribute 'get_request__pad'

I tried this and got this one. umm…

There is an extra underscore in get_request__pad. It should be get_request_pad.

oohhh… I missed it…

So I modified it and tried again:

leehag1224@ubuntu:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test1$ python3 deepstream_test_1.py ../../../../samples/streams/sample_720p.h264
Creating Pipeline 
 
Creating Source 
 
Creating H264Parser 

Creating Decoder 

Creating EGLSink 

Playing file ../../../../samples/streams/sample_720p.h264 
Adding elements to Pipeline 


(python3:18010): GStreamer-WARNING **: 15:35:55.655: Trying to link elements onscreendisplay and nvegl-transform that don't share a common ancestor: nvegl-transform hasn't been added to a bin or pipeline, and onscreendisplay is in pipeline0

(python3:18010): GStreamer-WARNING **: 15:35:55.656: Trying to link elements onscreendisplay and nvegl-transform that don't share a common ancestor: nvegl-transform hasn't been added to a bin or pipeline, and onscreendisplay is in pipeline0
Starting pipeline 

Opening in BLOCKING MODE 
0:00:00.540843938 18010     0x1fdd8c70 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:02.999987020 18010     0x1fdd8c70 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:03.019470284 18010     0x1fdd8c70 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:03.019560367 18010     0x1fdd8c70 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine opened error
0:00:48.455985704 18010     0x1fdd8c70 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1942> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:48.621075685 18010     0x1fdd8c70 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Error: gst-stream-error-quark: Internal data stream error. (1): gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstFileSrc:file-source:
streaming stopped, reason not-linked (-1)

Got this one. It's better than before, but the example still doesn't work.

Trying to link elements onscreendisplay and nvegl-transform that don't share a common ancestor: nvegl-transform hasn't been added to a bin or pipeline, and onscreendisplay is in pipeline0

This should not appear.
Can you change your if at line 184 to

if is_aarch64():
    pipeline.add(transform)
    nvosd.link(transform)
    transform.link(sink)

There is one thing I want to ask:

leehag1224@ubuntu:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test1$ nvcc --version
bash: nvcc: command not found

If I installed JetPack and DeepStream properly, the version should come out, right?

nvcc is not the DeepStream version. For nvcc to work you will need to install the CUDA toolkit.
For JetPack 4.6.1 the CUDA version should be 10.2, I think.
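On JetPack the CUDA toolkit is typically installed under /usr/local/cuda, but that directory is often not on PATH, which produces "nvcc: command not found" even on a correct install. A small sketch to distinguish the two cases (the /usr/local/cuda prefix is the usual JetPack default, assumed here; `find_nvcc` is a hypothetical helper):

```python
import os
import shutil

def find_nvcc(which=shutil.which, exists=os.path.exists):
    """Return the nvcc path, checking PATH first and then the usual
    JetPack install prefix. None suggests the CUDA toolkit is missing.
    `which`/`exists` are injectable for testing; the prefix is an assumption."""
    path = which("nvcc")
    if path:
        return path
    candidate = "/usr/local/cuda/bin/nvcc"
    return candidate if exists(candidate) else None
```

If this returns "/usr/local/cuda/bin/nvcc", adding /usr/local/cuda/bin to PATH (e.g. in ~/.bashrc) is enough; nothing needs to be reinstalled.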

I see.

I changed the code in the if is_aarch64(): block:

leehag1224@ubuntu:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test1$ python3 deepstream_test_1.py ../../../../samples/streams/sample_720p.h264
Creating Pipeline 
 
Creating Source 
 
Creating H264Parser 

Creating Decoder 

Traceback (most recent call last):
  File "deepstream_test_1.py", line 255, in <module>
    sys.exit(main(sys.argv))
  File "deepstream_test_1.py", line 186, in main
    pipeline.add(transform)
NameError: name 'transform' is not defined

and now I get NameError: name 'transform' is not defined
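The thread stops here, but the NameError means pipeline.add(transform) ran where transform had never been created: the Gst.ElementFactory.make("nvegltransform", …) call and the add/link calls have to sit in the same is_aarch64() branch. A sketch of that wiring as one helper (`wire_sink` is a hypothetical name, not part of the sample):

```python
def wire_sink(pipeline, nvosd, sink, transform=None):
    """Link nvosd to the sink, inserting the EGL transform when one was
    created (i.e. on Jetson/aarch64). Keeping creation and wiring in the
    same branch avoids referencing `transform` where it does not exist.
    Hypothetical helper, not part of the DeepStream sample."""
    if transform is not None:
        pipeline.add(transform)
        nvosd.link(transform)
        transform.link(sink)
    else:
        nvosd.link(sink)

# On Jetson the caller would do, inside one guarded block:
#   if is_aarch64():
#       transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")
#       wire_sink(pipeline, nvosd, sink, transform)
#   else:
#       wire_sink(pipeline, nvosd, sink)
```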