Object detection in RTSP live stream: Kafka integration problem

Callback function for deep-copying an NvDsEventMsgMeta struct:

def meta_copy_func(data, user_data):
    # Cast data to pyds.NvDsUserMeta
    user_meta = pyds.NvDsUserMeta.cast(data)
    src_meta_data = user_meta.user_meta_data
    # Cast src_meta_data to pyds.NvDsEventMsgMeta
    srcmeta = pyds.NvDsEventMsgMeta.cast(src_meta_data)

    # Duplicate the memory contents of srcmeta to dstmeta.
    # First use pyds.get_ptr() to get the C address of srcmeta, then
    # use pyds.memdup() to allocate dstmeta and copy srcmeta into it.
    # pyds.memdup returns the C address of the allocated duplicate.
    dstmeta_ptr = pyds.memdup(pyds.get_ptr(srcmeta), sys.getsizeof(pyds.NvDsEventMsgMeta))
    # Cast the duplicated memory to pyds.NvDsEventMsgMeta
    dstmeta = pyds.NvDsEventMsgMeta.cast(dstmeta_ptr)

    # Duplicate the contents of the ts field. Note that reading srcmeta.ts
    # returns its C address, which allows memory operations to be
    # performed on it.
    dstmeta.ts = pyds.memdup(srcmeta.ts, MAX_TIME_STAMP_LEN + 1)

    # Copy the sensorStr. This field is a string property.
    # The getter (read) returns its C address. The setter (write)
    # takes a string as input, allocates a string buffer and copies
    # the input string into it.
    # pyds.get_string() takes the C address of a string and returns
    # a reference to a Python string object; the assignment inside the
    # binder copies the content.
    dstmeta.sensorStr = pyds.get_string(srcmeta.sensorStr)

    if srcmeta.objSignature.size > 0:
        dstmeta.objSignature.signature = pyds.memdup(srcmeta.objSignature.signature,
                                                     srcmeta.objSignature.size)
        dstmeta.objSignature.size = srcmeta.objSignature.size

    if srcmeta.extMsgSize > 0:
        if srcmeta.objType == pyds.NvDsObjectType.NVDS_OBJECT_TYPE_VEHICLE:
            srcobj = pyds.NvDsVehicleObject.cast(srcmeta.extMsg)
            obj = pyds.alloc_nvds_vehicle_object()
            obj.license = pyds.get_string(srcobj.license)
            obj.region = pyds.get_string(srcobj.region)
            dstmeta.extMsg = obj
            dstmeta.extMsgSize = sys.getsizeof(pyds.NvDsVehicleObject)
        if srcmeta.objType == pyds.NvDsObjectType.NVDS_OBJECT_TYPE_PERSON:
            srcobj = pyds.NvDsPersonObject.cast(srcmeta.extMsg)
            obj = pyds.alloc_nvds_person_object()
            obj.age = srcobj.age
            obj.gender = pyds.get_string(srcobj.gender)
            obj.cap = pyds.get_string(srcobj.cap)
            obj.hair = pyds.get_string(srcobj.hair)
            obj.apparel = pyds.get_string(srcobj.apparel)
            dstmeta.extMsg = obj
            # Must be the size of NvDsPersonObject here, not NvDsVehicleObject
            dstmeta.extMsgSize = sys.getsizeof(pyds.NvDsPersonObject)

    return dstmeta
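The release callback registered as meta_free_func is referenced later but never shown. A minimal sketch along the lines of the deepstream-test4 sample (assuming the same pyds bindings and the fields copied above) would be:

```python
def meta_free_func(data, user_data):
    # Cast back to the event msg meta that meta_copy_func duplicated
    user_meta = pyds.NvDsUserMeta.cast(data)
    srcmeta = pyds.NvDsEventMsgMeta.cast(user_meta.user_meta_data)

    # pyds.free_buffer takes the C address of a buffer and frees it
    pyds.free_buffer(srcmeta.ts)
    pyds.free_buffer(srcmeta.sensorStr)

    if srcmeta.objSignature.size > 0:
        pyds.free_buffer(srcmeta.objSignature.signature)
        srcmeta.objSignature.size = 0

    if srcmeta.extMsgSize > 0:
        if srcmeta.objType == pyds.NvDsObjectType.NVDS_OBJECT_TYPE_VEHICLE:
            obj = pyds.NvDsVehicleObject.cast(srcmeta.extMsg)
            pyds.free_buffer(obj.license)
            pyds.free_buffer(obj.region)
        if srcmeta.objType == pyds.NvDsObjectType.NVDS_OBJECT_TYPE_PERSON:
            obj = pyds.NvDsPersonObject.cast(srcmeta.extMsg)
            pyds.free_buffer(obj.gender)
            pyds.free_buffer(obj.cap)
            pyds.free_buffer(obj.hair)
            pyds.free_buffer(obj.apparel)
        # extMsg itself was allocated with a g_malloc-based allocator
        pyds.free_gbuffer(srcmeta.extMsg)
        srcmeta.extMsgSize = 0
```

Every buffer duplicated in the copy callback must be freed here, otherwise the pipeline leaks memory on every frame with detections.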


Setting the callbacks in the event msg meta (in the probe function, after filling msg_meta):

                # Set callbacks in the event msg meta. The bindings layer
                # will wrap these callables in C functions. Currently only one
                # set of callbacks is supported.
                user_event_meta = pyds.nvds_acquire_user_meta_from_pool(batch_meta)
                if user_event_meta:
                    user_event_meta.user_meta_data = msg_meta
                    user_event_meta.base_meta.meta_type = pyds.NvDsMetaType.NVDS_EVENT_MSG_META
                    pyds.set_user_copyfunc(user_event_meta, meta_copy_func)
                    pyds.set_user_releasefunc(user_event_meta, meta_free_func)
                    pyds.nvds_add_user_meta_to_frame(frame_meta, user_event_meta)
                else:
                    print("Error in attaching event meta to buffer\n")

I am trying to send event metadata to Kafka, but I am getting a segmentation fault after running for some time. Can anyone help? After debugging I got this error:

Frame Number = 2578 Vehicle Count = 0 Person Count = 1
Frame Number = 2579 Vehicle Count = 0 Person Count = 1
Frame Number = 2580 Vehicle Count = 0 Person Count = 1
Frame Number = 2581 Vehicle Count = 0 Person Count = 1
Fatal Python error: Segmentation fault

Current thread 0x00007f0bde477700 (most recent call first):
File "kafka_deep4.py", line 73 in meta_copy_func

Thread 0x00007f0c4e4dd740 (most recent call first):
File "/usr/lib/python3/dist-packages/gi/overrides/GLib.py", line 585 in run
File "kafka_deep4.py", line 598 in main
File "kafka_deep4.py", line 661 in <module>
Segmentation fault

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description)

Hardware Platform: Jetson
DeepStream Version: 5.0
JetPack Version: 7.6.5
TensorRT Version: 7.0.0-1+cuda10.2
Issue type: segmentation fault after running for some frames
I used the rtsp-out sample app integrated with Kafka; it sends a message to Kafka when an object is detected in the live stream from CCTV cameras.
I used import faulthandler; faulthandler.enable() to find the error.
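As an aside, faulthandler can be verified without waiting for a crash: besides installing handlers for fatal signals, it can dump the current tracebacks on demand. A minimal stdlib-only sketch:

```python
import faulthandler
import tempfile

# Install handlers for SIGSEGV, SIGFPE, SIGABRT, SIGBUS and SIGILL so a
# fatal signal dumps the Python traceback of every thread before exit.
faulthandler.enable()

# dump_traceback() writes the same style of report on demand, which is a
# quick way to confirm the handler is active. It writes directly to a
# file descriptor, so the target file must have a real fileno().
with tempfile.TemporaryFile(mode="w+") as f:
    faulthandler.dump_traceback(file=f)
    f.seek(0)
    report = f.read()

print("Current thread" in report)
```

The on-demand report begins with the calling thread, in the same "Current thread 0x... (most recent call first):" format seen in the crash log above.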

Can you post the “kafka_deep4.py” for reproducing the problem?

I resolved the segmentation fault, but now I am not able to stream the output to VLC media player. Attaching kafka_deep4.py:
kafka_deep4.py (25.1 KB)
I was accessing it through rtsp://ip address:8554/ds-test

Can you upload the “deepstream_app_config.txt” in your code?

attaching config files here:
dstest4_pgie_config.txt (3.4 KB)
msgconv_config.txt (1.9 KB)

The code looks fine. Can you run the sample deepstream_test1_rtsp_out.py in deepstream_python_apps/apps/deepstream-test1-rtsp-out at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub? Does it work on your board?

I notice that in your code there are these lines:

    if not tee_msg_pad or not tee_render_pad:
        sys.stderr.write("Unable to get request pads\n")
   # sink_pad=queue2.get_static_pad("sink")

The tee element's two src pads are connected to the same element. Please make sure your code is correct.
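For comparison, here is a minimal sketch of linking a tee's two request pads into two different branches. This is not the poster's code; the element names (queue1/queue2) are placeholders, and the usual GStreamer Python bindings are assumed:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.Pipeline.new("tee-demo")
tee = Gst.ElementFactory.make("tee", "nvsink-tee")
queue1 = Gst.ElementFactory.make("queue", "msg-queue")     # branch 1: message converter/broker
queue2 = Gst.ElementFactory.make("queue", "render-queue")  # branch 2: encoder/RTSP sink
for elem in (tee, queue1, queue2):
    pipeline.add(elem)

# Each branch needs its own request pad from the tee; linking both
# request pads into the same downstream element breaks the pipeline.
tee_msg_pad = tee.get_request_pad("src_%u")
tee_render_pad = tee.get_request_pad("src_%u")
if not tee_msg_pad or not tee_render_pad:
    raise RuntimeError("Unable to get request pads")

tee_msg_pad.link(queue1.get_static_pad("sink"))     # tee src #1 -> msg branch
tee_render_pad.link(queue2.get_static_pad("sink"))  # tee src #2 -> render branch
```

Note that `get_request_pad` was renamed to `request_pad_simple` in GStreamer 1.20; on the GStreamer version shipped with JetPack for DeepStream 5.0, `get_request_pad` is the one available.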

The deepstream-test1-rtsp-out application is running, but the streaming output is not visible in VLC media player. VLC gives the following error:
Connection failed:
VLC could not connect to “ip address:8554”.
Your input can’t be opened:
VLC is unable to open the MRL ‘rtsp://ip address:8554/ds-test’. Check the log for details.

Are you checking with local vlc player or from another machine?
Please check your network status.
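One quick way to narrow this down from the client machine is to check whether the RTSP server's TCP port is reachable at all. A minimal stdlib sketch (host and port are placeholders for your setup):

```python
import socket


def rtsp_port_open(host, port=8554, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds.

    This only checks network reachability of the RTSP server's port
    (firewall, routing, server actually listening); it does not
    validate the RTSP stream itself.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False from the machine running VLC, the problem is at the network level (firewall, wrong interface, NAT) rather than in the DeepStream pipeline itself.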

I run the application on a remote machine and access it from my local machine in VLC media player.
deepstream-test1-rtsp-out is running, but the output stream is not visible in VLC.
Error shown in VLC media player:
Connection failed:
VLC could not connect to “”.
Your input can’t be opened:
VLC is unable to open the MRL ‘rtsp://’. Check the log for details.

Please check your network status.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.