Jetson Nano fails to save each RTSP camera's detected objects in its respective folder

Hardware Platform: Jetson Nano
DeepStream Version: 6.0
JetPack Version: 4.6.4
TensorRT Version: 8.6.2.3

Issue Type: Question

Issue Description: I am facing an issue on Jetson Nano when using multiple RTSP camera streams (e.g., 4, 6, 8) integrated with an object detection model. Detected objects from a specific RTSP camera stream are not being saved in the correct folder; instead, they are saved in folders designated for other RTSP camera streams. This issue does not occur on Orin AGX or Orin Nano, where the objects are saved correctly in their respective folders according to the RTSP camera streams.

Requirement: I need to ensure that each RTSP camera stream’s detected objects are saved in their designated folders without overlap from other camera streams on the Jetson Nano platform.

Is there a known issue with Jetson Nano when handling multiple RTSP streams in this context? Is there a specific configuration I should be aware of to resolve this issue, or can it be resolved by moving to another DeepStream version?

  1. Do you mean that, with the same DeepStream version, the issue only happens on Jetson Nano while the application works well on other devices?
  2. Could you share the whole media pipeline? Which sample are you testing or referring to? How did you save the detected objects?
  1. No. On Jetson Nano I used DeepStream 6.0, and on Orin Nano and Orin AGX I used 6.4.

  2. Yes, this mis-sorting of detected images into the wrong folders happened for me on Jetson Nano. I tried the same thing on the Orin devices and did not face this issue.

My question is: if I update the DeepStream version on Jetson Nano, could this issue be solved? Or is it a problem with this DeepStream version, or an incompatibility in processing multiple RTSP cameras?

This is the DeepStream pipeline I used, with a primary model, a secondary model, and the necessary plugins:

streammux.link(pgie)
pgie.link(tracker)
tracker.link(sgie1)
sgie1.link(nvvidconv1)
nvvidconv1.link(filter1)
filter1.link(tiler)
tiler.link(nvosd)
nvosd.link(sink)

This was with YOLOv8 as the primary model and LPD from NGC as the secondary model. Even when I ran YOLOv8 alone, without any other model in the pipeline, I faced the same mismatch: detected images were not saved in the folders named after their RTSP cameras' IDs.

Also, could you suggest a way to save the number-plate image I get from the LPD model in better quality, so that I can use it for text recognition?
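One commonly used approach (a sketch, not from this thread's code) is to crop the plate from the full-resolution frame surface rather than from the tiled output, copy it out of the NVMM-backed buffer, and write it losslessly (PNG, or JPEG at maximum quality). The helper name and paths below are illustrative:

# Hedged sketch: save an LPD plate crop at full quality. Assumes n_frame is the
# RGBA array from pyds.get_nvds_buf_surface() and obj_meta is the plate's
# NvDsObjectMeta; save_plate_crop is a hypothetical helper name.
import cv2
import numpy as np

def save_plate_crop(n_frame, obj_meta, out_path):
    r = obj_meta.rect_params
    top, left = max(0, int(r.top)), max(0, int(r.left))
    bottom = min(n_frame.shape[0], top + int(r.height))
    right = min(n_frame.shape[1], left + int(r.width))
    # Copy out of the NVMM-backed view and convert RGBA -> BGR for OpenCV.
    crop = cv2.cvtColor(np.array(n_frame[top:bottom, left:right], copy=True, order='C'),
                        cv2.COLOR_RGBA2BGR)
    if out_path.endswith('.png'):
        cv2.imwrite(out_path, crop)                                   # lossless
    else:
        cv2.imwrite(out_path, crop, [cv2.IMWRITE_JPEG_QUALITY, 100])  # max-quality JPEG

Keeping the nvstreammux width/height at the camera's native resolution also helps, since any downscale before the probe is baked into the saved crop.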

1. I don't suggest upgrading Jetson Nano to a higher DeepStream version, because the device and the higher DeepStream versions are incompatible; please refer to this compatibility table.
2. How did you capture the detected objects and designate the folders? I am wondering about the corresponding relation.
3. This code can save the whole frame; you can modify it to save the objects (see the sketch below). Please also refer to this topic for a newer solution.
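For illustration, the modification pointed at in item 3 looks roughly like this (a hedged sketch, not the sample's exact code; it assumes the probe variables from deepstream_imagedata-multistream.py, i.e. n_frame from pyds.get_nvds_buf_surface(), obj_meta from frame_meta.obj_meta_list, and cv2/numpy already imported):

# Sketch: turn a whole-frame save into a per-object save by cropping the
# object's rect_params region before writing. Folder and file names are
# illustrative.
import os
rect = obj_meta.rect_params
top, left = max(0, int(rect.top)), max(0, int(rect.left))
bottom, right = top + int(rect.height), left + int(rect.width)
obj_crop = n_frame[top:bottom, left:right]      # object region instead of the full frame
folder = f"camera_{frame_meta.pad_index}"       # folder keyed to the source pad
os.makedirs(folder, exist_ok=True)
cv2.imwrite(os.path.join(folder, f"obj_{obj_meta.object_id}.jpg"),
            cv2.cvtColor(np.array(obj_crop, copy=True, order='C'), cv2.COLOR_RGBA2BGRA))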

Thank you for your suggestion.

This is the code:

# Imports added for completeness (the common.bus_call / common.FPS paths are
# assumed from the usual deepstream_python_apps sample layout)
import sys
import os
import math
import configparser

import cv2
import numpy as np
import pyds
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

from common.bus_call import bus_call
from common.FPS import PERF_DATA

fps_streams = {}
perf_data = None

GST_CAPS_FEATURES_NVMM = "memory:NVMM"
OSD_PROCESS_MODE = 0
OSD_DISPLAY_TEXT = 0

saved_objects = {}

allowed_labels = {
    0: ['person'],
    1: ['person', 'car', 'bus', 'truck', 'motorbike', 'bicycle'],
    2: ['person', 'car', 'bus', 'truck', 'motorbike', 'bicycle'],
}

def tiler_src_pad_buffer_probe(pad, info, u_data):
    global perf_data, saved_objects

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer")
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        source_id = frame_meta.pad_index
        stream_index = "stream{0}".format(source_id)
        perf_data.update_fps(stream_index)

        # Get the frame buffer (RGBA surface for this frame within the batch)
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)

        # Define the bounding box coordinates for the ROI
        x, y, w, h = 300, 200, 1400, 680  # Example coordinates
        roi_frame = n_frame[y:y+h, x:x+w]

        # Draw a rectangle on the frame
        cv2.rectangle(n_frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

        # Process objects detected in the frame
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break

            tracker_id = obj_meta.object_id
            obj_conf = obj_meta.confidence
            obj_label = obj_meta.obj_label

            # Check if the object label is in the allowed list for this camera
            if obj_label not in allowed_labels.get(source_id, []):
                try:
                    l_obj = l_obj.next
                except StopIteration:
                    break
                continue

            # Initialize the count for a new tracker_id
            if tracker_id not in saved_objects:
                saved_objects[tracker_id] = 0

            # Proceed only if we haven't saved an image for this tracker_id
            if saved_objects[tracker_id] < 1 and obj_conf > 0.75:

                # Extract object bounding box
                rect_params = obj_meta.rect_params
                top = max(0, int(rect_params.top))
                left = max(0, int(rect_params.left))
                width = int(rect_params.width)
                height = int(rect_params.height)
                bottom = top + height
                right = left + width

                # Check if the object is completely inside the rectangle
                if (left >= x and top >= y and right <= x + w and bottom <= y + h):
                    obj_image = roi_frame[top - y:bottom - y, left - x:right - x]
                    saved_objects[tracker_id] += 1

                    # Create folder for the source if it doesn't exist
                    folder_name = f"camera_{source_id}"
                    if not os.path.exists(folder_name):
                        os.makedirs(folder_name)

                    # Save the object image with camera ID and tracker ID as the
                    # filename. The surface is RGBA; copy it out of the NVMM view
                    # and convert to BGRA so OpenCV writes the colors correctly.
                    # timestamp = datetime.now().strftime('%Y:%m:%d_%H:%M:%S:%f')
                    obj_filename = f"{folder_name}/camera_{source_id}_{tracker_id}.jpg"
                    obj_bgra = cv2.cvtColor(np.array(obj_image, copy=True, order='C'),
                                            cv2.COLOR_RGBA2BGRA)
                    cv2.imwrite(obj_filename, obj_bgra)
                    print(saved_objects)

            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK

def cb_newpad(decodebin, decoder_src_pad, data):
    print("In cb_newpad\n")
    caps = decoder_src_pad.get_current_caps()
    gststruct = caps.get_structure(0)
    gstname = gststruct.get_name()
    source_bin = data
    features = caps.get_features(0)
    print("gstname=", gstname)
    if gstname.find("video") != -1:
        print("features=", features)
        if features.contains("memory:NVMM"):
            # Get the source bin ghost pad
            bin_ghost_pad = source_bin.get_static_pad("src")
            if not bin_ghost_pad.set_target(decoder_src_pad):
                sys.stderr.write("Failed to link decoder src pad to source bin ghost pad\n")
        else:
            sys.stderr.write(" Error: Decodebin did not pick nvidia decoder plugin.\n")

def decodebin_child_added(child_proxy, Object, name, user_data):
    print("Decodebin child added:", name, "\n")
    if name.find("decodebin") != -1:
        Object.connect("child-added", decodebin_child_added, user_data)

def create_source_bin(index, uri):
    print("Creating source bin")
    bin_name = "source-bin-%02d" % index
    print(bin_name)
    nbin = Gst.Bin.new(bin_name)
    if not nbin:
        sys.stderr.write(" Unable to create source bin \n")
    uri_decode_bin = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    if not uri_decode_bin:
        sys.stderr.write(" Unable to create uri decode bin \n")
    uri_decode_bin.set_property("uri", uri)
    uri_decode_bin.connect("pad-added", cb_newpad, nbin)
    uri_decode_bin.connect("child-added", decodebin_child_added, nbin)
    Gst.Bin.add(nbin, uri_decode_bin)
    bin_pad = nbin.add_pad(Gst.GhostPad.new_no_target("src", Gst.PadDirection.SRC))
    if not bin_pad:
        sys.stderr.write(" Failed to add ghost pad in source bin \n")
        return None
    return nbin

def main():
    global vehicles_data, cam_config

    # cam_config = json.load(open('cam_config.json', 'r'))
    args = ['file:///home/jetson/Downloads/combined_models/mall_input_video.mp4',
            'file:///home/jetson/Downloads/combined_models/merged_1.mp4',
            'file:///home/jetson/Downloads/combined_models/merged_2.mp4']

    number_sources = len(args)

    global perf_data
    perf_data = PERF_DATA(number_sources)

    Gst.init(None)
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()
    is_live = False

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")
    print("Creating streammux \n ")

    MUXER_OUTPUT_WIDTH = 1920
    MUXER_OUTPUT_HEIGHT = 1080
    MUXER_BATCH_TIMEOUT_USEC = 40000
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")
    streammux.set_property('width', MUXER_OUTPUT_WIDTH)
    streammux.set_property('height', MUXER_OUTPUT_HEIGHT)
    streammux.set_property('batch-size', number_sources)
    streammux.set_property('batched-push-timeout', MUXER_BATCH_TIMEOUT_USEC)

    pipeline.add(streammux)

    for i in range(number_sources):
        print("Creating source_bin ", i, " \n ")
        uri_name = args[i]
        if uri_name.find("rtsp://") == 0:
            is_live = True
        print(uri_name)
        print("*******")
        source_bin = create_source_bin(i, uri_name)
        if not source_bin:
            sys.stderr.write("Unable to create source bin \n")
        pipeline.add(source_bin)
        padname = "sink_%u" % i
        sinkpad = streammux.get_request_pad(padname)
        if not sinkpad:
            sys.stderr.write("Unable to create sink pad bin \n")
        srcpad = source_bin.get_static_pad("src")
        if not srcpad:
            sys.stderr.write("Unable to create src pad bin \n")
        srcpad.link(sinkpad)

    print("Creating Pgie \n ")
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")
    pgie.set_property('config-file-path', "config_infer_primary_yoloV8.txt")
    pgie.set_property('batch-size', number_sources)
    # pgie.set_property('interval', 5)

    pipeline.add(pgie)
    streammux.link(pgie)

    tracker = Gst.ElementFactory.make("nvtracker", "tracker")
    if not tracker:
        sys.stderr.write(" Unable to create tracker \n")
    config = configparser.ConfigParser()
    config.read('obj_tracker.txt')
    config.sections()

    for key in config['tracker']:
        if key == 'tracker-width':
            tracker_width = config.getint('tracker', key)
            tracker.set_property('tracker-width', tracker_width)
        if key == 'tracker-height':
            tracker_height = config.getint('tracker', key)
            tracker.set_property('tracker-height', tracker_height)
        if key == 'gpu-id':
            tracker_gpu_id = config.getint('tracker', key)
            tracker.set_property('gpu_id', tracker_gpu_id)
        if key == 'll-lib-file':
            tracker_ll_lib_file = config.get('tracker', key)
            tracker.set_property('ll-lib-file', tracker_ll_lib_file)
        if key == 'll-config-file':
            tracker_ll_config_file = config.get('tracker', key)
            tracker.set_property('ll-config-file', tracker_ll_config_file)
        if key == 'enable-batch-process':
            tracker_enable_batch_process = config.getint('tracker', key)
            tracker.set_property('enable_batch_process',
                                 tracker_enable_batch_process)

    pipeline.add(tracker)
    pgie.link(tracker)

    print("Creating nvvidconv1 \n ")
    nvvidconv1 = Gst.ElementFactory.make("nvvideoconvert", "convertor1")
    if not nvvidconv1:
        sys.stderr.write(" Unable to create nvvidconv1 \n")

    pipeline.add(nvvidconv1)
    tracker.link(nvvidconv1)

    print("Creating filter1 \n ")
    caps1 = Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA")
    filter1 = Gst.ElementFactory.make("capsfilter", "filter1")
    if not filter1:
        sys.stderr.write(" Unable to get the caps filter1 \n")
    filter1.set_property("caps", caps1)

    pipeline.add(filter1)
    nvvidconv1.link(filter1)

    TILED_OUTPUT_WIDTH = 1920
    TILED_OUTPUT_HEIGHT = 1080
    print("Creating tiler \n ")
    tiler = Gst.ElementFactory.make("nvmultistreamtiler", "nvtiler")
    if not tiler:
        sys.stderr.write(" Unable to create tiler \n")
    tiler_rows = int(math.sqrt(number_sources))
    tiler_columns = int(math.ceil((1.0 * number_sources) / tiler_rows))
    tiler.set_property("rows", tiler_rows)
    tiler.set_property("columns", tiler_columns)
    tiler.set_property("width", TILED_OUTPUT_WIDTH)
    tiler.set_property("height", TILED_OUTPUT_HEIGHT)

    pipeline.add(tiler)
    filter1.link(tiler)

    print("Creating nvosd \n ")
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    pipeline.add(nvosd)
    tiler.link(nvosd)

    print("Creating nv3dsink \n")
    sink = Gst.ElementFactory.make("nv3dsink", "nvvideo-renderer")
    if not sink:
        sys.stderr.write(" Unable to create nv3dsink \n")
    sink.set_property("qos", 0)
    sink.set_property("sync", 0)

    pipeline.add(sink)
    nvosd.link(sink)

    if is_live:
        print("At least one of the sources is live")
        streammux.set_property('live-source', 1)

    # Create an event loop and feed GStreamer bus messages to it
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # Attach the probe to the tiler's sink pad, i.e. before tiling, while
    # per-stream metadata is still available
    tiler_src_pad = tiler.get_static_pad("sink")
    if not tiler_src_pad:
        sys.stderr.write(" Unable to get sink pad \n")
    else:
        tiler_src_pad.add_probe(Gst.PadProbeType.BUFFER, tiler_src_pad_buffer_probe, 0)
        GLib.timeout_add(5000, perf_data.perf_print_callback)

    # List the sources
    print("Now playing...")
    for i, source in enumerate(args):
        print(i, ": ", source)

    print("Starting pipeline \n")
    # Start playback and listen to events
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except KeyboardInterrupt:
        pass
    # Cleanup
    print("Exiting app\n")
    pipeline.set_state(Gst.State.NULL)


if __name__ == '__main__':
    main()

I suspect this might be due to resource limitations on the Jetson Nano, such as memory constraints or processing capabilities, especially when handling a high number of RTSP streams simultaneously. Could you confirm if this is a likely cause, or suggest any other potential reasons for this issue?

You get the frame data using frame_meta.batch_id, but designate the folder using frame_meta.pad_index. frame_meta.batch_id is not always the same as frame_meta.pad_index; the batch_id of a source may vary because of the round-robin batching algorithm. Please refer to the doc.
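To see this on a given device, a minimal diagnostic probe can log all three indices per frame (a sketch; it assumes the standard Gst/pyds imports and can be attached to any pad downstream of nvstreammux):

# Sketch: log batch_id vs. pad_index/source_id for every frame to verify the
# batch-slot-to-source mapping on this platform.
def log_indices_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # batch_id: slot of this frame's surface inside the batched buffer;
        # pad_index/source_id: which nvstreammux sink pad the frame came from.
        print(f"frame={frame_meta.frame_num} batch_id={frame_meta.batch_id} "
              f"pad_index={frame_meta.pad_index} source_id={frame_meta.source_id}")
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

If batch_id and pad_index diverge in this log while saved images still land in the wrong folders, the surface fetched with batch_id and the folder chosen with pad_index are the two places to cross-check.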

Yes, it is known that frame_meta.batch_id and frame_meta.pad_index are different. But the issue is that, while saving the detected objects from multiple RTSP streams, they are not being stored in their designated folders created based on the RTSP source ID. This problem seems to occur specifically with the Jetson Nano; the same code does not exhibit this mismatch in saving detected objects on the Jetson Orin Nano and AGX.

Does the round-robin behavior you describe occur only on Jetson Nano? I could not see this kind of mismatch in saving detected objects on Orin Nano and AGX.

  1. Noticing there is a lot of custom code, can you use a DeepStream sample to reproduce this issue? deepstream_imagedata-multistream.py and deepstream_imagedata-multistream_redaction.py both have this cv2.imwrite function.
  2. We fixed some bugs in later versions; for example, the fix in this line.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.