GPU Context Switching Issue in DeepStream with CuPy Post-Processing

Here is the configuration I am using:

• Hardware Platform: x86 / Orin
• DeepStream Version: 7
• JetPack Version: 5.1.2
• TensorRT Version: NA
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type: question

Hi all

I am working on a project involving DeepStream’s deepstream_imagedata-multistream_cupy application, which builds on top of the deepstream-imagedata-multistream sample. This application allows for GPU-based image buffer access using CuPy arrays and supports multistream sources with uridecodebin.

I have introduced some GPU-heavy computation to simulate post-processing in the same thread as the DeepStream pipeline. Specifically, I added the following lines (at line 125 of the sample) to mimic that load:

A = cp.zeros((1000, 1000))
A + A

When this computation is executed, both video smoothness and AI inference performance degrade significantly.

Observations:

  1. Running the same computationally expensive task in a separate process (launched from a new command line) does not affect the video performance or AI inference.
  2. Attempting to perform the GPU computation:
  • Inside a probe function: Causes the same issue.
  • In a separate thread: Also causes degradation, similar to running it in the main pipeline.

This suggests that the issue lies in how the GPU tasks share resources, and I suspect it might be due to a lack of proper context switching or resource isolation.

Questions:

  1. Are there any CUDA primitives or configurations that can help separate GPU computations (e.g., post-processing) from the DeepStream pipeline tasks to avoid such interference? (A sketch of the kind of thing I mean follows this list.)
  2. Is it possible to achieve this without offloading the computations to a separate process? If so, how?
  3. What are the recommended practices for handling GPU-intensive post-processing in DeepStream while maintaining video smoothness and inference quality?
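To make question 1 concrete, the two knobs I know of on the CuPy side are a dedicated non-blocking stream and a cap on CuPy's memory pool, sketched below. This is only my understanding; whether it is enough to stop the interference is exactly what I am asking, and the 256 MB limit is an arbitrary placeholder.

import cupy as cp

# Cap CuPy's default pool so post-processing cannot exhaust the GPU
# memory that the decoder and inference engines need (256 MB is an
# arbitrary placeholder, not a recommendation).
cp.get_default_memory_pool().set_limit(size=256 * 1024 * 1024)

# Dedicated non-blocking stream, so post-processing kernels are not
# serialized behind work submitted on the default (null) stream.
post_stream = cp.cuda.Stream(non_blocking=True)

with post_stream:
    A = cp.zeros((1000, 1000))
    A + A  # result discarded; the kernel launches are the load
post_stream.synchronize()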

I would greatly appreciate any insights or suggestions to resolve this issue.

Below is the code I modified from the NVIDIA sample.

#!/usr/bin/env python3

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2022-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

import sys

sys.path.append('../')
import gi

gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst
from ctypes import *
import math
from common.platform_info import PlatformInfo
from common.bus_call import bus_call
from common.FPS import PERF_DATA
import pyds
import argparse

import ctypes
import cupy as cp

import time

perf_data = None

MAX_DISPLAY_LEN = 64
PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3
MUXER_OUTPUT_WIDTH = 1920
MUXER_OUTPUT_HEIGHT = 1080
MUXER_BATCH_TIMEOUT_USEC = 33000
TILED_OUTPUT_WIDTH = 1920
TILED_OUTPUT_HEIGHT = 1080
GST_CAPS_FEATURES_NVMM = "memory:NVMM"
pgie_classes_str = ["Vehicle", "TwoWheeler", "Person", "RoadSign"]



# tiler_sink_pad_buffer_probe will extract metadata received on the tiler sink pad
# and modify the frame buffer using cupy
def tiler_sink_pad_buffer_probe(pad, info, u_data):
    frame_number = 0
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return Gst.PadProbeReturn.OK

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))

    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.NvDsFrameMeta.cast()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        frame_number = frame_meta.frame_num
        l_obj = frame_meta.obj_meta_list
        num_rects = frame_meta.num_obj_meta
        obj_counter = {
            PGIE_CLASS_ID_VEHICLE: 0,
            PGIE_CLASS_ID_PERSON: 0,
            PGIE_CLASS_ID_BICYCLE: 0,
            PGIE_CLASS_ID_ROADSIGN: 0
        }
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            obj_counter[obj_meta.class_id] += 1
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        # Create dummy owner object to keep memory for the image array alive
        owner = None
        # Getting Image data using nvbufsurface
        # the input should be address of buffer and batch_id
        # Retrieve dtype, shape of the array, strides, pointer to the GPU buffer, and size of the allocated memory
        data_type, shape, strides, dataptr, size = pyds.get_nvds_buf_surface_gpu(hash(gst_buffer), frame_meta.batch_id)
        # dataptr is of type PyCapsule -> Use ctypes to retrieve the pointer as an int to pass into cupy
        ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p
        ctypes.pythonapi.PyCapsule_GetPointer.argtypes = [ctypes.py_object, ctypes.c_char_p]
        # Get pointer to buffer and create UnownedMemory object from the gpu buffer
        c_data_ptr = ctypes.pythonapi.PyCapsule_GetPointer(dataptr, None)
        unownedmem = cp.cuda.UnownedMemory(c_data_ptr, size, owner)
        # Create MemoryPointer object from unownedmem, at index 0
        memptr = cp.cuda.MemoryPointer(unownedmem, 0)
        # Create cupy array to access the image data. This array is in GPU buffer
        n_frame_gpu = cp.ndarray(shape=shape, dtype=data_type, memptr=memptr, strides=strides, order='C')
        # Initialize cuda.stream object for stream synchronization
        stream = cp.cuda.stream.Stream(null=True) # Use null stream to prevent other cuda applications from making illegal memory access of buffer
        # Modify the red channel to add blue tint to image
        with stream:
            # n_frame_gpu[:, :, 0] = 0.5 * n_frame_gpu[:, :, 0] + 0.5
            #==============================new addition======================================
            t0 = time.time()
            # Busy-loop for at least 20 ms of heavy GPU work. Note that the
            # timer is only checked after a full pass of 500 iterations, so
            # the actual time spent per frame can exceed 20 ms.
            while time.time() - t0 < 0.02:
                for i in range(500):
                    A = cp.zeros((1000, 1000))
                    A + A
                # time.sleep(0.001)  # comment out the two lines above and uncomment this to see improved performance
            #=================================end============================================
        stream.synchronize()

        print("Frame Number=", frame_number, "Number of Objects=",num_rects,"Vehicle_count=",obj_counter[PGIE_CLASS_ID_VEHICLE],"Person_count=",obj_counter[PGIE_CLASS_ID_PERSON])
        # Get frame rate through this probe
        stream_index = "stream{0}".format(frame_meta.pad_index)
        global perf_data
        perf_data.update_fps(stream_index)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK


def cb_newpad(decodebin, decoder_src_pad, data):
    print("In cb_newpad\n")
    caps = decoder_src_pad.get_current_caps()
    gststruct = caps.get_structure(0)
    gstname = gststruct.get_name()
    source_bin = data
    features = caps.get_features(0)

    # Need to check if the pad created by the decodebin is for video and not
    # audio.
    if (gstname.find("video") != -1):
        # Link the decodebin pad only if decodebin has picked nvidia
        # decoder plugin nvdec_*. We do this by checking if the pad caps contain
        # NVMM memory features.
        if features.contains("memory:NVMM"):
            # Get the source bin ghost pad
            bin_ghost_pad = source_bin.get_static_pad("src")
            if not bin_ghost_pad.set_target(decoder_src_pad):
                sys.stderr.write("Failed to link decoder src pad to source bin ghost pad\n")
        else:
            sys.stderr.write(" Error: Decodebin did not pick nvidia decoder plugin.\n")


def decodebin_child_added(child_proxy, Object, name, user_data):
    print("Decodebin child added:", name, "\n")
    if name.find("decodebin") != -1:
        Object.connect("child-added", decodebin_child_added, user_data)

    if "source" in name:
        source_element = child_proxy.get_by_name("source")
        if source_element.find_property('drop-on-latency') is not None:
            Object.set_property("drop-on-latency", True)

def create_source_bin(index, uri):
    print("Creating source bin")

    # Create a source GstBin to abstract this bin's content from the rest of the
    # pipeline
    bin_name = "source-bin-%02d" % index
    print(bin_name)
    nbin = Gst.Bin.new(bin_name)
    if not nbin:
        sys.stderr.write(" Unable to create source bin \n")

    # Source element for reading from the uri.
    # We will use decodebin and let it figure out the container format of the
    # stream and the codec and plug the appropriate demux and decode plugins.
    uri_decode_bin = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    if not uri_decode_bin:
        sys.stderr.write(" Unable to create uri decode bin \n")
    # We set the input uri to the source element
    uri_decode_bin.set_property("uri", uri)
    # Connect to the "pad-added" signal of the decodebin which generates a
    # callback once a new pad for raw data has been created by the decodebin
    uri_decode_bin.connect("pad-added", cb_newpad, nbin)
    uri_decode_bin.connect("child-added", decodebin_child_added, nbin)

    # We need to create a ghost pad for the source bin which will act as a proxy
    # for the video decoder src pad. The ghost pad will not have a target right
    # now. Once the decode bin creates the video decoder and generates the
    # cb_newpad callback, we will set the ghost pad target to the video decoder
    # src pad.
    Gst.Bin.add(nbin, uri_decode_bin)
    bin_pad = nbin.add_pad(Gst.GhostPad.new_no_target("src", Gst.PadDirection.SRC))
    if not bin_pad:
        sys.stderr.write(" Failed to add ghost pad in source bin \n")
        return None
    return nbin

def main(args):
    global perf_data
    perf_data = PERF_DATA(len(args))
    number_sources = len(args)

    # Standard GStreamer initialization
    Gst.init(None)

    # Create gstreamer elements */
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()
    is_live = False

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")
    print("Creating streammux \n ")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    pipeline.add(streammux)
    for i in range(number_sources):
        print("Creating source_bin ", i, " \n ")
        uri_name = args[i]
        if uri_name.find("rtsp://") == 0:
            is_live = True
        source_bin = create_source_bin(i, uri_name)
        if not source_bin:
            sys.stderr.write("Unable to create source bin \n")
        pipeline.add(source_bin)
        padname = "sink_%u" % i
        sinkpad = streammux.request_pad_simple(padname)
        if not sinkpad:
            sys.stderr.write("Unable to create sink pad bin \n")
        srcpad = source_bin.get_static_pad("src")
        if not srcpad:
            sys.stderr.write("Unable to create src pad bin \n")
        srcpad.link(sinkpad)
    print("Creating Pgie \n ")
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")
    # Add nvvidconv1 and filter1 to convert the frames to RGBA
    # which is easier to work with in Python.
    print("Creating nvvidconv1 \n ")
    nvvidconv1 = Gst.ElementFactory.make("nvvideoconvert", "convertor1")
    if not nvvidconv1:
        sys.stderr.write(" Unable to create nvvidconv1 \n")
    print("Creating filter1 \n ")
    caps1 = Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA")
    filter1 = Gst.ElementFactory.make("capsfilter", "filter1")
    if not filter1:
        sys.stderr.write(" Unable to get the caps filter1 \n")
    filter1.set_property("caps", caps1)
    print("Creating tiler \n ")
    tiler = Gst.ElementFactory.make("nvmultistreamtiler", "nvtiler")
    if not tiler:
        sys.stderr.write(" Unable to create tiler \n")
    print("Creating nvvidconv \n ")
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")
    print("Creating nvosd \n ")
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    
    if platform_info.is_platform_aarch64():
        print("Creating nv3dsink \n")
        sink = Gst.ElementFactory.make("nv3dsink", "nv3d-sink")
    else:
        print("Creating EGLSink \n")
        sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
    if not sink:
        sys.stderr.write(" Unable to create egl sink \n")

    if is_live:
        print("Atleast one of the sources is live")
        streammux.set_property('live-source', 1)

    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', number_sources)
    streammux.set_property('batched-push-timeout', MUXER_BATCH_TIMEOUT_USEC)
    pgie.set_property('config-file-path', "dstest_imagedata_cupy_config.txt")
    pgie_batch_size = pgie.get_property("batch-size")
    if (pgie_batch_size != number_sources):
        print("WARNING: Overriding infer-config batch-size", pgie_batch_size, " with number of sources ",
              number_sources, " \n")
        pgie.set_property("batch-size", number_sources)
    tiler_rows = int(math.sqrt(number_sources))
    tiler_columns = int(math.ceil((1.0 * number_sources) / tiler_rows))
    tiler.set_property("rows", tiler_rows)
    tiler.set_property("columns", tiler_columns)
    tiler.set_property("width", TILED_OUTPUT_WIDTH)
    tiler.set_property("height", TILED_OUTPUT_HEIGHT)

    sink.set_property("sync", 0)
    sink.set_property("qos", 0)

    print("Adding elements to Pipeline \n")
    pipeline.add(pgie)
    pipeline.add(tiler)
    pipeline.add(nvvidconv)
    pipeline.add(filter1)
    pipeline.add(nvvidconv1)
    pipeline.add(nvosd)
    pipeline.add(sink)

    print("Linking elements in the Pipeline \n")
    streammux.link(pgie)
    pgie.link(nvvidconv1)
    nvvidconv1.link(filter1)
    filter1.link(tiler)
    tiler.link(nvvidconv)
    nvvidconv.link(nvosd)
    nvosd.link(sink)

    # create an event loop and feed gstreamer bus mesages to it
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    tiler_sink_pad = tiler.get_static_pad("sink")
    if not tiler_sink_pad:
        sys.stderr.write(" Unable to get src pad \n")
    else:
        tiler_sink_pad.add_probe(Gst.PadProbeType.BUFFER, tiler_sink_pad_buffer_probe, 0)
        # perf callback function to print fps every 5 sec
        GLib.timeout_add(5000, perf_data.perf_print_callback)

    # List the sources
    print("Now playing...")
    for i, source in enumerate(args):
        print(i, ": ", source)

    print("Starting pipeline \n")
    # start playback and listen to events
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    print("Exiting app\n")
    pipeline.set_state(Gst.State.NULL)

def parse_args():
    parser = argparse.ArgumentParser(prog="deepstream_imagedata-multistream_cupy.py", 
                description="deepstream-imagedata-multistream-cupy takes multiple URI streams as input" \
                    " and retrieves the image buffer from GPU as a cupy array for in-place modification")
    parser.add_argument(
        "-i",
        "--input",
        help="Path to input streams",
        nargs="+",
        metavar="URIs",
        default=["a"],
        required=True,
    )

    args = parser.parse_args()
    stream_paths = args.input
    return stream_paths


if __name__ == '__main__':
    platform_info = PlatformInfo()
    if platform_info.is_integrated_gpu():
        sys.stderr.write ("\nThis app is not currently supported on integrated GPU. Exiting...\n\n\n\n")
        sys.exit(1)
    stream_paths = parse_args()
    sys.exit(main(stream_paths))

Essentially, we are looking for a way to separate the heavy CuPy computation from the video streaming, so that we get a smooth video and analytics experience.

Note: We cannot use Triton or nvinfer to do our inference. We are doing our inference using CuPy extraction.

The pad probe function is a blocking callback: it stalls the pipeline until the function returns.
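So if heavy work must be triggered per frame, it should be handed off to a worker and the probe should return immediately. A minimal sketch of that hand-off (the queue plumbing here is illustrative only, not code from any sample):

import queue
import threading

import cupy as cp
import gi

gi.require_version('Gst', '1.0')
from gi.repository import Gst

work_q = queue.Queue(maxsize=4)

def worker():
    # The worker owns its own non-blocking CUDA stream.
    stream = cp.cuda.Stream(non_blocking=True)
    while True:
        work_q.get()  # block until a per-frame job arrives
        with stream:
            A = cp.zeros((1000, 1000))
            A + A

threading.Thread(target=worker, daemon=True).start()

def buffer_probe(pad, info, u_data):
    try:
        work_q.put_nowait(None)  # enqueue the job; drop it if the queue is full
    except queue.Full:
        pass
    return Gst.PadProbeReturn.OK  # return immediately; the pipeline keeps flowing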

How did you run it in a separate thread?

For CUDA primitives, please raise a topic in the CUDA forum: Latest CUDA/CUDA Programming and Performance topics - NVIDIA Developer Forums

threading.Thread(target=self.Method, args=(), daemon=True).start()

where Method is

def Method(self):
    for _ in range(10000):
        A = cp.zeros((1000, 1000))
        A + A

I’ve tried the multi-thread method; there is no impact on the main thread, and the pipeline performance is fine. Please make sure the calculation thread uses a separate CUDA stream context.
deepstream_imagedata-multistream_cupy.py (15.1 KB)
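The pattern is roughly the following (a minimal sketch of the idea; the attached script may differ in details, and the stop_event plumbing is only illustrative):

import threading

import cupy as cp

def heavy_worker(stop_event):
    # Dedicated non-blocking stream, so the worker's kernels are not
    # serialized against work on the pipeline's default (null) stream.
    worker_stream = cp.cuda.Stream(non_blocking=True)
    with worker_stream:
        while not stop_event.is_set():
            A = cp.zeros((1000, 1000))
            A + A

stop_event = threading.Event()
threading.Thread(target=heavy_worker, args=(stop_event,), daemon=True).start()
# ... run the GStreamer pipeline as usual, then call stop_event.set() on shutdown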

Hi @Fiona.Chen
That’s not the case on my machine.

Please see the two versions I compared, with these lines

        with stream:
            # n_frame_gpu[:, :, 0] = 0.5 * n_frame_gpu[:, :, 0] + 0.5
            #==============================new addition======================================
            t0 = time.time()
            # Busy-loop for at least 20 ms of heavy GPU work. Note that the
            # timer is only checked after a full pass of 500 iterations, so
            # the actual time spent per frame can exceed 20 ms.
            while time.time() - t0 < 0.02:
                for i in range(500):
                    A = cp.zeros((1000, 1000))
                    A + A
                # time.sleep(0.001)  # comment out the two lines above and uncomment this to see improved performance
            #=================================end============================================
        stream.synchronize()

commented out versus not.

[Two screenshots, both captioned “WITH CUPY OPERATION”]

Is there a way to separate the CUDA contexts so that the streaming and the expensive CuPy operation don’t interfere with each other?

Thank you for the support!!

My code shows that it is OK to run your “A+A” calculation in another thread. The pipeline performance will not be impacted.

My sample code uses a different CUDA stream context in the separate thread to calculate your “A+A”.

deepstream_imagedata-multistream_cupy.py (15.1 KB)

Hi @Fiona.Chen

It is not working on my machine!

Please see the difference.

Also, see my machine configuration.

Please advise on what to do.

I was hoping for something like a PyCUDA driver context, or multiprocessing within CuPy.

With my script?

Yes, you are right: with your script.

Lines 362 and 363 commented out versus on.

Hi, @ajithkumar.ak95

Your looped “A+A” calculation is a heavy GPU-compute and GPU-memory operation. The Python sample has an FPS calculation function; you can check the FPS statistics in the log.

I compared three cases on a GeForce RTX 3060 Ti machine.

Case 1: The original deepstream_imagedata-multistream_cupy sample.
FPS is around 59.9.
Case 2: deepstream_imagedata-multistream_cupy with the “A+A” calculation in another thread (the code I provided to you).
FPS is around 43.35.
Case 3: The original deepstream_imagedata-multistream_cupy sample plus a separate Python script calculating “A+A” in a loop, with the two scripts run in different terminals at the same time.
FPS is around 34.56.

The heavy GPU load impacts deepstream_imagedata-multistream_cupy performance no matter whether the CuPy calculation runs in another thread or in another process. It is a hardware resource conflict, not a problem with the way you wrote the code.

Dear @Fiona.Chen

It is not the hardware. We are able to run the video smoothly alongside a parallel process (launched from a completely different terminal or container instance). We see GPU utilisation reach 100%, but the videos stay smooth.
This means that with some kind of context separation we should be able to run smooth video and heavy calculations together.

Please let me know how to do it: how to make it run as smoothly as possible, how to debug and monitor these conflicts, and how to detect the bottlenecks.

In my testing, even with the CuPy calculation in another process, deepstream_imagedata-multistream_cupy may not be smooth at some moments.

We use PyCUDA to switch between multiple CUDA contexts (see the sketch below), and the interference is low; it is clearly an improvement.
We need a way to send a CuPy array to a different process so that it causes no stuttering in the playing video. I am okay with sacrificing time on the heavy computation, but the video quality should not be affected. Do you know how to do it?
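Roughly what we do today (a minimal sketch; the push/pop pattern is standard PyCUDA, but coordinating a PyCUDA-created context with CuPy, which normally works in the device's primary context, is our own experiment):

import pycuda.driver as cuda

cuda.init()
dev = cuda.Device(0)

# Dedicated context for the heavy computation; the pipeline keeps its own.
compute_ctx = dev.make_context()  # becomes current on this thread
try:
    pass  # ... launch the heavy kernels while compute_ctx is current ...
finally:
    compute_ctx.pop()  # restore the previously current context

# Later, to run more work in that context:
compute_ctx.push()
# ... more work ...
compute_ctx.pop()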

Sharing a CuPy array between processes over IPC is not supported.

NvBufSurface IPC sharing is available on the Jetson platform only (and has no Python bindings). Please refer to /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-ipc-test on your Jetson device.


https://docs.cupy.dev/en/stable/reference/generated/cupy.cuda.runtime.ipcOpenMemHandle.html

What about this?
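Something like the following minimal sketch is what I have in mind (my assumptions: the handle is sent to the other process over a pipe or queue, and I allocate with raw cudaMalloc because I have not verified that CuPy pool allocations are IPC-exportable):

import cupy as cp
import cupy.cuda.runtime as rt

NBYTES = 1000 * 1000 * 4  # a 1000x1000 float32 buffer

# --- producer process ---
ptr = rt.malloc(NBYTES)            # raw cudaMalloc, outside CuPy's pool
handle = rt.ipcGetMemHandle(ptr)   # opaque handle; send it via a pipe/queue

# --- consumer process (after receiving `handle`) ---
peer_ptr = rt.ipcOpenMemHandle(handle)
mem = cp.cuda.UnownedMemory(peer_ptr, NBYTES, None)
arr = cp.ndarray((1000, 1000), dtype=cp.float32,
                 memptr=cp.cuda.MemoryPointer(mem, 0))
# ... run the heavy computation on `arr` without copying through the host ...
rt.ipcCloseMemHandle(peer_ptr)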

You may consult the CUDA forum: Latest CUDA/CUDA Programming and Performance topics - NVIDIA Developer Forums

CUDA C++ Programming Guide