DeepStream TensorRT Tensor Output Meta: Persisting bounding boxes

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ?

  • Build a custom DeepStream pipeline using Python bindings for object detection and drawing bounding boxes from tensor output meta.
  • Grab a PyTorch YoloV5 model and optimize it with TensorRT.
  • Take the optimized model and configure the DeepStream pipeline to use Triton server and make it load the TRT YoloV5 model.
  • Run the inference pipeline.
  • When there are target objects in the video, they are detected correctly. Later, when objects leave the scene, the network still outputs boxes from past detections ("ghost" bounding boxes) until new valid objects enter the scene. The ghost boxes are not random; they repeat old detections.
  • The TRT-optimized model has been tested with Triton Server alone and works fine; the problem appears only when we introduce DeepStream.
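To make the symptom reproducible without the model, here is a toy numpy sketch (not the actual pipeline; the layout is an assumption based on the parsing code posted further down: element 0 holds the detection count, 6 floats per detection, score in column 5) showing how a fixed-size output buffer that is reused across frames without being cleared produces exactly these ghost boxes:

```python
import numpy as np

N_DATA = 6  # floats per detection in the flat buffer (assumed layout)

def write_frame(buf, dets):
    """Simulate the model: it writes only the valid entries and the count;
    the rest of the buffer keeps whatever the previous frame left behind."""
    buf[0] = len(dets)
    flat = np.asarray(dets, dtype=np.float32).reshape(-1)
    buf[1:1 + flat.size] = flat

def naive_parse(buf):
    """A parser that ignores the count and scans the whole buffer."""
    dets = buf[1:].reshape(-1, N_DATA)
    return dets[dets[:, 5] >= 0.5]   # column 5 = score, as in the posted code

buf = np.zeros(6001, dtype=np.float32)       # 1 count + 1000 * 6 floats
write_frame(buf, [[0, 100, 100, 50, 50, 0.9]])  # frame 1: one object
write_frame(buf, [])                            # frame 2: scene is empty
print(len(naive_parse(buf)))  # still 1 -> the "ghost" box from frame 1
```

If the parser scans the whole buffer instead of honoring the count in element 0, any detection left over from an earlier frame reappears until new data happens to overwrite it.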

I can’t share the model or the complete pipeline at this moment. I know that makes it difficult to reproduce.
Maybe I can work on a reduced version of the code and find a public model to test.
The model is based on:

Please bear with me; maybe you can help me identify the origin of this behavior.
If I had to guess, I would say it is some issue with memory management between DeepStream, Triton, TensorRT, and the Yolo backend.

Any thoughts or previous reports?

config.pbtxt (535 Bytes)
triton.txt (1.1 KB)

import ctypes
import sys

import numpy as np
import pyds

# Maximum number of objects (reconstructed from the [6001, 1, 1] -> [1000, 6] comment below)
N_OBJS = 1000
# Number of parameters per detection
N_DATA = 6
# Dimensions from raw NN model
LAYER_DIMS = [ N_OBJS*N_DATA+1, 1, 1 ]
# Dimension for actual data
N_YOLO = N_OBJS*N_DATA
# Top K
NPART = 25
# Dims
DIMS = ( N_OBJS, N_DATA )
# Output Layer (name elided in the original post)

def get_ObjDet(output_layer_info):
    """Parse object-detection results from the tensor output meta."""

    numdets = 0
    idx = []
    xyboxes = []
    classesid = []
    scores = []

    # lfinder() is a helper (defined elsewhere in the pipeline code) that
    # looks up an output layer by name
    layer_bboxes = lfinder(output_layer_info, LAYER_NAME)
    if layer_bboxes is None:
        sys.stderr.write( "ERROR: layer missing in output tensors\n" )
        return numdets, xyboxes, classesid, scores

    # Cast the raw tensor buffer to a float pointer
    Ptr = ctypes.cast( pyds.get_ptr(layer_bboxes.buffer), ctypes.POINTER(ctypes.c_float) )

    # np.array() copies the data out of the DeepStream-owned buffer
    bboxes_flat = np.array( np.ctypeslib.as_array( Ptr, shape=tuple(LAYER_DIMS) ) )

    # Convert from [6001, 1, 1] to [1000, 6]
    data = np.reshape( bboxes_flat[0:N_YOLO, 0, 0], DIMS)

    # Drop values with conf less than 0.5
    data = data[ data[:,5]>=0.50 ]

    # Yolo row layout -> 0: class id, 1: Xc, 2: Yc, 3: W, 4: H, 5: score
    data_class = data[:, 0]
    data_conf = data[:, 5]
    data_xy = data[:, 1:5]    
    # Partition for the top NPART results (guard against having fewer rows than NPART)
    k = min(NPART, len(data_conf))
    if k == 0:
        return numdets, xyboxes, classesid, scores
    idx = np.argpartition(data_conf, -k)[-k:]

    for i in idx:
        # Further filtering
        if data_conf[i] >= 0.75:
            numdets += 1
            # Convert (Xc, Yc, W, H) to normalized corners; W and H are the
            # frame width/height, defined elsewhere in the original code
            xyboxes.append( [ (data_xy[i,0]-data_xy[i,2]/2.0)/W, (data_xy[i,1]-data_xy[i,3]/2.0)/H, 
                        (data_xy[i,0]+data_xy[i,2]/2.0)/W, (data_xy[i,1]+data_xy[i,3]/2.0)/H ] )
            classesid.append( data_class[i] )
            scores.append( data_conf[i] )

    layer_bboxes = None

    return numdets, xyboxes, classesid, scores
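If the engine follows the tensorrtx YoloV5 output layout, the +1 in LAYER_DIMS suggests that element 0 of the flat buffer is the number of valid detections. A parser that honors that count would never read stale entries, even if the buffer is not zeroed between inferences. A minimal sketch, assuming the column layout from the code above (0: class id, 1-4: Xc/Yc/W/H, 5: score):

```python
import numpy as np

N_DATA = 6  # floats per detection (layout assumed from the posted code)

def parse_with_count(flat):
    """Use element 0 as the number of valid detections instead of
    scanning the whole buffer, so leftover entries are never read."""
    count = int(flat[0])
    return flat[1:1 + count * N_DATA].reshape(count, N_DATA)

# Buffer holding 1 valid detection plus a stale entry from an earlier frame
buf = np.zeros(6001, dtype=np.float32)
buf[0] = 1
buf[1:7] = [0, 100, 100, 50, 50, 0.9]     # valid detection
buf[7:13] = [2, 300, 300, 30, 30, 0.95]   # stale ("ghost") data, ignored
print(parse_with_count(buf).shape)  # (1, 6)
```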

Also, I’ve found the Yolo TRT plugin is based on this:

Sorry for the late response. Is this still an issue you need support with? Thanks.

We are still debugging this. It may be related to a TensorRT minor version change, or something similar, affecting the lib file. So far we couldn’t find where it comes from. Thanks.

Do you still need support for this?

I have met a similar issue before. Are you using your own post-processor?

Hi @mchi

Yes, we still have this issue, although it’s difficult to tell whether it’s a problem inside DeepStream/Triton or external.

  • In DeepStream we have rewritten our object detection code for other reasons. It works fine for all models except this one. Our code runs in the DeepStream pgie callback for tensor output meta post-processing, in Python.
  • YoloV5 is optimized with TensorRT based on the Yolo links above, one for the model and the other for the lib file. We are using TensorRT 8.0.1. Triton Server loads the lib.
  • It looks like somewhere in the code, when there are no active objects, old objects from the buffer are still there and still treated as valid.

After some time, I think the problem is related to some mismatch between TensorRT and this source code, which is outside the NVIDIA domain, so that’s why I closed the topic.

Any further thoughts are welcome.

Thank you

Hi @xtianhb.glb , I am working on a similar project. Unfortunately, I can’t share the code. So far I have been able to integrate YoloV5


Hi @mfoglio!

  1. It looks like we are using the same sources for YoloV5 and building similar pipelines (Python, NvInferServer, etc). That is great, we are on the same page.
  2. In my case the model runs almost correctly; it doesn’t crash or anything. The only problem I noticed is that old bounding boxes are reported when there are no new objects. When new objects enter the image, the old ones disappear one by one.
  3. I have the feeling that this is related to a circular memory buffer, or some leak in memory as you said.
  4. There has been a bug fix on the output buffer to avoid a race condition with cuda memset; my code already includes that patch:
    [Bug in YoloV5Layer implementation · Issue #720 · wang-xinyu/tensorrtx · GitHub](https://github.com/wang-xinyu/tensorrtx/issues/720)
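Issue #720 is about zeroing the YoloLayer output buffer (cudaMemset) before each inference. As a host-side analogy in plain numpy (names and layout hypothetical: element 0 is assumed to hold the detection count, as in the tensorrtx layout), clearing the buffer between frames is what makes leftover detections disappear even for a parser that scans the whole buffer:

```python
import numpy as np

N_DATA = 6  # floats per detection (assumed layout)

def write_frame(buf, dets, clear=True):
    """Write one frame's detections; `clear` mimics the cudaMemset patch."""
    if clear:
        buf.fill(0)          # host-side analogue of zeroing the device buffer
    buf[0] = len(dets)
    flat = np.asarray(dets, dtype=np.float32).reshape(-1)
    buf[1:1 + flat.size] = flat

def naive_parse(buf):
    """Whole-buffer scan (no count check): sees any stale entries."""
    dets = buf[1:].reshape(-1, N_DATA)
    return dets[dets[:, 5] >= 0.5]   # column 5 = score

buf = np.zeros(6001, dtype=np.float32)
write_frame(buf, [[0, 100, 100, 50, 50, 0.9]], clear=False)
write_frame(buf, [], clear=False)
assert len(naive_parse(buf)) == 1   # ghost survives without clearing
write_frame(buf, [], clear=True)
print(len(naive_parse(buf)))  # 0 -> clearing removes the ghost
```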

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.