NumPy appsrc with nvstreammux and nvinfer

Hi,
I have a use case where I need to take a NumPy array as input and apply semantic segmentation to it.
I tried a simple pipeline without inference, where I push the NumPy array (converted to a buffer) into appsrc, pass it through a converter plugin, and receive the output at a sink. That simple pipeline worked, but when I added the nvstreammux and nvinfer plugins, the pipeline hangs: it doesn't do or show anything after successfully loading my model config, so I assume it is stuck somewhere.

Do you have any suggestions for solving this? Is there a special conversion that needs to be added between appsrc and nvstreammux to make them compatible?

• Hardware Platform (Jetson / GPU) : Jetson AGX
• DeepStream Version : 5.1
• JetPack Version (valid for Jetson only) : Jetpack 4.5.1
• TensorRT Version : 7.1.3

nvstreammux accepted capabilities:

SINK template: 'sink_%u'
  Availability: On request
  Capabilities:
    video/x-raw(memory:NVMM)
               format: { (string)NV12, (string)RGBA, (string)I420 }
                width: [ 1, 2147483647 ]
               height: [ 1, 2147483647 ]
            framerate: [ 0/1, 2147483647/1 ]

nvinfer accepted capabilities:

Capabilities:
  video/x-raw(memory:NVMM)
             format: { (string)NV12, (string)RGBA }
              width: [ 1, 2147483647 ]
             height: [ 1, 2147483647 ]
          framerate: [ 0/1, 2147483647/1 ]
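To satisfy those caps, the frames coming out of appsrc must be converted into NVMM memory before they reach nvstreammux. A hedged gst-launch-style sketch of such a pipeline (the 640x480 resolution, RGBA format, framerate, and element/pad names are illustrative assumptions, not taken from your setup):

```
appsrc caps="video/x-raw,format=RGBA,width=640,height=480,framerate=30/1" \
  ! nvvideoconvert \
  ! "video/x-raw(memory:NVMM),format=RGBA,width=640,height=480" \
  ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=640 height=480 batched-push-timeout=40000 \
  ! nvinfer config-file-path=<your_config_file>
```

The key piece is the nvvideoconvert plus the capsfilter forcing `video/x-raw(memory:NVMM)`; without it, appsrc hands nvstreammux system-memory buffers it cannot accept, which matches the "pipeline hangs" symptom.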

Make sure the conversion between appsrc and nvstreammux produces one of the formats listed above. For converting a NumPy array to RGB or RGBA format, you can refer to this:
https://www.pythoninformer.com/python-libraries/numpy/numpy-and-images/
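As a minimal sketch of that conversion, the helper below turns an HxWx3 (RGB) or HxWx4 uint8 NumPy frame into RGBA bytes that can be wrapped in a GstBuffer and pushed into appsrc. The function name and the fixed alpha of 255 are my own choices for illustration:

```python
import numpy as np

def to_rgba_bytes(frame):
    """Convert an HxWx3 (RGB) or HxWx4 uint8 frame to raw RGBA bytes.

    The appsrc caps must match the frame, e.g.
    video/x-raw,format=RGBA,width=W,height=H.
    """
    if frame.dtype != np.uint8:
        frame = frame.astype(np.uint8)
    if frame.shape[2] == 3:
        # Append an opaque alpha channel to get RGBA.
        alpha = np.full(frame.shape[:2] + (1,), 255, dtype=np.uint8)
        frame = np.concatenate([frame, alpha], axis=2)
    return frame.tobytes()

# Pushing into the pipeline would then look roughly like
# (GStreamer setup omitted):
#   buf = Gst.Buffer.new_wrapped(to_rgba_bytes(frame))
#   appsrc.emit("push-buffer", buf)
```

Note that a plain RGBA buffer in system memory still has to go through nvvideoconvert (with a `memory:NVMM` capsfilter) before nvstreammux will accept it.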
