Order of frames within batches changing in Triton Inference Server Python backend with DeepStream pipeline

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version: 535.86.10
• Issue Type: questions

We are developing a real-time video analytics application using NVIDIA DeepStream SDK, designed to process multiple RTSP camera streams (1920x1080) from various sources. Our system dynamically manages streams using the new nvstreammux plugin and incorporates a Python-based AI processing pipeline using nvinferserver. This AI pipeline leverages CuPy and TensorFlow for high-resolution object detection. The final processed video streams are encoded and transmitted via mediamtx as RTSP outputs.
The problem is that, inside the Triton Inference Server, we need the frames in a known order so that channel-specific rules can be applied. This order mapping fails because, every few seconds, certain channels disappear from the batch without that channel being removed from the mux. If a channel is disconnected from the network, we know it and can remap the order accordingly. We use the new nvstreammux with the following properties.

[property]
algorithm-type=1
max-fps-control=1
overall-max-fps-n=10
overall-max-fps-d=1
overall-min-fps-n=10
overall-min-fps-d=1
max-same-source-frames=1
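For reference, here is a minimal sketch of how a [property] group like the one above can be attached to the new nvstreammux from the Python app. The file name mux_config.txt and the batch size are illustrative assumptions, not taken from the attached code; config-file-path is the property the new nvstreammux exposes for such a config file.

```python
# Sketch (assumed file name and batch size) of attaching the mux config
# to the new nvstreammux; runs after Gst.init() in the application.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
streammux.set_property("batch-size", 5)                        # number of attached sources
streammux.set_property("config-file-path", "mux_config.txt")   # the [property] group above
```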

We also use a capsfilter to convert every channel's framerate to 10 fps before running the pipeline.
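For illustration, a hedged sketch of one per-source 10 fps branch. A videorate element is placed before the capsfilter here, since a capsfilter alone only constrains caps negotiation; the element names and link points are assumptions, not taken from the attached code.

```python
# Sketch of one per-source framerate-normalization branch (assumed names);
# intended link order: source bin -> videorate -> capsfilter -> nvstreammux sink pad.
# Runs inside the application after Gst.init().
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

videorate = Gst.ElementFactory.make("videorate", "rate_src0")
capsfilter = Gst.ElementFactory.make("capsfilter", "caps_src0")
capsfilter.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), framerate=10/1")
)
```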

Please let me know how to fix the order issue.

Do you mean that with the new nvstreammux, batch_id 0 sometimes does not correspond to source_id 0? Is this a DeepStream bug? Could you help reproduce the issue based on a native DeepStream sample? Thanks! BTW, the new nvstreammux is open source, so you can check the code if interested.

Yes, it starts with an order that is not based on the source_ids. It keeps that (unexpected) order most of the time, but flickering happens as well. I will try to reproduce it.

Hi @fanzh
Please find the code and necessary files.
for_fanz.zip (207.7 KB)
Please test with a lot of RTSP streams, not MP4 files.



USE_NEW_NVSTREAMMUX=yes python3 deepstream_test1_rtsp_in_rtsp_out.py -i file:///mp4_ingest/1.mp4 file:///mp4_ingest/22.mp4 file:///mp4_ingest/22.mp4  file:///mp4_ingest/22.mp4 rtsp://192.168.31.4:8555/video1 -g nvinferserver

It requires cv2 and some other libs; please install them in the container.

Please let me know if you find any other issues.

Thanks for sharing! Could you elaborate on your requirement? nvstreammux collects the buffers with a round-robin algorithm; it cannot guarantee that batch_id 0 corresponds to source_id 0 forever. On your side, batch_id and source_id are known, so there is already a mapping relation.

In our case, whenever a stream does not have a frame in the batch, we need to know it, because we have different rules for different streams and they must not get mixed up.

batch_id and source_id are saved in NvDsFrameMeta; you can iterate over the NvDsFrameMeta. If a source_id cannot be found, that means the stream with that source_id does not have a frame in the batch.
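As an illustration of that iteration, a sketch of a buffer probe on the nvstreammux src pad that builds the batch_id → source_id mapping for each batch and reports sources with no frame in it. The EXPECTED_SOURCES set is an assumption standing in for the application's list of attached sources.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

EXPECTED_SOURCES = {0, 1, 2, 3, 4}  # assumption: source_ids attached to the mux

def mux_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    order = {}  # batch_id -> source_id for this batch
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        order[frame_meta.batch_id] = frame_meta.source_id
        l_frame = l_frame.next

    missing = EXPECTED_SOURCES - set(order.values())
    if missing:
        print(f"sources with no frame in this batch: {sorted(missing)}")
    return Gst.PadProbeReturn.OK
```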

This needs to be detected inside the Triton Inference Server in real time; is that possible? As of now I am sending this information over IP sockets to the Triton Inference Server Python backend.

nvinferserver leverages Triton to do the inference. The Triton APIs TRITONSERVER_InferenceRequestAddInput and TRITONSERVER_InferenceRequestAppendInputData do not provide a parameter to pass a camera_id. Yes, you can work around this by sending messages to the Python backend directly.
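For concreteness, a hedged sketch of such a side channel: the DeepStream process sends the per-batch order over a local UDP socket, and the Python backend's model.py drains it before inference. The host/port, message layout, and one-message-per-batch assumption are illustrative; none of this is part of nvinferserver or Triton.

```python
import json
import socket

SIDE_CHANNEL = ("127.0.0.1", 5005)  # assumption: backend listens on this local port

# --- DeepStream app side: call from the probe that built `order` (batch_id -> source_id) ---
def send_batch_order(order):
    msg = json.dumps(sorted(order.items())).encode("utf-8")
    socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, SIDE_CHANNEL)

# --- Triton Python backend side (model.py) ---
class TritonPythonModel:
    def initialize(self, args):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(SIDE_CHANNEL)
        self.sock.setblocking(False)
        self.latest_order = {}

    def execute(self, requests):
        try:
            while True:  # drain everything queued, keep only the newest order
                data, _ = self.sock.recvfrom(4096)
                self.latest_order = dict(json.loads(data))
        except BlockingIOError:
            pass
        # apply channel-specific rules per batch slot using self.latest_order,
        # then run inference and return one InferenceResponse per request
        ...
```

Note that a channel like this is not synchronized with the requests Triton actually delivers, so the order it reports can lag behind the batch it describes, which matches the delay and occasional mismatch reported below.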

Can you please give me an example of how to set up the dev environment for that?

How do you know, from the Python backend, which stream has disconnected? You may ask in the Triton forum how to modify the code to pass the user data.

There is no ready-made sample for sending messages to the Python backend directly. Please refer to this code for how to access NvDsFrameMeta.

I send the order of source_ids from the DeepStream code to the Python backend. It lags by hundreds of milliseconds, but for now I am living with it.

But sometimes I still get misordered frames. That is the issue.

What about modifying the nvinferserver C++ code?

nvinferserver is open source. The Triton API TRITONSERVER_InferenceRequestAppendInputData is called in TrtServerRequest::setInputs. Its first parameter, the TRITONSERVER_InferenceRequest, has a unique id, reqId. You can send the source id together with the corresponding reqId.
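On the Python backend side, a hedged sketch of consuming such a reqId → source_ids mapping. request_id() is part of the Triton Python backend request API, but whether a stock or patched nvinferserver populates it with the reqId described above needs to be verified; self.req_to_sources (filled from whatever channel carries the mapping) is a hypothetical name, and the C++ change in TrtServerRequest::setInputs itself is not shown.

```python
# Sketch only: match each backend request to the source order published for its reqId.
class TritonPythonModel:
    def initialize(self, args):
        self.req_to_sources = {}  # hypothetical: reqId -> [source_id, ...], fed by a side channel

    def execute(self, requests):
        responses = []
        for request in requests:
            req_id = request.request_id()                    # Triton Python backend API
            source_order = self.req_to_sources.pop(req_id, None)
            if source_order is None:
                # mapping not available for this request; apply a fallback policy
                pass
            # ... apply per-channel rules in source_order slot order, run inference,
            # and append an InferenceResponse for this request
        return responses
```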