• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only):
• TensorRT Version:
• NVIDIA GPU Driver Version: 535.86.10
• Issue Type: questions
We are developing a real-time video analytics application using NVIDIA DeepStream SDK, designed to process multiple RTSP camera streams (1920x1080) from various sources. Our system dynamically manages streams using the new nvstreammux plugin and incorporates a Python-based AI processing pipeline using nvinferserver. This AI pipeline leverages CuPy and TensorFlow for high-resolution object detection. The final processed video streams are encoded and transmitted via mediamtx as RTSP outputs.
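For context, a minimal sketch of that topology, assuming the new mux is enabled via `USE_NEW_NVSTREAMMUX=yes`; only one source is shown, and the element wiring, config file name, and mediamtx publish URL are illustrative assumptions rather than our exact pipeline:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Export USE_NEW_NVSTREAMMUX=yes before running so "nvstreammux"
# resolves to the new mux. The rtspclientsink location assumes a
# mediamtx instance accepting RTSP publishes on port 8554.
pipeline = Gst.parse_launch(
    "nvstreammux name=mux batch-size=4 ! "
    "nvinferserver config-file-path=config_triton.txt ! "
    "nvvideoconvert ! nvv4l2h264enc ! h264parse ! "
    "rtspclientsink location=rtsp://127.0.0.1:8554/out "
    "nvurisrcbin uri=rtsp://camera-0/stream ! mux.sink_0"  # one of several RTSP cameras
)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
```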
The problem is that, inside the Triton Inference Server, we need the frames in a known order so that channel-specific rules can be applied. This order mapping fails because, once every few seconds, certain channels drop out of the batch without that channel being removed from the mux. If a channel is disconnected from the network we know about it and can remap the order accordingly, but these gaps are silent. We use the new nvstreammux with the following properties.
Do you mean that, with the new nvstreammux, batch_id 0 sometimes does not correspond to source_id 0? Is this a DeepStream bug? Could you help reproduce this issue based on a DeepStream native sample? Thanks! BTW, the new nvstreammux is open source; you can check the code if interested.
Yes, it starts with an order that is not based on the source_ids. It keeps that order (not the expected order) most of the time, but flickering happens as well. I will try to reproduce it.
Thanks for sharing! Could you elaborate on your requirement? nvstreammux collects the buffers with a round-robin algorithm; it cannot guarantee that batch_id 0 corresponds to source_id 0 forever. On your side, batch_id and source_id are both known, so there is already a mapping relation.
In our case, whenever some stream does not have a frame in the batch, we need to know it, because we have different rules for different streams and they must not get mixed up.
batch_id and source_id are saved in NvDsFrameMeta. You can iterate over the NvDsFrameMeta entries; if a source_id cannot be found, that means the stream with that source_id does not have a frame in the batch.
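A minimal pad-probe sketch of this check using the DeepStream Python bindings (pyds); `EXPECTED_SOURCE_IDS` is a hypothetical set listing the channels you expect in every batch:

```python
import pyds
from gi.repository import Gst

EXPECTED_SOURCE_IDS = {0, 1, 2, 3}  # hypothetical: the channels feeding the mux

def mux_src_pad_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    present = {}  # batch_id -> source_id for this batch
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        present[frame_meta.batch_id] = frame_meta.source_id
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    missing = EXPECTED_SOURCE_IDS - set(present.values())
    if missing:
        print(f"streams with no frame in this batch: {missing}")
    return Gst.PadProbeReturn.OK
```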
This needs to be detected inside the Triton Inference Server in real time; is that possible? As of now I am sending this information over IP sockets to the Triton Inference Server Python backend.
nvinferserver leverages Triton to do inference. The Triton APIs TRITONSERVER_InferenceRequestAddInput and TRITONSERVER_InferenceRequestAppendInputData do not provide a parameter to pass a camera_id. Yes, you can work around this by sending messages to the Python backend directly.
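For example, a minimal sender-side sketch of that workaround; the UDP port and JSON schema are illustrative assumptions, and `mapping` is the batch_id-to-source_id dict built in the probe above:

```python
import json
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_batch_mapping(req_id, mapping):
    # mapping: {batch_id: source_id} for the batch about to be inferred
    payload = json.dumps({"req_id": req_id, "mapping": mapping}).encode()
    sock.sendto(payload, ("127.0.0.1", 5005))  # python backend listens here (assumed)
```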
nvinferserver is open source. The Triton API TRITONSERVER_InferenceRequestAppendInputData is called in TrtServerRequest::setInputs. The first parameter of TRITONSERVER_InferenceRequestAppendInputData, a TRITONSERVER_InferenceRequest, has a unique id, reqId. You can send the source ids along with the corresponding reqId.
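A sketch of the backend side of that pairing, assuming the reqId/source-id messages from the socket workaround above are collected into a `REQID_TO_SOURCES` dict by a receiver thread; `request_id()` is the Triton Python-backend call for reading a request's id, while the dict itself and the "INPUT" tensor name are assumptions:

```python
import triton_python_backend_utils as pb_utils

REQID_TO_SOURCES = {}  # filled by a receiver thread reading the socket messages

class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            req_id = request.request_id()
            sources = REQID_TO_SOURCES.pop(req_id, None)
            # sources maps batch slot -> source_id, so row i of the input
            # tensor can be handled with that camera's channel-specific rules.
            in_tensor = pb_utils.get_input_tensor_by_name(request, "INPUT")
            # ... apply per-channel logic to in_tensor using sources ...
            responses.append(pb_utils.InferenceResponse(output_tensors=[]))
        return responses
```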