I have a Redis queue that contains JSON payloads. These payloads contain images coming from different cameras, along with some other camera-related info such as the RTSP link and the source_id that I assign to it.
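For illustration, one payload might look like the sketch below; the field names and the base64 image encoding are assumptions, not something stated in this post.

```python
# Hypothetical shape of one Redis payload (field names and base64 encoding
# are assumptions for illustration only).
example_payload = {
    "source_id": 0,                          # id I assign to this camera
    "rtsp_link": "rtsp://camera-0/stream1",  # camera's RTSP link
    "width": 1280,
    "height": 720,
    "image": "<base64-encoded raw frame bytes>",
}
```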
Then I have an application similar to deepstream-appsrc-test, where the need-data callback (or read-data function) accesses the above-mentioned queue, reads the payload, extracts the images, creates a GstBuffer from them, and pushes it into the pipeline.
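Roughly, the need-data callback I have in mind is sketched below (Python GStreamer bindings; the Redis list name "frames", the blocking pop, and the payload fields are assumptions carried over from the example above):

```python
import base64
import json

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

import redis

r = redis.Redis(host="localhost", port=6379)

def on_need_data(appsrc, length, user_data):
    # Block until the next payload arrives on the (assumed) Redis list.
    _key, raw = r.blpop("frames")
    payload = json.loads(raw)

    # Turn the image bytes into a GstBuffer and hand it to appsrc.
    # The raw format must match the caps configured on the appsrc element.
    frame_bytes = base64.b64decode(payload["image"])
    buf = Gst.Buffer.new_wrapped(frame_bytes)
    ret = appsrc.emit("push-buffer", buf)
    if ret != Gst.FlowReturn.OK:
        print("push-buffer returned", ret)
```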
My question: since the images are coming from different cameras, I want to make sure the DeepStream pipeline handles the context of each stream exclusively. To do that, I'm thinking of the following:
[1] When I create the GstBuffer in the read_data function or need-data callback, I attach custom GstMeta to it, which identifies the source id of the frame.
[2] This buffer, with the custom meta attached, goes into the pipeline.
[3] Then, at the source pad of the streammux, I manually update the NvDsFrameMeta->source_id of the individual frames according to the custom meta that came with the buffer (see the probe sketch after this list).
I want to handle it this way because I do not know in advance how many cameras will be present.
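For step [3], a buffer probe on the streammux src pad can walk the batch and rewrite source_id. One caveat: registering custom GstMeta is straightforward in the C API (gst_meta_register() / gst_buffer_add_meta()), but it is not exposed by the stock Python bindings, so the sketch below keys the remap on frame_meta.pad_index instead, which nvstreammux fills in from the sink pad each frame arrived on; the pad-index-to-camera mapping is an assumption you would build yourself.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

# Hypothetical mapping: streammux sink pad index -> my own camera source_id.
PAD_INDEX_TO_SOURCE_ID = {0: 101, 1: 102}

def streammux_src_pad_probe(pad, info, user_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Rewrite source_id so every downstream element sees my camera id.
        frame_meta.source_id = PAD_INDEX_TO_SOURCE_ID.get(
            frame_meta.pad_index, frame_meta.source_id)
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK

# Attach the probe to the nvstreammux src pad:
# streammux.get_static_pad("src").add_probe(
#     Gst.PadProbeType.BUFFER, streammux_src_pad_probe, None)
```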
I can do the following: at runtime, before the pipeline goes to the PLAYING state, I somehow figure out the number of cameras and then create and link that many appsrc elements.
But I'm not sure how I would attach the need-data signals. I would have to make sure a particular appsrc takes images only from one particular camera; this is what blocked me. Do you have any idea on this?
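One way to bind each appsrc to exactly one camera is to pass a per-camera identifier as user data when connecting its need-data signal (or capture it in a closure), and give each camera its own Redis list. A minimal sketch, assuming a list-per-camera naming scheme like frames:<source_id> and raw RGBA frames (neither of which is stated above):

```python
import base64
import json

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

import redis

r = redis.Redis()

def on_need_data(appsrc, length, redis_key):
    # Each appsrc only ever reads from its own camera's (assumed) Redis list.
    _key, raw = r.blpop(redis_key)
    payload = json.loads(raw)
    buf = Gst.Buffer.new_wrapped(base64.b64decode(payload["image"]))
    appsrc.emit("push-buffer", buf)

def add_camera_sources(pipeline, streammux, camera_ids):
    """Create and link one appsrc per camera before going to PLAYING."""
    for pad_index, cam_id in enumerate(camera_ids):
        appsrc = Gst.ElementFactory.make("appsrc", f"appsrc-{cam_id}")
        appsrc.set_property("caps", Gst.Caps.from_string(
            "video/x-raw,format=RGBA,width=1280,height=720,framerate=30/1"))
        appsrc.set_property("format", Gst.Format.TIME)
        pipeline.add(appsrc)

        # Bind this appsrc to exactly one camera's queue via the user-data
        # argument of the signal connection.
        appsrc.connect("need-data", on_need_data, f"frames:{cam_id}")

        # Link to a dedicated streammux sink pad; the pad index then also
        # determines which NvDsFrameMeta these frames end up in.
        # (In deepstream-appsrc-test an nvvideoconvert + capsfilter sits
        # between appsrc and the muxer; omitted here for brevity.)
        sinkpad = streammux.get_request_pad(f"sink_{pad_index}")
        appsrc.get_static_pad("src").link(sinkpad)
```

With this wiring, each need-data callback pulls only from its own list, so the per-camera exclusivity comes from the signal connection itself rather than from inspecting the payload.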
Also, what if I don't care about batching?
By batching, you meant batch processing only, right?
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.