• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version Latest
I have created a pipeline that dynamically processes multiple videos and saves them in a new folder. The flow is roughly:
- DeepStream takes a video from the server (S3)
- Saves it locally
- Processes it
- Stores the result
- DeepStream takes the next video from the server
** I'm restarting the pipeline after processing each video, because otherwise filesink doesn't create the output file locally.
Now, when I restart the pipeline, it processes the new video but doesn't actually run inference on it. The video just passes through the pipeline without going through the model: the output video contains no bounding boxes, nor does it produce any metadata. I don't know what the reason could be.
The only difference I noticed when the pipeline starts on the second video is that decodebin creates its child elements with incremented names, like this:
source-bin-00
Decodebin child added: source
Decodebin child added: decodebin1
Decodebin child added: qtdemux1
Decodebin child added: multiqueue1
Decodebin child added: h264parse1
Decodebin child added: capsfilter1
Decodebin child added: aacparse1
Decodebin child added: avdec_aac1
Decodebin child added: nvv4l2decoder1
For the first video, where it works fine, it looks like this:
source-bin-00
Decodebin child added: source
Decodebin child added: decodebin0
Decodebin child added: qtdemux0
Decodebin child added: multiqueue0
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: aacparse0
Decodebin child added: avdec_aac0
Decodebin child added: nvv4l2decoder0
What could be the reason it doesn't run inference on the second video?