How to continuously read all mp4 files in a folder and save the predicted results to disk

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Tesla V100
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only):
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only): 11.1
• Issue Type (questions, new requirements, bugs): Questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for a new requirement. Include the module name, i.e. which plugin or which sample application, and the function description.)

Hi,

[Task]
I’m new to DeepStream. I’m trying to use it as a fast toolkit to blur sensitive information (human faces, license plates) in a video dataset. The dataset consists of millions of videos, each 1–3 minutes long. I have a file that lists the local paths of the videos, and I need to blur the sensitive information in all of them on a V100 GPU on the same machine and save the sanitized output to disk. For example: read file1.mp4 → infer/process → save file1_sanitized.mp4 to disk; read file2.mp4 → infer/process → save file2_sanitized.mp4 to disk, …

[Question]
I have modified deepstream_python_apps test2 so that I can read one file, run inference, and save the result to disk (roughly as in the sketch below), but how do I apply this workflow sequentially across many files? Can I construct the pipeline once and let it run continuously? If every file requires read file → build pipeline → load the model → infer → save to disk, the overhead is far too high… How can I solve this problem?
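For reference, my per-file flow currently looks roughly like this (a simplified sketch of my test2 modification, not my exact code: the config file name is a placeholder, the input is assumed to be H.264, I use the software x264enc since the V100 has no NVENC hardware, and the actual blurring would happen in a pad probe that I have omitted):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# One-shot pipeline: decode -> batch -> detect -> (blur) -> encode -> mux.
pipeline = Gst.parse_launch(
    "filesrc location=file1.mp4 ! qtdemux ! h264parse ! nvv4l2decoder "
    "! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 "
    "! nvinfer config-file-path=config_infer.txt "  # placeholder config
    "! nvvideoconvert ! video/x-raw,format=I420 "
    "! x264enc ! h264parse ! qtmux "
    "! filesink location=file1_sanitized.mp4"
)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
# Block until the whole file has been processed (EOS) or an error occurs.
bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR
)
pipeline.set_state(Gst.State.NULL)
```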

Thanks!


For how to read mp4 files, I think you can refer to deepstream_lpr_app/deepstream_lpr_app.c at master · NVIDIA-AI-IOT/deepstream_lpr_app · GitHub
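For illustration, here is roughly what the mp4-reading front end of such an app looks like when built element by element (sketched in Python rather than C to match deepstream_python_apps, and assuming H.264 content; the key point is that qtdemux creates its output pads dynamically, so it must be linked from a callback):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.Pipeline.new("mp4-reader")
src = Gst.ElementFactory.make("filesrc", "src")
demux = Gst.ElementFactory.make("qtdemux", "demux")
parser = Gst.ElementFactory.make("h264parse", "parser")
decoder = Gst.ElementFactory.make("nvv4l2decoder", "decoder")

src.set_property("location", "file1.mp4")
for elem in (src, demux, parser, decoder):
    pipeline.add(elem)
src.link(demux)
parser.link(decoder)

# qtdemux only exposes its output pads after parsing the file headers,
# so it cannot be linked statically; link its video pad (named
# "video_0") to the parser once it appears.
def on_pad_added(demux, pad):
    if pad.get_name().startswith("video"):
        pad.link(parser.get_static_pad("sink"))

demux.connect("pad-added", on_pad_added)
# From here the decoder feeds nvstreammux / nvinfer as in the samples.
```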

And for the pipeline you mentioned, I think you can refer to the deepstream-test1 sample: check the source code, then modify and run it to get more familiar with it.
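To avoid the per-file overhead you mention, one approach (a sketch only, not verified on DeepStream 5.1) is to build the pipeline once and reuse it: on EOS, drop the pipeline to READY rather than NULL, repoint filesrc and filesink at the next file, and set it back to PLAYING. Staying above NULL keeps the elements alive, so nvinfer should not have to rebuild its TensorRT engine; as a fallback, pointing nvinfer at a pre-serialized engine via its model-engine-file setting makes any reload cheap. The file name video_paths.txt is a placeholder, and the chain between src and sink below is a stand-in for the full decode → nvstreammux → nvinfer → blur → encode chain:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Built once; substitute the full DeepStream chain between src and sink.
pipeline = Gst.parse_launch(
    "filesrc name=src ! qtdemux ! h264parse ! nvv4l2decoder "
    "! nvvideoconvert ! video/x-raw,format=I420 ! x264enc "
    "! h264parse ! qtmux ! filesink name=sink"
)
src = pipeline.get_by_name("src")
sink = pipeline.get_by_name("sink")
bus = pipeline.get_bus()

with open("video_paths.txt") as f:  # the list of local video paths
    paths = [line.strip() for line in f if line.strip()]

for path in paths:
    # filesrc/filesink locations may only be changed in NULL or READY;
    # READY resets the pipeline after EOS without tearing the elements down.
    pipeline.set_state(Gst.State.READY)
    src.set_property("location", path)
    sink.set_property("location", path.replace(".mp4", "_sanitized.mp4"))
    pipeline.set_state(Gst.State.PLAYING)
    msg = bus.timed_pop_filtered(
        Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR
    )
    if msg and msg.type == Gst.MessageType.ERROR:
        err, _ = msg.parse_error()
        print(f"{path}: {err.message}")

pipeline.set_state(Gst.State.NULL)
```

All element names and properties above are illustrative; if dropping to READY still triggers an engine rebuild in your version, configuring nvinfer to deserialize a pre-built .engine file from disk is the next-best way to keep the per-file cost low.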