PipeTuner Dataset Setup

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) T4
• DeepStream Version 7.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (For bugs: include which sample app is used, the configuration file contents, the command line used, and other details needed to reproduce.)
• Requirement details (For new requirements: include the module name, i.e. which plugin or sample application, and a description of the function.)

I have a few images and their corresponding annotations in txt files in the format [class x_center y_center width height], which I used for fine-tuning a YOLOv8 model; I then exported it to ONNX and created the engine. However, all the PipeTuner documentation asks for videos with annotation files. I don’t have the original videos from which the frames were extracted, nor do I know how to create the annotation files for videos. Please help with this: how do I use my existing dataset to generate optimal parameters for my pipeline?

Image datasets are not supported by PipeTuner. Separate images can’t be used for tracker tuning.

Q1: Is it possible to only tune the detector parameters?
Q2: How do we create a custom video dataset?

Can you please answer my queries?

Q1: Yes, it is possible. In the config file where you set the parameter ranges, comment out all tracker-related parameters while leaving the PGIE parameters in place. PipeTuner will then tune only the PGIE parameters.
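As a rough illustration of the idea (the parameter names and file layout below are hypothetical, not taken from an actual PipeTuner release; check your own search-space config for the real names), commenting out the tracker ranges while keeping the PGIE ranges might look like:

```yaml
# Hypothetical sketch of a PipeTuner param-range config; names are illustrative only.
pgieParams:
  pre-cluster-threshold: [0.1, 0.6]   # range kept: PGIE params will be tuned

# trackerParams:                       # commented out: tracker params are not tuned
#   minIouDiff4NewTarget: [0.2, 0.8]
```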

Q2: PipeTuner supports only video-based datasets. If your images were extracted from videos, you can recreate a video by encoding those frames with a standard video creation tool (e.g., FFmpeg).
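As a minimal sketch of that workflow, assuming the frames are numbered sequentially (6-digit names here), that 30 fps matches the original capture rate, that the frames are 640x480, and that the ground truth should end up in a MOT-style `frame,id,left,top,width,height,conf,x,y,z` file (verify the exact format PipeTuner expects for your version); the `frames/`, `labels/`, `stream_0.mp4`, and `gt.txt` names are placeholders:

```shell
set -e

# --- Demo inputs (replace with your real frames/ and labels/ directories) ---
mkdir -p frames labels
# Synthesize a few numbered frames with ffmpeg's built-in test source:
ffmpeg -loglevel error -y -f lavfi -i testsrc=duration=2:size=640x480:rate=5 frames/%06d.jpg
# A sample YOLO label file: "class x_center y_center width height" (normalized 0..1)
echo "0 0.5 0.5 0.2 0.3" > labels/000001.txt

# --- 1. Encode the numbered frames into a video ---
# -framerate should match the original capture rate if known (30 is an assumption):
ffmpeg -loglevel error -y -framerate 30 -i frames/%06d.jpg \
       -c:v libx264 -pix_fmt yuv420p stream_0.mp4

# --- 2. Convert per-frame YOLO labels into one MOT-style gt.txt ---
# Detection labels carry no track IDs, so id is set to -1 here.
IMG_W=640; IMG_H=480          # assumed frame resolution
: > gt.txt
n=1
for f in labels/*.txt; do
  awk -v fr="$n" -v W="$IMG_W" -v H="$IMG_H" \
      '{ w=$4*W; h=$5*H; l=$2*W-w/2; t=$3*H-h/2;
         printf "%d,-1,%.1f,%.1f,%.1f,%.1f,1,-1,-1,-1\n", fr, l, t, w, h }' "$f" >> gt.txt
  n=$((n+1))
done
```

Note that YOLO stores box centers normalized to [0, 1], while MOT-style files use the top-left corner in pixels, which is why the awk step scales by the frame size and shifts by half the width/height.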