• Hardware Platform (Jetson / GPU)
Jetson Xavier NX
• DeepStream Version
6.0
• JetPack Version (valid for Jetson only)
4.6
• Issue Type (questions, new requirements, bugs)
question
I’m trying to improve the accuracy of my pipeline, which detects people with visible faces (not just people).
As my code is based on deepstream-app, many of the pipeline’s internals are hidden or fairly complex, since it supports multiple sources, various formats, different sinks, etc.
While debugging my app I needed the ability to save not only the detected objects but the whole frame with all boxes/labels drawn. I tried attaching a probe function to various elements of the pipeline (nvosd, the transform element, etc. — I generated its graph first), but it either saves a picture without any drawings or crashes.
So my first question is: how can one achieve this?
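For what it’s worth, here is a minimal sketch of what I think should work: probing the SRC pad of nvdsosd, i.e. after the OSD has already drawn the boxes and labels into the buffer (probing upstream of nvosd would explain pictures without drawings). On Jetson the buffer lives in NVMM memory, so it has to be mapped via the NvBufSurface API before the pixels are CPU-readable. Element behavior and the RGBA assumption depend on the OSD mode, so treat this as a starting point, not a verified solution; save_rgba_as_png() is a hypothetical helper:

```c
/* Sketch: dump fully annotated frames by probing the SRC pad of nvdsosd,
 * i.e. downstream of where boxes/labels are drawn. Assumes the OSD
 * outputs RGBA in NVMM memory (typical on Jetson). */
#include <gst/gst.h>
#include "nvbufsurface.h"

static GstPadProbeReturn
osd_src_pad_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstMapInfo map;

  if (!gst_buffer_map (buf, &map, GST_MAP_READ))
    return GST_PAD_PROBE_OK;

  NvBufSurface *surf = (NvBufSurface *) map.data;

  /* Make the NVMM surface CPU-visible before touching the pixels. */
  if (NvBufSurfaceMap (surf, -1, -1, NVBUF_MAP_READ) == 0) {
    NvBufSurfaceSyncForCpu (surf, -1, -1);
    /* surf->surfaceList[0].mappedAddr.addr[0] now points at the frame
     * with the drawings already composited; hand it to your own
     * save_rgba_as_png() (hypothetical, not shown) or wrap it in a
     * cv::Mat on the C++ side. */
    NvBufSurfaceUnMap (surf, -1, -1);
  }

  gst_buffer_unmap (buf, &map);
  return GST_PAD_PROBE_OK;
}
```

The probe would be attached with gst_pad_add_probe() on the nvosd element’s “src” pad using GST_PAD_PROBE_TYPE_BUFFER.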
As I’m trying to find patterns that would let me filter out false positives (a person with someone else’s face inside their bbox), it would be nice to see each frame with all drawings on screen, paused, AND then step through frame by frame.
The app allows pausing/resuming playback and that works well, but it doesn’t give frame-by-frame accuracy. I tried the approach from this tutorial and it doesn’t work as expected: I use appCtx[0]->pipeline.tiler_tee as data.video_sink, and it does make one step if I pause the pipeline and then press ‘n’, but after that I have to resume playback and pause it again to perform another step, so it’s not a frame-by-frame solution yet.
Can anyone advise me on how to implement that stepping idea in deepstream-app so that I get frame-accurate steps?
And speaking of steps and other trick modes: I was unable to change my pipeline’s playback speed at all. How can that be done? (Sometimes it would be great to slow it down a bit.)