Pass a single frame to the deepstream-yolo-app instead of a full H.264 video

Hi,
Is there any way of passing a single frame to the deepstream-yolo-app instead of an entire H.264 video?
We ask because we are working on an application that is based in Python and requires per-frame analysis.

Can you give more details of your pipeline? Video → decoder → Python analysis → …?

We need to pass the bbox info (coordinates + labels + frame) to our deepSORT tracker, which is in Python. Where we're at right now: we are taking multiple streams in one app (we modified the deepstream-yolo-app along the lines of the deepstream-test3 app). What we need to do next is segregate the bbox info, along with the frames, according to the different input streams and pass it to Python frame by frame. However, as the apps take H.264 videos, we weren't able to pass single frames to YOLO for inference.
So to reiterate, there are two problems we need to solve: first, getting frame-wise inference from YOLO; second, passing it to the Python tracker continuously.
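
For concreteness, the per-frame probe we have in mind looks roughly like the sketch below. This assumes the NvDsBatchMeta API from gstnvdsmeta.h (DeepStream 4.x); older releases expose the metadata differently, so the exact types and accessors may need adjusting:

```cpp
#include <gst/gst.h>
#include "gstnvdsmeta.h"   /* NvDsBatchMeta / NvDsFrameMeta / NvDsObjectMeta */

/* Pad probe attached downstream of nvinfer: walks the batch metadata
 * and groups detections by their source stream. */
static GstPadProbeReturn
bbox_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    guint stream_id = frame_meta->pad_index;    /* which input stream */

    for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
      g_print ("stream=%u frame=%d label=%s bbox=(%.0f,%.0f,%.0f,%.0f)\n",
          stream_id, frame_meta->frame_num, obj->obj_label,
          obj->rect_params.left, obj->rect_params.top,
          obj->rect_params.width, obj->rect_params.height);
      /* Here we would queue (stream_id, frame_num, label, bbox)
       * for the Python deepSORT side instead of printing. */
    }
  }
  return GST_PAD_PROBE_OK;
}
```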

I am still not clear on your pipeline. “As the apps take H.264 videos, we weren’t able to pass single frames to YOLO for inference” — why do you need to do this?

If you choose DeepStream, it’s better to keep Python out of the pipeline for now. Python bindings are the next plan for DeepStream.

Why do you have to use your deepSORT tracker? Can you try the DeepStream KLT tracker?

Or you can refer to the sample dsexample plugin as a way to wrap your deepSORT tracker.

Hi @ChrisDing,
Yes, let me clarify the scenario we are currently dealing with.
Basically, we need to pass frames, bbox values, and labels to the deepSORT tracker, which is in Python, using Boost.Python.
Firstly, I wanted to know if we could pass a single frame to the app. Our tentative workflow is as follows (a rough sketch of the C++ side follows the list):

  • Read the video using OpenCV in Python.
  • Using Boost.Python, initialize and complete the first steps of loading the model, creating the pipeline, etc. by calling into the deepstream-yolo-app, but without starting g_main_loop_run. (We've separated the initialization and the loop-running parts of the C++ code into two separate functions that can be called via Boost.Python.)
  • Call the initialization function from Python.
  • Now pass one frame from Python to C++, set the source to this new frame, run the loop once, extract the bbox values and labels, and pass the values back to Python.
  • In Python, pass the values to the deepSORT tracker.
  • Read the next frame and repeat all of the above steps.
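
To make that concrete, the C++ bridge we have in mind would look something like the sketch below. Everything here is hypothetical: yolo_bridge, init_pipeline, and push_frame are our own names, the pipeline string is a stub without the YOLO nvinfer stage, and the idea is to replace the file/decoder source with an appsrc so Python can feed raw frames:

```cpp
#include <string>
#include <boost/python.hpp>
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

namespace bp = boost::python;

static GstElement *pipeline = nullptr;
static GstElement *appsrc = nullptr;

/* Build the pipeline once. The decoder is replaced by an appsrc so
 * Python can feed raw BGR frames instead of an H.264 file. The real
 * app would insert the nvinfer/YOLO stage instead of fakesink. */
static void init_pipeline (int width, int height)
{
  gst_init (nullptr, nullptr);
  gchar *desc = g_strdup_printf (
      "appsrc name=src caps=video/x-raw,format=BGR,width=%d,height=%d,"
      "framerate=30/1 ! videoconvert ! fakesink", width, height);
  pipeline = gst_parse_launch (desc, nullptr);
  g_free (desc);
  appsrc = gst_bin_get_by_name (GST_BIN (pipeline), "src");
  gst_element_set_state (pipeline, GST_STATE_PLAYING);
}

/* Push one frame. Expects raw bytes from Python; with Python 3 a
 * bytes -> std::string converter may be needed on the Boost side. */
static void push_frame (const std::string &data)
{
  GstBuffer *buf = gst_buffer_new_allocate (nullptr, data.size (), nullptr);
  gst_buffer_fill (buf, 0, data.data (), data.size ());
  gst_app_src_push_buffer (GST_APP_SRC (appsrc), buf);  /* takes ownership */
}

BOOST_PYTHON_MODULE (yolo_bridge)
{
  bp::def ("init_pipeline", &init_pipeline);
  bp::def ("push_frame", &push_frame);
}
```

From Python we would then call yolo_bridge.init_pipeline(w, h) once, and yolo_bridge.push_frame(frame.tobytes()) for every OpenCV frame.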

This is the workflow we are trying to achieve.
Now, we’ve managed to update the deepstream-yolo-app in line with the deepstream-test3 app to take in multiple streams with the YOLO inference engine. But if we could feed in individual frames from Python, then we could complete our pipeline with the deepSORT tracker.

If you could suggest any other way of going about this, it would help us a lot as well.
Basically, we need frames, bbox values, and labels passed on to a deepSORT tracker that is in Python. Most of our applications are in Python, so ideally we need to stick to it.

@NvCJR has been extremely helpful with the solutions he has provided. It would be great if both of you could help us with this. Any kind of help is appreciated.

Best,
llprankll

Hi @ChrisDing,
Any update on this issue? Also, is there any info on the timeline for when the Python wrapper will be out?

What is the deepSORT tracker output used for?

My solution:
Remove OpenCV.
deepstream-yolo-app → add the dsexample plugin. In this plugin you can get the original YUV buffer, the YOLOv3 bboxes, labels, and so on; you can refer to dsexample in the DeepStream release package. Create a process to send all of this data to the Python code
→ Python deepSORT tracker → …
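
A rough sketch of that idea, as it could be called from dsexample's transform function (assuming the NvDsBatchMeta API; the socket path /tmp/deepsort.sock and the one-detection-per-line text format are arbitrary choices for illustration):

```cpp
#include <gst/gst.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>
#include "gstnvdsmeta.h"

/* Hypothetical helper, called from gst_dsexample_transform_ip():
 * serialize each detection as one text line and send it to a Python
 * process listening on a Unix domain socket. */
static void
send_detections_to_python (GstBuffer * buf)
{
  static int fd = -1;
  if (fd < 0) {
    fd = socket (AF_UNIX, SOCK_STREAM, 0);
    sockaddr_un addr = {};
    addr.sun_family = AF_UNIX;
    std::strncpy (addr.sun_path, "/tmp/deepsort.sock",
        sizeof (addr.sun_path) - 1);
    if (connect (fd, (sockaddr *) &addr, sizeof (addr)) < 0) {
      close (fd);
      fd = -1;
      return;                   /* Python side not listening yet */
    }
  }

  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return;

  for (NvDsMetaList * lf = batch_meta->frame_meta_list; lf; lf = lf->next) {
    NvDsFrameMeta *fm = (NvDsFrameMeta *) lf->data;
    for (NvDsMetaList * lo = fm->obj_meta_list; lo; lo = lo->next) {
      NvDsObjectMeta *obj = (NvDsObjectMeta *) lo->data;
      char line[256];
      int n = std::snprintf (line, sizeof (line), "%u %d %s %f %f %f %f\n",
          fm->pad_index, fm->frame_num, obj->obj_label,
          obj->rect_params.left, obj->rect_params.top,
          obj->rect_params.width, obj->rect_params.height);
      /* MSG_NOSIGNAL: don't die on SIGPIPE if the tracker exits */
      if (n > 0 && send (fd, line, (size_t) n, MSG_NOSIGNAL) < 0) {
        close (fd);
        fd = -1;
        return;
      }
    }
  }
}
```

On the Python side, the tracker process would listen on the same socket and parse one detection per line; the YUV frame itself could travel the same channel or be shared separately if bandwidth is a concern.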

Hi @llprankll,

What do I need to do to adapt deepstream-test3 to run with YOLO? And where do you capture inference info such as bounding boxes and labels?

This seems like a very solid example. Just one question: how do you pass the data from the C++ dsexample to the Python code?