Loss of precision in ONNX-to-engine conversion with DeepStream 6.3

In theory the behavior should be the same if both streams use the same codec. Could you convert your local video to 5 fps and try again?

ffmpeg -i <your_video.mp4> -r 5 <output_video.mp4>

Converting the video this way made it work correctly, but I need to fix this in the DeepStream settings, because the same thing happens when reading the camera stream. When the camera runs at 30FPS it works correctly, but at 5FPS it does not. My application has to use 5FPS, so I need to make it work at 5FPS.

I saw the parameter "camera-fps-n" in the source group, but I couldn't use it in my application. I couldn't find the place in the pipeline to add it.
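(For reference: in the deepstream-app source group, camera-fps-n/camera-fps-d are, as far as I know, only honored for camera capture sources, not URI/RTSP sources, which may be why there was no place to apply them in an RTSP pipeline. A hypothetical fragment:)

```
[source0]
enable=1
# Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CameraCSI
# camera-fps-n/d apply only to camera capture types (1 and 5)
type=1
camera-fps-n=5
camera-fps-d=1
```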

I also tried using videorate and a capsfilter to set the pre-processing to 5FPS in the DeepStream pipeline, but they made no difference.

videorate = Gst.ElementFactory.make("videorate", "rate")
videorate.set_property("max-rate", 5)

caps_filter = Gst.ElementFactory.make("capsfilter", "filter")
caps = Gst.Caps.from_string("video/x-raw,framerate=5/1")
caps_filter.set_property("caps", caps)

# Add to pipeline
self.pipeline.add(videorate)
self.pipeline.add(caps_filter)
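One thing worth noting: `pipeline.add()` only registers the elements, it does not link them into the data path, so a snippet like the one above has no effect on the stream by itself. A minimal sketch of how they might be wired in (the attach points `src_elem`/`sink_elem` and the NVMM caps string are assumptions about your pipeline layout, not tested against it):

```python
# Sketch only -- a guess at how the rate-limiting elements could be wired
# into a DeepStream Python pipeline. src_elem (e.g. the decoder) and
# sink_elem (e.g. nvstreammux's sink side) are assumptions.
CAPS_5FPS = "video/x-raw(memory:NVMM),framerate=5/1"

def insert_rate_limit(pipeline, src_elem, sink_elem):
    """Insert videorate + capsfilter between src_elem and sink_elem."""
    from gi.repository import Gst  # requires PyGObject with GStreamer

    videorate = Gst.ElementFactory.make("videorate", "rate")
    videorate.set_property("drop-only", True)  # only drop frames, never duplicate

    caps_filter = Gst.ElementFactory.make("capsfilter", "filter")
    caps_filter.set_property("caps", Gst.Caps.from_string(CAPS_5FPS))

    pipeline.add(videorate)
    pipeline.add(caps_filter)

    # Adding elements alone has no effect -- they must be linked into the chain:
    src_elem.link(videorate)
    videorate.link(caps_filter)
    caps_filter.link(sink_elem)
```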

What can I do to solve this problem of displaced BBOXes in the inferences when the camera runs at 5FPS?

How do you run it on your side? Could you attach your whole pipeline? If it is run according to the app_config.txt I posted before, the fps should theoretically make no difference.

To run the code, just execute the run.py script inside ds-analytics.
Then, in the Draw folder, run the app.py script, which is a web server that runs on port 5000. Then, just go to the browser and enter localhost:5000. The web page will show the application output.

The application’s input videos or camera are added to the rtsp-simple-server_ffmpeg.0.20.0.yml file inside the rtsp-simple-server folder:

vda_pintura0_new:

  runOnDemand: ffmpeg -i rtsp://admin:admin@175.16.102.9 -c copy -an -f rtsp rtsp://localhost:$RTSP_PORT/$RTSP_PATH

I tried adding the -r parameter, varying the values from 5 to 60, but there was no change in the inference output. Like this:

vda_pintura0_new:

  runOnDemand: ffmpeg -i rtsp://admin:admin@175.16.102.9 -r 5 -c copy -an -f rtsp rtsp://localhost:$RTSP_PORT/$RTSP_PATH
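One note on the -r attempt above: combined with `-c copy`, ffmpeg stream-copies the encoded packets untouched, so `-r` (and any filtering) is ignored, which would explain why varying it changed nothing. Actually forcing 5 fps requires re-encoding; a sketch (the x264 settings here are assumptions, match them to whatever your pipeline expects):

```yaml
vda_pintura0_new:
  # "-c copy" bypasses decoding, so "-r 5" is ignored; re-encode instead:
  runOnDemand: ffmpeg -i rtsp://admin:admin@175.16.102.9 -an -r 5 -c:v libx264 -preset veryfast -tune zerolatency -f rtsp rtsp://localhost:$RTSP_PORT/$RTSP_PATH
```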

In app_config.yaml, inside ds-analytics/config/ you can configure the input source:

uri: rtsp://rtsp-server:8554/vda_pintura0

This link contains the repository with all the code I’m using.

Your project is too large for us to narrow the issue down directly. Can you first run it through deepstream_app_config.txt to narrow it down?
Just modify the [source0] to rtsp source like below:

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=<your rtsp source>
num-sources=1
gpu-id=0
nvbuf-memory-type=0

Then run the command below:

deepstream-app -c deepstream_app_config.txt

I used the following command:

deepstream-app -c deepstream_app_config.txt

With the following configuration files:
deepstream_app_config.txt (1.1 KB)
config_infer_primary_yoloV8.txt (655 Bytes)

the inferences worked correctly with the camera configured at 5FPS. So the error causing the displaced BBOXes with the 5FPS camera is somewhere in my DeepStream application.
Any tips on how I can find where the problem is?

Yes. You can add your plugins in the config file one by one and check which plugin caused the problem first.

  1. Get the pipeline graph from your own app by referring to the FAQ.
  2. Add the plugins one by one to the config file
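For step 1, the standard GStreamer mechanism for dumping the pipeline graph is the GST_DEBUG_DUMP_DOT_DIR environment variable (the paths below are just examples):

```shell
# Tell GStreamer where to write .dot pipeline graphs
export GST_DEBUG_DUMP_DOT_DIR=/tmp/pipeline_graphs
mkdir -p "$GST_DEBUG_DUMP_DOT_DIR"
# Now start the app in this shell, e.g.:  python3 run.py
# GStreamer writes a .dot file on each state change; convert one with
# Graphviz, e.g.:  dot -Tpng /tmp/pipeline_graphs/*PLAYING*.dot -o pipeline.png
```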

The problem is when I add the tracker. For some reason when the tracker is used, the inferences at 5FPS lose quality.

deepstream_app_config.txt (1.4 KB)
config_tracker_NvDCF_accuracy.yml.zip (3.0 KB)

I tried changing the tracker parameter values, but the problem persisted.
Any guesses about the parameters that might be causing this?

Could you try to tune the parameters of StateEstimator in the tracker?

Thank you very much.

I changed the parameter value of StateEstimator from 1 to 0 and now it is working correctly.
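For readers hitting the same issue: if the config follows NVIDIA's config_tracker_NvDCF_accuracy.yml layout, the change described above presumably corresponds to this fragment (key names per that file; verify against your own config):

```yaml
StateEstimator:
  # 0 disables the state estimator; 1 = SIMPLE, 2 = REGULAR (Kalman filter)
  stateEstimatorType: 0
```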

Glad to hear that. If there are any new issues, you can file a new topic.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.