Inference with webcam crashes after ~50 frames

Description

Inference with a webcam crashes after ~50 frames; the TensorRT execution time increases from ~70 ms to ~2 s before the crash.

I suspect a systematic problem with the while loop.

Environment

JetPack Version: 4.4
TensorFlow Version: 1.15.3

Relevant Files

inference.py (3.9 KB) frozen_out_dec__batch.onnx (2.3 MB)

Hi, please share your model and script so that we can help you better.

Alternatively, you can try running your model with the trtexec command:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
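
For example, a minimal invocation might look like the following (the exact flags depend on your trtexec build; `--onnx` loads the model and `--fp16` is optional):

```
trtexec --onnx=frozen_out_dec__batch.onnx --fp16
```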

Thanks!

Script and model are there:
https://forums.developer.nvidia.com/uploads/short-url/iMMMSLzQDIMJG6e07vV4duySrA8.py
https://forums.developer.nvidia.com/uploads/short-url/d607NNL1yylDBcVPR6oYmfDEf9K.onnx

I suspect a problem with either the OpenCV frame buffer or the CUDA graphics buffer. Maybe you can have a look at the source code and give me a hint.

Thanks and have a nice day!

Hi @toni.sedlmeier,

The problem is that you are creating a new CUDA stream and execution context inside the inference loop.
Those should be hoisted out of the loop and passed into the function as arguments; a rough sketch follows below.
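
As a minimal sketch (not your exact code), assuming the usual TensorRT Python API with PyCUDA as in inference.py: the engine, execution context, stream, and buffers are created once before the capture loop, and the per-frame function only performs the transfers and the kernel launch. The names `engine`, `bindings`, `h_input`/`h_output`, `d_input`/`d_output`, `allocate_buffers`, and `preprocess` are placeholders for the objects in your script:

```python
import pycuda.autoinit  # creates one CUDA context for the whole process
import pycuda.driver as cuda
import tensorrt as trt

def infer(context, stream, bindings, h_input, h_output, d_input, d_output):
    # Reuse the execution context and stream on every call; only the
    # host<->device copies and the kernel launch happen per frame.
    cuda.memcpy_htod_async(d_input, h_input, stream)
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    cuda.memcpy_dtoh_async(h_output, d_output, stream)
    stream.synchronize()
    return h_output

# Create these ONCE, before the capture loop (placeholder names):
#   engine  = runtime.deserialize_cuda_engine(...)   # your deserialized engine
#   context = engine.create_execution_context()
#   stream  = cuda.Stream()
#   h_input, h_output, d_input, d_output, bindings = allocate_buffers(engine)
#
# while cap.isOpened():                              # cv2.VideoCapture
#     ok, frame = cap.read()
#     if not ok:
#         break
#     h_input[:] = preprocess(frame)
#     detections = infer(context, stream, bindings,
#                        h_input, h_output, d_input, d_output)
```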

Please try fixing it.

Thank you.

Thanks, you were absolutely right.
What performance (FPS, etc.) can be reached with a non-custom-trained SSD MobileNet V2 and the TensorRT API on a Jetson Nano?

Hi @toni.sedlmeier,

Hope this will help you.
https://developer.nvidia.com/embedded/jetson-nano-dl-inference-benchmarks

Thank you.