I modified sampleUffSSD to run inference on four videos using OpenCV.
I previously modified the same sample (sampleUffSSD) from TensorRT 5.0.1 to do the same thing, and the performance was better with the TensorRT 5.0.1 version (both running on a Jetson Xavier).
I only modified the input preprocessing and output verification (to copy a cv::Mat into the input buffer, and to use the output to draw boxes on the images).
In the TensorRT 5.0.1 version I didn't need to call waitKey() when displaying images with imshow(), but in the 6.0.1 version I do; otherwise I get a segmentation fault. Could this be caused by running out of GPU resources?
So I would like to know whether this is caused by TensorRT or by OpenCV (OpenCV 3.4.3).
TensorRT Version: 6.0.1
Nvidia Driver Version:
CUDA Version: 10.0
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered