I am running the TensorRT sampleUffSSD sample (https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleUffSSD) on a TX2 with the following environment:
- Ubuntu 18.04
- TensorRT 7.1
- JetPack 4.4
- CUDA 10.2
- cuDNN 8.0
- Python 3.6.5
I followed the tutorial for this sample, but the run ends with an assertion failure:
```
./sample_uff_ssd --datadir=…/…/data_ssd
[09/04/2020-14:01:43] [I] Building and running a GPU inference engine for SSD
[09/04/2020-14:02:32] [I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[09/04/2020-14:02:44] [I] [TRT] Detected 1 inputs and 2 output network tensors.
#assertion/home/nvidia/folder/TensorRT/plugin/gridAnchorPlugin/gridAnchorPlugin.cpp,206
Aborted (core dumped)
```
I didn't change any settings of the sample; I only generated the UFF model file as suggested in the README (roughly the conversion step sketched below). sample_uff_mnist also runs correctly on the same setup. Can someone help me figure out what's going wrong?
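In case it helps with diagnosing, this is roughly how I produced the UFF file. It is a sketch of the README's convert-to-uff step written with the uff Python API; the frozen graph path is a placeholder for my local copy of the ssd_inception_v2_coco model, and config.py is the preprocessing script shipped with sampleUffSSD:

```python
# Rough sketch of the UFF conversion step (paths are placeholders, not the exact
# command I ran). config.py is the graph-surgeon preprocessing script from the
# sampleUffSSD directory; it maps unsupported TF ops to TensorRT plugin nodes.
import uff

uff.from_tensorflow_frozen_model(
    frozen_file="frozen_inference_graph.pb",   # placeholder: local ssd_inception_v2_coco frozen graph
    output_nodes=["NMS"],                      # output node the sample expects
    preprocessor="config.py",                  # preprocessing script shipped with the sample
    output_filename="sample_ssd_relu6.uff",    # UFF file name the sample loads from the data dir
)
```

The resulting .uff file was then placed in the data directory that --datadir points to.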
Many thanks