TensorRT 5 sample_uff_ssd can't run

I followed every step in the README, and my friend did the same thing on another computer. Both got the result below.

../../../data/ssd/sample_ssd_relu6.uff
Begin parsing model...
ERROR: Parameter check failed at: Utils.cpp::reshapeWeights::70, condition: input.values != nullptr
ERROR: UFFParser: Parser error: FeatureExtractor/InceptionV2/zeros: reshape weights failed!
ERROR: sample_uff_ssd: Fail to parse
sample_uff_ssd: sampleUffSSD.cpp:540: int main(int, char**): Assertion `tmpEngine != nullptr' failed.
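
The error says the constant node FeatureExtractor/InceptionV2/zeros has no weight values when the parser tries to reshape it. One way to see what that node actually contains is to inspect the frozen graph with the TensorFlow 1.x API — a minimal sketch (the node name is taken from the error above; the .pb filename is the one in the model tarball):

    import tensorflow as tf  # TF 1.x API, matching the versions in this thread

    graph_def = tf.GraphDef()
    with open("frozen_inference_graph.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    # Dump the op type and attributes of the node the UFF parser rejected.
    for node in graph_def.node:
        if node.name == "FeatureExtractor/InceptionV2/zeros":
            print(node.op)
            print(node.attr)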

Provide details on the platforms you are using:
Linux distro and version
GPU type
NVIDIA driver version
CUDA version
cuDNN version
Python version [if using Python]
TensorFlow version
TensorRT version
If Jetson, OS and hardware versions

Describe the problem

Files

Include any logs, source, models (uff, pb) that would be helpful to diagnose the problem.

If relevant, please include the full traceback.

Try to provide a minimal reproducible test case.

First one:

Host: x86
Ubuntu 16.04, GTX 850M
CUDA 9.0, cuDNN 7, Python 3.5
TensorFlow 1.10
TensorRT 5

Second one:

Jetson Xavier
Ubuntu 18.04
CUDA 10.0, cuDNN 7, Python 3.6
TensorFlow 1.10
TensorRT 5

Only the ./sample_uff_ssd command fails, and both machines produce the same error.

I’ve tried this with Ubuntu 16.04, CUDA 9 / cuDNN 7.3 / Python 3.6 / TensorFlow 1.12 / TensorRT 5.0.2.6, and also with the 18.12 container from the NGC container registry (https://ngc.nvidia.com). It works in both, so you might want to pull nvcr.io/nvidia/tensorrt:18.12-py3 and look into the differences from there. It’s likely something didn’t get installed, or didn’t get installed correctly.

However, I can run ./sample_uff_mnist and I can convert the frozen file, so I don’t think anything is missing or incorrectly installed.
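
To take the C++ sample out of the picture, the parse step can also be run on its own through the TensorRT Python bindings — a minimal sketch, assuming the Python API is installed (e.g. via /opt/tensorrt/python/python_setup.sh); the input/output node names and input shape are the ones from the sample’s config.py, so adjust them if yours differ:

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.INFO)

    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network()
    parser = trt.UffParser()

    # Node names and CHW shape follow sampleUffSSD's config.py (assumed here).
    parser.register_input("Input", (3, 300, 300))
    parser.register_output("NMS")

    # parse() returns False on failure; the logger prints the parser errors.
    if not parser.parse("sample_ssd_relu6.uff", network):
        print("UFF parse failed")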

Is that on the GTX 850M? Let’s focus on that and use the TensorRT 18.12 container from the NGC registry (step-by-step below). When you run the “convert-to-uff --input-file frozen_inference_graph.pb -O NMS -p config.py” step from sampleUffSSD/README.txt, what’s your output?

nvidia-docker run -ti --rm nvcr.io/nvidia/tensorrt:18.12-py3

Once the container is started:

  1. /opt/tensorrt/python/python_setup.sh
  2. cd tensorrt/samples/sampleUffSSD/
  3. wget http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2017_11_17.tar.gz
  4. tar xzf ssd_inception_v2_coco_2017_11_17.tar.gz
  5. cd ssd_inception_v2_coco_2017_11_17
  6. cp ../config.py .
  7. convert-to-uff --input-file frozen_inference_graph.pb -O NMS -p config.py
  8. cp frozen_inference_graph.uff /opt/tensorrt/data/ssd/sample_ssd_relu6.uff
  9. cd ..
  10. make
  11. cd ../../bin/
  12. ./sample_uff_ssd
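
If step 7 fails, the same conversion can be driven from Python to get a full traceback rather than the CLI summary — a sketch assuming the uff package installed in step 1, with arguments mirroring the CLI flags (-O NMS, -p config.py):

    import uff  # installed in step 1 by python_setup.sh

    # Python equivalent of step 7: -O NMS -> output_nodes,
    # -p config.py -> preprocessor (argument names assumed from the
    # uff converter shipped with TensorRT 5).
    uff.from_tensorflow_frozen_model(
        "frozen_inference_graph.pb",
        output_nodes=["NMS"],
        preprocessor="config.py",
        output_filename="frozen_inference_graph.uff",
    )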

I found the problem: I used the object detection zoo’s ssd_inception_v2_coco_17_11_2017, which is the wrong version. Thanks.

I get the same error as shown in the post. If the model is wrong, which updated model should I use? Currently, I am using the SSD model linked in the README file from TensorRT.

I’m hitting the same error. Which model should I use?

I’m hitting the same error as well. Can you tell me which model to use?