Error while executing the Faster RCNN example on the officially provided TLT docker on my Intel computer

@ai12
First, some clarifications and comments:

  1. You should run TLT training on your computer. See Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation. I think you are already aware of this.
  2. You have already exported to FP32 successfully. Please ignore the warning “The version of TensorFlow installed on this system is not guaranteed to work with UFF”. See Error at exporting to TRT engine in TLT - #4 by Morganh
  3. For “Specified FP16 but not supported on platform”, that is because your GPU does not support FP16. See https://developer.nvidia.com/cuda-gpus#compute and Support Matrix :: NVIDIA Deep Learning TensorRT Documentation. You can also verify this locally with the quick check below.
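As a quick local check, the TensorRT Python bindings can report whether the current GPU has fast FP16/INT8 support. This is a minimal sketch, assuming a working `tensorrt` Python installation (for example, the one inside the TLT container); it is not part of the original post.

```python
# Minimal sketch: query TensorRT for FP16/INT8 support on the local GPU.
# Assumes the TensorRT Python bindings are installed (e.g. inside the TLT container).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# If this prints False, "Specified FP16 but not supported on platform"
# is expected, and the engine should be built with FP32 instead.
print("Fast FP16 supported:", builder.platform_has_fast_fp16)
print("Fast INT8 supported:", builder.platform_has_fast_int8)
```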