Trying TensorRT Docker to optimize YoloV3 on GTX 1050Ti

I am trying to optimize YoloV3 using TensorRT.

I came across this post: Have you Optimized your Deep Learning Model Before Deployment?
https://towardsdatascience.com/have-you-optimized-your-deep-learning-model-before-deployment-cdc3aa7f413d

I followed Enabling GPUs in the Container Runtime Ecosystem (https://devblogs.nvidia.com/gpu-containers-runtime/) to install nvidia-docker2.
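To confirm the nvidia-docker2 install actually works before pulling the TensorRT image, I ran a sanity check along these lines (the nvidia/cuda:10.1-base tag here is just an example I picked to match the cuda10.1 packages in the TensorRT image; any CUDA base image your driver supports should do):

```shell
# Sanity check: the NVIDIA container runtime should expose the
# GTX 1050 Ti inside a plain CUDA container. If this prints the
# nvidia-smi table with the GPU listed, the runtime is set up.
sudo docker run --rm --runtime=nvidia nvidia/cuda:10.1-base nvidia-smi
```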

Then I pulled the latest version of the Docker image with docker pull aminehy/tensorrt-opencv-python3:version-1.3 from https://hub.docker.com/r/aminehy/tensorrt-opencv-python3/tags.

I ran this:

$ sudo docker run -it --rm -v $(pwd):/workspace --runtime=nvidia -w /workspace -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY aminehy/tensorrt-opencv-python3:version-1.3

The container started, but running the suggested python_setup.sh inside it failed with this error:

=====================
== NVIDIA TensorRT ==
=====================

NVIDIA Release 19.05 (build 6392482)

NVIDIA TensorRT 5.1.5 (c) 2016-2019, NVIDIA CORPORATION.  All rights reserved.
Container image (c) 2019, NVIDIA CORPORATION.  All rights reserved.

https://developer.nvidia.com/tensorrt

To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh

root@a38b20eeb740:/workspace# cd /opt/tensorrt/python/
root@a38b20eeb740:/opt/tensorrt/python# chmod +x python_setup.sh 
root@a38b20eeb740:/opt/tensorrt/python# ./python_setup.sh
Requirement already satisfied: Pillow in /usr/local/lib/python3.5/dist-packages (from -r /opt/tensorrt/samples/sampleSSD/requirements.txt (line 1)) (6.0.0)
WARNING: You are using pip version 19.2.1, however version 19.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Ignoring torch: markers 'python_version == "3.7"' don't match your environment
......
......
......
Setting up graphsurgeon-tf (5.1.5-1+cuda10.1) ...
Setting up uff-converter-tf (5.1.5-1+cuda10.1) ...
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python2.7/dist-packages/uff/__init__.py", line 1, in <module>
    from uff import converters, model  # noqa
  File "/usr/lib/python2.7/dist-packages/uff/model/__init__.py", line 1, in <module>
    from . import uff_pb2 as uff_pb  # noqa
  File "/usr/lib/python2.7/dist-packages/uff/model/uff_pb2.py", line 6, in <module>
    from google.protobuf.internal import enum_type_wrapper
ImportError: No module named google.protobuf.internal
chmod: cannot access '/bin/convert_to_uff.py': No such file or directory
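The traceback points at the Python 2.7 copy of the uff package failing to import google.protobuf, so I am guessing protobuf is simply not installed for that interpreter inside the container. This is what I was going to try next (assuming pip is available for Python 2 in the image; the find command is just to locate where convert_to_uff.py actually lives, since it is clearly not at /bin/):

```shell
# Inside the container: install protobuf for the Python 2.7
# interpreter that the uff package is being imported with
python2 -m pip install protobuf

# The script expected convert_to_uff.py at /bin/, but its location
# seems to vary between TensorRT releases, so search for it
find / -name "convert_to_uff.py" 2>/dev/null
```

I have not confirmed this fixes the setup script end-to-end; it only addresses the two failures visible in the log above.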

The post says this method is for Jetson devices. Is there a similar method for the GTX platform?
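One alternative I am considering: pulling NVIDIA's official TensorRT container from the NGC registry instead of the third-party image, since those are documented to run on x86 hosts with the NVIDIA runtime, not only on Jetson. Something like this (the 19.05-py3 tag is my guess, chosen to match the TensorRT 5.1.5 / Release 19.05 banner above; I have not verified it on my setup):

```shell
# Pull the official TensorRT container from NGC (tag chosen to
# match the 19.05 / TensorRT 5.1.5 release shown in the banner)
docker pull nvcr.io/nvidia/tensorrt:19.05-py3

# Run it with the NVIDIA runtime so the GTX 1050 Ti is visible inside
sudo docker run -it --rm --runtime=nvidia \
    -v "$(pwd)":/workspace -w /workspace \
    nvcr.io/nvidia/tensorrt:19.05-py3
```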

I also posted this question on Stack Overflow: https://stackoverflow.com/questions/59456090/bin-convert-to-uff-py-no-such-file-or-directory?noredirect=1#59456090