Version mismatch help to rectify please :)

Heya, I have been trying to solve an issue while converting a SavedModel using TensorRT on the Nano. There appears to be a version mismatch, but I do not know whether I need to change the TF version or the TensorRT version, nor how to go about doing it.

My current versions:

dpkg -l | grep TensorRT
ii  graphsurgeon-tf                               5.1.6-1+cuda10.0                                 arm64        GraphSurgeon for TensorRT package
ii  libnvinfer-dev                                5.1.6-1+cuda10.0                                 arm64        TensorRT development libraries and headers
ii  libnvinfer-samples                            5.1.6-1+cuda10.0                                 all          TensorRT samples and documentation
ii  libnvinfer5                                   5.1.6-1+cuda10.0                                 arm64        TensorRT runtime libraries
ii  python-libnvinfer                             5.1.6-1+cuda10.0                                 arm64        Python bindings for TensorRT
ii  python-libnvinfer-dev                         5.1.6-1+cuda10.0                                 arm64        Python development package for TensorRT
ii  python3-libnvinfer                            5.1.6-1+cuda10.0                                 arm64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                        5.1.6-1+cuda10.0                                 arm64        Python 3 development package for TensorRT
ii  tensorrt                                      5.1.6.1-1+cuda10.0                               arm64        Meta package of TensorRT
ii  uff-converter-tf                              5.1.6-1+cuda10.0                                 arm64        UFF converter for TensorRT package
sudo pip list | grep tensorflow
tensorflow                    2.0.0              
tensorflow-estimator          2.0.1

I also have CUDA 10 and cuDNN 7 installed, running Ubuntu 18.04.

This is my converter code:

from tensorflow.python.compiler.tensorrt import trt_convert as trt
import pathlib as plib

input_saved_model_dir = plib.Path('3conv-64nodes-2dense-CNN-001.model/')
output_saved_model_dir = plib.Path('outconv/')

input_saved_model_dir = str(input_saved_model_dir)
output_saved_model_dir = str(output_saved_model_dir)

params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode='FP16')

converter = trt.TrtGraphConverterV2(input_saved_model_dir=input_saved_model_dir, conversion_params=params)
converter.convert()
converter.save(output_saved_model_dir)

These are the errors I get:

WARNING:tensorflow:TensorRT mismatch. Compiled against version 5.0.6, but loaded 5.1.6. Things may not work
Not found: Container TF-TRT does not exist. (Could not find resource: TF-TRT/TRTEngineOp_0)

Full output just in case I have missed anything:

2020-03-10 16:25:43.616944: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-03-10 16:25:54.459729: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.5
WARNING:tensorflow:TensorRT mismatch. Compiled against version 5.0.6, but loaded 5.1.6. Things may not work
2020-03-10 16:25:55.792285: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-03-10 16:25:55.835161: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:25:55.835321: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2020-03-10 16:25:55.835378: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-03-10 16:25:55.835466: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-03-10 16:25:55.913373: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-03-10 16:25:56.023470: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-03-10 16:25:56.159952: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-03-10 16:25:56.223664: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-03-10 16:25:56.223872: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-03-10 16:25:56.224181: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:25:56.224410: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:25:56.224502: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-03-10 16:25:56.226939: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:25:56.227093: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2020-03-10 16:25:56.227269: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-03-10 16:25:56.227374: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-03-10 16:25:56.227460: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-03-10 16:25:56.227548: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-03-10 16:25:56.227616: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-03-10 16:25:56.227705: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-03-10 16:25:56.227786: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-03-10 16:25:56.228238: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:25:56.228966: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:25:56.229115: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-03-10 16:25:56.229273: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-03-10 16:26:10.620609: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-03-10 16:26:10.649759: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0
2020-03-10 16:26:10.649931: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N
2020-03-10 16:26:10.671262: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:26:10.671611: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:26:10.671844: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:26:10.770629: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 267 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
2020-03-10 16:26:18.823623: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:26:18.823839: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2020-03-10 16:26:18.824153: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-03-10 16:26:18.825730: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:26:18.825899: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2020-03-10 16:26:18.994212: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-03-10 16:26:19.134832: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-03-10 16:26:19.202993: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-03-10 16:26:19.219346: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-03-10 16:26:19.241707: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-03-10 16:26:19.264213: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-03-10 16:26:19.298443: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-03-10 16:26:19.298763: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:26:19.299349: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:26:19.299738: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-03-10 16:26:19.299918: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-03-10 16:26:19.299981: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0
2020-03-10 16:26:19.300035: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N
2020-03-10 16:26:19.300310: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:26:19.301518: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:26:19.301714: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 267 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
2020-03-10 16:26:19.688573: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: graph_to_optimize
2020-03-10 16:26:19.688741: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: Graph size after: 66 nodes (51), 115 edges (100), time = 60.421ms.
2020-03-10 16:26:19.688798: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0.322ms.
2020-03-10 16:26:20.536236: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:26:20.541027: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2020-03-10 16:26:20.541249: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-03-10 16:26:20.542883: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:26:20.543060: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2020-03-10 16:26:20.543256: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-03-10 16:26:20.543354: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-03-10 16:26:20.543415: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-03-10 16:26:20.543467: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-03-10 16:26:20.543525: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-03-10 16:26:20.543581: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-03-10 16:26:20.543633: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-03-10 16:26:20.543823: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:26:20.544057: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:26:20.544169: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-03-10 16:26:20.544280: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-03-10 16:26:20.544318: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0
2020-03-10 16:26:20.544350: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N
2020-03-10 16:26:20.544581: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:26:20.544855: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2020-03-10 16:26:20.545009: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 267 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
2020-03-10 16:26:20.909883: I tensorflow/compiler/tf2tensorrt/segment/segment.cc:460] There are 6 ops of 3 different types in the graph that are not converted to TensorRT: Identity, NoOp, Placeholder, (For more information see https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#supported-ops).
2020-03-10 16:26:20.911385: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:633] Number of TensorRT candidate segments: 1
2020-03-10 16:26:20.920581: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:734] TensorRT node TRTEngineOp_0 added for segment 0 consisting of 42 nodes succeeded.
2020-03-10 16:26:20.991342: W tensorflow/compiler/tf2tensorrt/convert/trt_optimization_pass.cc:183] TensorRTOptimizer is probably called on funcdef! This optimizer must *NOT* be called on function objects.
2020-03-10 16:26:21.013730: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: tf_graph
2020-03-10 16:26:21.013829: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   constant folding: Graph size after: 54 nodes (-12), 91 edges (-24), time = 140.433ms.
2020-03-10 16:26:21.013870: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   layout: Graph size after: 58 nodes (4), 95 edges (4), time = 64.625ms.
2020-03-10 16:26:21.013904: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   constant folding: Graph size after: 58 nodes (0), 95 edges (0), time = 14.197ms.
2020-03-10 16:26:21.013935: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   TensorRTOptimizer: Graph size after: 17 nodes (-41), 18 edges (-77), time = 51.467ms.
2020-03-10 16:26:21.013963: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   constant folding: Graph size after: 17 nodes (0), 18 edges (0), time = 9.46ms.
2020-03-10 16:26:21.013990: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: TRTEngineOp_0_native_segment
2020-03-10 16:26:21.014018: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   constant folding: Graph size after: 44 nodes (0), 55 edges (0), time = 12.928ms.
2020-03-10 16:26:21.014045: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   layout: Graph size after: 44 nodes (0), 55 edges (0), time = 9.789ms.
2020-03-10 16:26:21.014073: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   constant folding: Graph size after: 44 nodes (0), 55 edges (0), time = 9.548ms.
2020-03-10 16:26:21.014101: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   TensorRTOptimizer: Graph size after: 44 nodes (0), 55 edges (0), time = 1.177ms.
2020-03-10 16:26:21.014127: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   constant folding: Graph size after: 44 nodes (0), 55 edges (0), time = 9.806ms.
2020-03-10 16:26:37.587315: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at trt_engine_resource_ops.cc:183 : Not found: Container TF-TRT does not exist. (Could not find resource: TF-TRT/TRTEngineOp_0)
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1781: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.

I am so close to getting this working; I'd really appreciate any help pointing me in the right direction. Thank you!

Hi, I have changed CUDA and TensorRT versions using SDK Manager.

According to the warning message:

WARNING:tensorflow:TensorRT mismatch. Compiled against version 5.0.6, but loaded 5.1.6. Things may not work

you need packages from an older JetPack than the version currently installed on your Nano.
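
If you want to confirm which TensorRT version your TensorFlow wheel was compiled against versus what is loaded at runtime, a small check like the one below can help. Note that the import path is an assumption for TF 2.0-era wheels and has moved between TensorFlow releases, so treat this as a sketch rather than a guaranteed recipe:

from tensorflow.python.compiler.tf2tensorrt.wrap_py_utils import (
    get_linked_tensorrt_version, get_loaded_tensorrt_version)

# (major, minor, patch) of the TensorRT that the TF wheel was compiled against
print('linked TensorRT:', get_linked_tensorrt_version())
# (major, minor, patch) of the TensorRT found on the system at runtime
print('loaded TensorRT:', get_loaded_tensorrt_version())

If the linked version (5.0.6 in your log) is older than the loaded one (5.1.6), the wheel was built against an earlier JetPack's TensorRT.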

I have upgraded packages this way, but I'm not sure about downgrading.

Just in case: in SDK Manager, select only the Jetson SDK Components so you avoid re-flashing the board.

Greivin F.

Thanks, it's weird that I would need an older JetPack. Is it the version of TensorFlow I'm using that requires the older packages?

Also, I tried to install SDK Manager on the Nano, but it doesn't work: it goes through the install yet never shows up as installed. I read that it doesn't run on the Nano. Do you have a link to a version that works?

Thanks :)

EDIT: Just had a thought: perhaps this means I am using the TF-TRT built into JetPack. Is there a standalone TensorRT package I could install? Maybe that would work.

Hi,

There are two possible issues:

1. Please install a TensorFlow package built for the same JetPack version that you flashed.
For example, you can use the tensorflow==1.15.2+nv20.2 package for JetPack 4.3:
https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html

2. Please note that TensorRT only adds TF 2.0 support starting from version 7.0, which is not available to Jetson users yet.
It's recommended to use TF 1.15 instead (see the conversion sketch below).
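
For reference, a minimal sketch of the same FP16 conversion using the TF 1.x TF-TRT API (trt.TrtGraphConverter). The SavedModel and output directory names here are placeholders, and the workspace size is just a conservative value for the Nano's limited memory:

from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverter(
    input_saved_model_dir='saved_model_tf1',   # placeholder: your TF 1.15 SavedModel
    precision_mode='FP16',
    max_workspace_size_bytes=1 << 26,          # keep the TensorRT workspace small on the Nano
    is_dynamic_op=True)                        # build engines at runtime if shapes are unknown
converter.convert()
converter.save('saved_model_trt')              # placeholder: output SavedModel directory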

Thanks.

Thank you so much! One last query: I coded my trainer with TF2 on my PC. Should I train it using TF 1.15 as well, or is it OK to optimize a model from TF2 via TF 1.15?

Hi,

Training the model with the same TensorFlow version would be better.
If this is not an option, you can also find a converter to bring your model back to v1.15 (one rough approach is sketched below).
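
For a simple Keras CNN like yours, one route that often works is carrying the architecture and weights over in HDF5 format. This is only a sketch: the file and directory names are placeholders, and it assumes every layer you use exists in both TF versions:

# On the training PC (TF 2.x); 'model' is the trained tf.keras model from your trainer script
model.save('cnn_model.h5')   # .h5 extension forces the HDF5 format (architecture + weights)

# On the Nano (TF 1.15): rebuild the model and export a SavedModel for TF-TRT
import tensorflow as tf

model = tf.keras.models.load_model('cnn_model.h5')
tf.keras.experimental.export_saved_model(model, 'saved_model_tf1')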

Thanks.

Thanks again for your help :)