Use TensorFlow models for inference on PX2 - problems

Hello,

we want to integrate several models on the PX2 that were trained with the TF framework. According to the documentation, it is possible to first convert the models to a .uff file, which can then be used for INT8 optimization on the host with TensorRT. After that, the created binary and a calibration cache file can be used for kernel autotuning on the target (PX2) with the available “tensorRT_optimization” tool. Since TensorRT now supports TF models, we were quite confident about getting our models onto the PX2. But now we are facing some problems…
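For context, the host-side conversion step we are using looks roughly like this (a minimal sketch assuming the uff converter module that ships with the TensorRT Python package; the frozen graph path and output node name are placeholders for our own model):

import uff

# Convert a frozen TensorFlow graph (.pb) into a .uff file on the host.
# "frozen_model.pb" and "detection_out" are placeholders for our actual
# frozen GraphDef and output node name.
uff_model = uff.from_tensorflow_frozen_model(
    "frozen_model.pb",
    output_nodes=["detection_out"],
    output_filename="model.uff")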

  1. Custom layers: how can we use models with custom layers and unsupported operations in this deployment pipeline? The uff parser throws errors for our TF models. In our case we use an object detection network based on the SSD architecture (custom normalization and bounding box layers). How can we deal with custom layers in TF models? We could not find a clear statement on whether this is possible or not. (A small sketch of how we currently check which op types are affected follows below, after the second question.)

  2. INT8 calibration: to use the INT8 calibration that TensorRT makes possible for TF models on the host, we stuck to the example from your website (https://devblogs.nvidia.com/int8-inference-autonomous-vehicles-tensorrt/). As a first approach, we tried the MNIST model to gain experience with the calibration, since the model files from that example are not provided. Using the provided code exactly, except for replacing the model, the images, and the input and output layers/dimensions, we get the following error when creating the TensorRT lite engine:

Traceback (most recent call last):
  File "/home/maximilian/.local/share/JetBrains/Toolbox/apps/PyCharm-C/ch-0/173.4301.16/helpers/pydev/pydevd.py", line 1668, in <module>
    main()
  File "/home/maximilian/.local/share/JetBrains/Toolbox/apps/PyCharm-C/ch-0/173.4301.16/helpers/pydev/pydevd.py", line 1662, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/home/maximilian/.local/share/JetBrains/Toolbox/apps/PyCharm-C/ch-0/173.4301.16/helpers/pydev/pydevd.py", line 1072, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/maximilian/.local/share/JetBrains/Toolbox/apps/PyCharm-C/ch-0/173.4301.16/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/media/Data/01_Team_Working_Files/01_Maximilian/04_Create_TensorRT_Engine/trt_lite_FCN8_int8.py", line 46, in <module>
    logger_severity=trt.infer.LogSeverity.INFO)
  File "/usr/lib/python3.5/dist-packages/tensorrt/lite/engine.py", line 195, in __init__
    self._create_engine(modelstream, **kwargs)
  File "/usr/lib/python3.5/dist-packages/tensorrt/lite/engine.py", line 319, in _create_engine
    kwargs.get("calibrator", None))
  File "/usr/lib/python3.5/dist-packages/tensorrt/utils/_utils.py", line 341, in caffe_to_trt_engine
    engine = builder.build_cuda_engine(network)
RuntimeError: SWIG director pure virtual method called nvinfer1::IInt8Calibrator::readCalibrationCache

Without the calibration arguments in the trt.lite.Engine we are able to save the engine, but that does not meet our requirements.
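Regarding the first question, here is the small sketch mentioned above that we use to see which op types the frozen graph actually contains, so they can be compared against the operations the uff converter supports (TensorFlow 1.x API; the .pb path is a placeholder):

import tensorflow as tf

# Load the frozen GraphDef and print the distinct op types it contains.
# Ops that the uff converter does not support (e.g. our custom
# normalization layer) show up in this list.
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for op_type in sorted({node.op for node in graph_def.node}):
    print(op_type)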

I hope you can help us better understand the integration of TF models on the PX2. Thank you.

Hello,
is there any news regarding these problems? Especially the second topic (the SWIG error) is also the show-stopper on my system.

We found a solution for the second problem (SWIG error):
When copying calibrator.py from the blog post (https://devblogs.nvidia.com/int8-inference-autonomous-vehicles-tensorrt/), the indentation (tabs and spaces) got lost. Simply fixing the indentation solves the SWIG error :-)
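For anyone hitting the same error: the “SWIG director pure virtual method called” message means TensorRT never finds the Python overrides of the calibrator methods, which is exactly what happens when the method definitions lose their indentation and end up at module level instead of inside the class. After fixing the indentation, the calibrator from the blog post has roughly the following shape (base class and method names as we read them from the blog's calibrator.py; the batch handling is only indicated and needs to be filled in from the blog code):

import tensorrt as trt

class PythonEntropyCalibrator(trt.infer.EntropyCalibrator):
    # All of the following methods must be indented inside the class body.
    # If they end up at module level, the SWIG director cannot find the
    # overrides and raises "pure virtual method called ... readCalibrationCache".

    def __init__(self, input_layers, stream):
        trt.infer.EntropyCalibrator.__init__(self)
        self.input_layers = input_layers
        self.stream = stream              # calibration batch stream from the blog post

    def get_batch_size(self):
        return self.stream.batch_size

    def get_batch(self, bindings, names):
        batch = self.stream.next_batch()  # next calibration batch (blog helper)
        if not batch.size:
            return None                   # no batches left -> calibration done
        # ... copy the batch to the device buffer and fill the bindings here ...
        return bindings

    def read_calibration_cache(self, length):
        return None                       # no cached scales -> calibrate from scratch

    def write_calibration_cache(self, ptr, size):
        return None                       # cache writing omitted in this sketch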