Deploying models using tlt_cv_pipeline

Following are the details of the system I am working on:

  1. Ubuntu 18.04
  2. GPU: GeForce GTX 1080
  3. TLT version: 3.0

I am trying to run the deployable GestureNet model available on NGC, which requires the tlt_cv_pipeline. I am following the steps shown in the image below.

And this is the error I get when I run the
“bash tlt_cv_compile.sh gesture $ENCODING_KEY” command (the compile fails with “no input dimension given”).
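For context, the sequence I am running looks roughly like this (the directory is wherever I extracted the quick start scripts, so treat the paths as placeholders; nvidia_tlt is the key listed with the deployable model on NGC):

$ cd tlt_cv_inference_pipeline_quick_start/scripts   # placeholder path to the downloaded quick start scripts
$ export ENCODING_KEY=nvidia_tlt                     # key published with the deployable GestureNet model on NGC
$ bash tlt_cv_compile.sh gesture $ENCODING_KEY       # this is the step that fails for me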

Did you ever modify tlt_cv_compile.sh?

No, I did not modify it.
I realised that I only have to run that command if I make any modifications.
But I am still not able to figure out how to run the model.

You do not need to modify anything.

Yes, but how do I run the model file to see the inference then? I can’t figure that out.

I am still checking if I can reproduce.

okay

Where did you run it, on a host PC or on a Jetson device?

On my PC

I also ran it on a host PC, but I cannot reproduce the error.

$ bash tlt_cv_compile.sh gesture nvidia_tlt
[INFO] Using this location for models: /home/morganh/Downloads/tlt_cv_inference_pipeline_models


[INFO] Compiling Gesture with key 'nvidia_tlt'...

=====================
== NVIDIA TensorRT ==
=====================

NVIDIA Release 20.11 (build 17147175)

NVIDIA TensorRT 7.2.1 (c) 2016-2020, NVIDIA CORPORATION. All rights reserved.
Container image (c) 2020, NVIDIA CORPORATION. All rights reserved.

https://developer.nvidia.com/tensorrt

To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh

To install the open-source samples corresponding to this TensorRT release version run /opt/tensorrt/install_opensource.sh.
To build the open source parsers, plugins, and samples for current top-of-tree on master or a different branch, run /opt/tensorrt/install_opensource.sh -b
See https://github.com/NVIDIA/TensorRT for more information.

[WARNING] /home/jenkins/workspace/OSS/L0_MergeRequest/oss/parsers/onnx/onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[INFO] Detected input dimensions from the model: (-1, 3, 160, 160)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 3, 160, 160) for input: input_1
[INFO] Using optimization profile opt shape: (1, 3, 160, 160) for input: input_1
[INFO] Using optimization profile max shape: (2, 3, 160, 160) for input: input_1
[WARNING] Half2 support requested on hardware without native FP16 support, performance will be negatively affected.
[INFO] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[INFO] Detected 1 inputs and 1 output network tensors.
[INFO] Finished Gesture
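
If it helps as a sanity check, the compiled engine should be written under the models location printed in the first log line, so listing that directory after the compile finishes is a quick way to confirm the output is there (the exact engine file name depends on the quick start version, so this is only a sketch):

$ ls /home/morganh/Downloads/tlt_cv_inference_pipeline_models   # compiled engines end up under this models location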

Can you try another way as below?
Just run the following command according to the TLT CV Inference Pipeline Quick Start Scripts page of the Transfer Learning Toolkit 3.0 documentation:

$ bash tlt_cv_init.sh
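
After init completes, the usual flow from the quick start documentation is roughly the following. I am writing the server and client script names from memory, so please double-check them against the Quick Start Scripts page:

$ bash tlt_cv_init.sh                         # pulls the containers and downloads the models from NGC
$ bash tlt_cv_compile.sh gesture nvidia_tlt   # builds the TensorRT engine for GestureNet
$ bash tlt_cv_start_server.sh                 # starts the inference server with the compiled models
$ bash tlt_cv_start_client.sh                 # opens the client container where the sample apps run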

BTW, your error “no input dimension given” is similar to the one discussed in this topic.