Deploying models using tlt_cv_pipeline

Following are the details of the system I am working on:

  1. Ubuntu 18.04
  2. GPU: GeForce GTX 1080
  3. TLT version: 3.0

I am trying to run the deployable GestureNet model available on NGC, which requires the tlt_cv_pipeline. I am following the steps shown in the image below.

And this is the error I get when running the
"bash gesture $ENCODING_KEY" command.
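For context, the script takes the NGC model key as its positional argument; for the deployable purpose-built models that key is `nvidia_tlt` (as also used later in this thread). A minimal sketch, assuming you are inside the downloaded quick-start scripts directory (the directory layout here is an assumption):

```shell
# Set the key once, then pass it to the quick-start script.
# "nvidia_tlt" is the key for NGC deployable purpose-built models.
export ENCODING_KEY="nvidia_tlt"
bash gesture "$ENCODING_KEY"
```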

Did you ever modify ?

No, I did not modify that.
I realised that I only have to run that command if I make any modifications.
But I am still not able to figure out how to run the model.

You do not need to make any modifications.

Yes, but how do I run the model file to see the inference then? I can't figure that out.

I am still checking if I can reproduce.


Where did you run it, on a host PC or on a Jetson device?

On my PC

I also ran it on a host PC, but cannot reproduce.

$ bash gesture nvidia_tlt
[INFO] Using this location for models: /home/morganh/Downloads/tlt_cv_inference_pipeline_models

[INFO] Compiling Gesture with key 'nvidia_tlt'...

== NVIDIA TensorRT ==

NVIDIA Release 20.11 (build 17147175)

NVIDIA TensorRT 7.2.1 (c) 2016-2020, NVIDIA CORPORATION. All rights reserved.
Container image (c) 2020, NVIDIA CORPORATION. All rights reserved.

https://developer.nvidia.com/tensorrt

To install Python sample dependencies, run /opt/tensorrt/python/

To install the open-source samples corresponding to this TensorRT release version run /opt/tensorrt/
To build the open source parsers, plugins, and samples for current top-of-tree on master or a different branch, run /opt/tensorrt/ -b
See https://github.com/NVIDIA/TensorRT for more information.

[WARNING] /home/jenkins/workspace/OSS/L0_MergeRequest/oss/parsers/onnx/onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[INFO] Detected input dimensions from the model: (-1, 3, 160, 160)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 3, 160, 160) for input: input_1
[INFO] Using optimization profile opt shape: (1, 3, 160, 160) for input: input_1
[INFO] Using optimization profile max shape: (2, 3, 160, 160) for input: input_1
[WARNING] Half2 support requested on hardware without native FP16 support, performance will be negatively affected.
[INFO] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[INFO] Detected 1 inputs and 1 output network tensors.
[INFO] Finished Gesture
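The optimization-profile lines in the log above are what you would also get from `trtexec` with explicit shape flags. A hedged sketch (the `trtexec` flags exist in TensorRT 7.x; the ONNX filename and output path are assumptions for illustration):

```shell
# Build an FP16 engine for a dynamic-shape ONNX model, pinning the
# optimization profile to the shapes reported in the log above.
# trtexec ships inside the TensorRT container.
trtexec --onnx=gesture.onnx \
        --minShapes=input_1:1x3x160x160 \
        --optShapes=input_1:1x3x160x160 \
        --maxShapes=input_1:2x3x160x160 \
        --fp16 \
        --saveEngine=gesture.engine
```

Note that on a GTX 1080, which has no native FP16 support, the `--fp16` path produces the same "Half2 support requested" warning seen in the log; the build still completes, only slower.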

Can you try another way, as below?
Just run the following command according to the TLT CV Inference Pipeline Quick Start Scripts — Transfer Learning Toolkit 3.0 documentation:

$ bash

BTW, your error "no input dimension given" is similar to this topic.
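That error generally means the builder was handed a model with dynamic input dimensions but no concrete shape to use. One hedged workaround sketch, if you are building the engine yourself with `trtexec` (the input name and dimensions here match the GestureNet log earlier in the thread; the ONNX filename is an assumption):

```shell
# Supply the input shape explicitly so the builder does not fail on a
# dynamic-shape model with "no input dimension given".
trtexec --onnx=gesture.onnx --shapes=input_1:1x3x160x160
```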