Hi, I am trying to deploy the gaze detection model. I am following the steps explained here: TLT CV Inference Pipeline Quick Start Scripts — Transfer Learning Toolkit 3.0 documentation. However, when I run tlt_cv_init.sh I get the following error: Error: no input dimensions given. This happens in this step: ./tlt_cv_compile.sh gaze $tlt_encode_key_gaze (inside the tlt_cv_init.sh file).
I already checked the models: model.etlt is in the right directory. However, I did not find model.plan in the path ${repo_location}/gaze_facegrid_tlt/1/
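To double-check, I listed which model repositories are missing their compiled plan. A quick sketch (assuming the quick-start layout where each model lives under ${repo_location}/<name>/1/ and the compiled engine is named model.plan):

```shell
#!/bin/sh
# Report every model repository that has no compiled model.plan yet.
# repo_location is the models directory used by tlt_cv_init.sh
# (e.g. /root/Downloads/tlt_cv_inference_pipeline_models) -- adjust as needed.
check_missing_plans() {
    repo_location="$1"
    for dir in "$repo_location"/*/1; do
        [ -d "$dir" ] || continue
        if [ ! -f "$dir/model.plan" ]; then
            echo "missing plan: $dir"
        fi
    done
}
```

Running `check_missing_plans /root/Downloads/tlt_cv_inference_pipeline_models` shows gaze_facegrid_tlt among the models without a plan.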
Can someone help me?
Thanks
I am using Docker.
My Dockerfile is:
# syntax=docker/dockerfile:1
FROM ubuntu:18.04

RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get install -y wget \
    zip \
    vim

# Install docker
RUN apt install -y apt-transport-https ca-certificates curl software-properties-common gnupg-agent
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
RUN apt-get install -y docker-ce docker-ce-cli containerd.io

# NGC CLI binary
RUN cd /usr/local/bin && wget -O ngccli_cat_linux.zip https://ngc.nvidia.com/downloads/ngccli_cat_linux.zip && unzip -o ngccli_cat_linux.zip && chmod u+x ngc
RUN --mount=type=secret,id=apisecret,dst=/secret/apisecret.txt cat /secret/apisecret.txt | ngc config set

# TLT
RUN ngc registry resource download-version "nvidia/tlt_cv_inference_pipeline_quick_start:v0.1-dp"
WORKDIR tlt_cv_inference_pipeline_quick_start_vv0.1-dp
RUN cd scripts && chmod +x *.sh
ENV ENCODING_KEY=nvidia_tlt
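For reference, since the Dockerfile uses a BuildKit secret mount for the NGC API key, I build the image roughly like this (the secret file name apisecret.txt and the image tag jarvis-gaze-model are my own choices, not from the quick-start docs):

```shell
# Build with BuildKit enabled so the --mount=type=secret instruction works.
# apisecret.txt contains the NGC API key; the secret never lands in an image layer.
DOCKER_BUILDKIT=1 docker build \
    --secret id=apisecret,src=apisecret.txt \
    -t jarvis-gaze-model .
```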
##################################
Once I have created my image, I use:
docker run -v /var/run/docker.sock:/var/run/docker.sock -it jarvis-gaze-model bash
to launch my container. Inside it, I go to the /tlt_cv_inference_pipeline_quick_start_vv0.1-dp/scripts folder and execute tlt_cv_init.sh. The logs for this operation are mostly downloads of containers and models. There are a lot of them, so I saved them to a file, because I exceeded the character limit when I tried to paste them here:
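For anyone skimming the attachment, I pulled just the error/warning lines out of the saved log with a quick filter (the pattern is only my guess at which lines matter):

```shell
# Print only the error/warning lines (with line numbers) from a saved log file.
extract_errors() {
    grep -n -E 'Error|ERROR|WARN' "$1"
}
```

Used as `extract_errors full_logs.txt`, this surfaces the "Error: no input dimensions given" line immediately.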
full_logs.txt (191.8 KB)
Just before the error, I get the following logs:
[INFO] Finished pulling containers and models
[INFO] Beginning TensorRT plan compilation with tlt-converter…
[INFO] This may take a few minutes
[INFO] Using this location for models: /root/Downloads/tlt_cv_inference_pipeline_models
[INFO] Compiling Body Pose with key 'nvidia_tlt'...
=====================
== NVIDIA TensorRT ==
=====================
NVIDIA Release 20.11 (build 17147175)
NVIDIA TensorRT 7.2.1 (c) 2016-2020, NVIDIA CORPORATION. All rights reserved.
Container image (c) 2020, NVIDIA CORPORATION. All rights reserved.
https://developer.nvidia.com/tensorrt
To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh
To install the open-source samples corresponding to this TensorRT release version run /opt/tensorrt/install_opensource.sh.
To build the open source parsers, plugins, and samples for current top-of-tree on master or a different branch, run /opt/tensorrt/install_opensource.sh -b
See https://github.com/NVIDIA/TensorRT for more information.
Error: no input dimensions given
#############
This specific error happens while compiling Body Pose, but the same error and logs appear when I only run the gaze estimation compilation.
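From what I can tell, "no input dimensions given" is tlt-converter's own message: for .etlt models it cannot read the input shape from the file, so the dimensions must be passed on the command line. A hedged sketch of the kind of call I would expect tlt_cv_compile.sh to make for one model (the dimensions and file paths below are placeholders, not the real gaze model values):

```shell
# Hypothetical tlt-converter invocation (a sketch, not the script's exact call).
# -k : the model encoding key (nvidia_tlt for the public quick-start models)
# -d : input dimensions as comma-separated C,H,W -- placeholder values here;
#      the "Error: no input dimensions given" message appears when -d is absent
# -t : precision of the generated TensorRT engine
# -e : path of the model.plan engine file to write
tlt-converter model.etlt \
    -k "$ENCODING_KEY" \
    -d 3,224,224 \
    -t fp16 \
    -e model.plan
```

So my guess is that the -d argument is somehow not reaching tlt-converter inside tlt_cv_compile.sh, which would also explain why no model.plan is produced.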