Here is where I am stuck with tao-converter.
I keep using the same key. What may be the problem? Thanks
nvidia@ubuntu:~/Downloads/files$ docker login nvcr.io
Username: $oauthtoken
Password:
WARNING! Your password will be stored unencrypted in /home/nvidia/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Then I went to that JSON file and found another key…
nvidia@ubuntu:~/Downloads/files$ ./tao-converter -k JG9hdXRodG9rZW46TjJoa1lXZHpkV2hsWVRkeWNHRTViRGMwTURGcmRHMHdZM1E2TldJNVlUSTVNakF0Tm1Gall5MDBZVEl5TFRrMFpHRXRaalJsTmpVM1xxxxxxxxxx(Hidden) bodyposenet_deployable_v1.0.1/model.etlt
[INFO] [MemUsageChange] Init CUDA: CPU +303, GPU +0, now: CPU 327, GPU 9635 (MiB)
[INFO] [MemUsageChange] Init builder kernel library: CPU +403, GPU +378, now: CPU 749, GPU 10033 (MiB)
[INFO] ----------------------------------------------------------------
[INFO] Input filename: /tmp/filevMHXE5
[INFO] ONNX IR version: 0.0.0
[INFO] Opset version: 0
[INFO] Producer name:
[INFO] Producer version:
[INFO] Domain:
[INFO] Model version: 0
[INFO] Doc string:
[INFO] ----------------------------------------------------------------
[INFO] Model has no dynamic shape.
[ERROR] 4: [network.cpp::validate::2722] Error Code 4: Internal Error (Network must have at least one output)
[ERROR] Unable to create engine
Segmentation fault (core dumped)
nvidia@ubuntu:~/Downloads/files$
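For what it's worth, the `-k` value in the failing command above (starting `JG9hdXRodG9rZW46…`) looks like the base64 "auth" entry that `docker login` writes to `~/.docker/config.json`. That entry is only base64 of `$oauthtoken:<NGC API key>` and is not an `.etlt` decryption key; deployable models are decrypted with the key published on their NGC model card (`nvidia_tlt` in the working command later in this thread). A quick sketch to see what the config.json entry actually contains (`MY_NGC_API_KEY` is a placeholder):

```shell
# The "auth" field docker login stores is just base64 of "$oauthtoken:<NGC API key>".
# It is NOT the key that tao-converter needs for -k.
AUTH=$(printf '%s' '$oauthtoken:MY_NGC_API_KEY' | base64)
echo "$AUTH"
# Decoding it only returns the registry login credentials:
printf '%s' "$AUTH" | base64 -d
```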
nvidia@ubuntu:~/Downloads/files$ # Set dimensions of desired output model for inference/deployment
nvidia@ubuntu:~/Downloads/files$ INPUT_SHAPE=288x384x3
nvidia@ubuntu:~/Downloads/files$ # Set input name
nvidia@ubuntu:~/Downloads/files$ INPUT_NAME=input_1:0
nvidia@ubuntu:~/Downloads/files$ # Set opt profile shapes
nvidia@ubuntu:~/Downloads/files$ MAX_BATCH_SIZE=1
nvidia@ubuntu:~/Downloads/files$ OPT_BATCH_SIZE=1
nvidia@ubuntu:~/Downloads/files$
nvidia@ubuntu:~/Downloads/files$ # Convert to TensorRT engine (FP16).
nvidia@ubuntu:~/Downloads/files$ tao converter /home/nvidia/Downloads/files/bodyposenet_deployable_v1.0.1/model.etlt
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.11) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
~/.tao_mounts.json wasn't found. Falling back to obtain mount points and docker configs from ~/.tlt_mounts.json.
Please note that this will be deprecated going forward.
2022-08-23 11:55:25,149 [INFO] root: Registry: ['nvcr.io']
2022-08-23 11:55:25,242 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit-tf:v3.22.05-tf1.15.5-py3
2022-08-23 11:55:25,245 [INFO] tlt.components.docker_handler.docker_handler: The required docker doesn't exist locally/the manifest has changed. Pulling a new docker.
2022-08-23 11:55:25,245 [INFO] tlt.components.docker_handler.docker_handler: Pulling the required container. This may take several minutes if you're doing this for the first time. Please wait here.
...
Pulling from repository: nvcr.io/nvidia/tao/tao-toolkit-tf
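Note that the `tao converter` command above passes only the model path; the converter also needs at least the key (`-k`), an optimization profile (`-p`), and an engine output path (`-e`), as in the later commands in this thread. For reference, the `-p` argument can be assembled from the variables set above; a minimal sketch using the values from this thread:

```shell
INPUT_NAME=input_1:0
INPUT_SHAPE=288x384x3
OPT_BATCH_SIZE=1
MAX_BATCH_SIZE=1
# -p takes <input_name>,<min_shape>,<opt_shape>,<max_shape>
PROFILE="${INPUT_NAME},1x${INPUT_SHAPE},${OPT_BATCH_SIZE}x${INPUT_SHAPE},${MAX_BATCH_SIZE}x${INPUT_SHAPE}"
echo "$PROFILE"   # input_1:0,1x288x384x3,1x288x384x3,1x288x384x3
```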
The tao-converter still has an error…
I had set KEY to my own key beforehand.
nvidia@ubuntu:~/Downloads/files$ # Set dimensions of desired output model for inference/deployment
nvidia@ubuntu:~/Downloads/files$ INPUT_SHAPE=288x384x3
nvidia@ubuntu:~/Downloads/files$ # Set input name
nvidia@ubuntu:~/Downloads/files$ INPUT_NAME=input_1:0
nvidia@ubuntu:~/Downloads/files$ # Set opt profile shapes
nvidia@ubuntu:~/Downloads/files$ MAX_BATCH_SIZE=1
nvidia@ubuntu:~/Downloads/files$ OPT_BATCH_SIZE=1
nvidia@ubuntu:~/Downloads/files$
nvidia@ubuntu:~/Downloads/files$ # Convert to TensorRT engine (FP16).
nvidia@ubuntu:~/Downloads/files$ tao converter /workspace/tao-experiments/bpnet/models/exp_m1_final/bpnet_model.deploy.etlt -k $KEY -t fp16 -e /workspace/tao-experiments/bpnet/models/exp_m1_final/bpnet_model.$IN_HEIGHT.$IN_WIDTH.fp16.deploy.engine -p ${INPUT_NAME},1x$INPUT_SHAPE,${OPT_BATCH_SIZE}x$INPUT_SHAPE,${MAX_BATCH_SIZE}x$INPUT_SHAPE
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.11) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
~/.tao_mounts.json wasn't found. Falling back to obtain mount points and docker configs from ~/.tlt_mounts.json.
Please note that this will be deprecated going forward.
2022-08-23 12:18:41,400 [INFO] root: Registry: ['nvcr.io']
2022-08-23 12:18:41,486 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit-tf:v3.22.05-tf1.15.5-py3
2022-08-23 12:18:41,496 [INFO] root: No mount points were found in the /home/nvidia/.tlt_mounts.json file.
2022-08-23 12:18:41,496 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/nvidia/.tlt_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
Error response from daemon: Container 30459d860c80fd8976cbd3bb551385b1d9ddca99022a5fd651dd03ea49a98f95 is not running
2022-08-23 12:18:42,269 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
Traceback (most recent call last):
File "/home/nvidia/.local/lib/python3.8/site-packages/docker/api/client.py", line 259, in _raise_for_status
response.raise_for_status()
File "/usr/lib/python3/dist-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.41/containers/30459d860c80fd8976cbd3bb551385b1d9ddca99022a5fd651dd03ea49a98f95/stop
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/nvidia/.local/bin/tao", line 8, in <module>
sys.exit(main())
File "/home/nvidia/.local/lib/python3.8/site-packages/tlt/entrypoint/entrypoint.py", line 113, in main
local_instance.launch_command(
File "/home/nvidia/.local/lib/python3.8/site-packages/tlt/components/instance_handler/local_instance.py", line 319, in launch_command
docker_handler.run_container(command)
File "/home/nvidia/.local/lib/python3.8/site-packages/tlt/components/docker_handler/docker_handler.py", line 316, in run_container
self.stop_container()
File "/home/nvidia/.local/lib/python3.8/site-packages/tlt/components/docker_handler/docker_handler.py", line 323, in stop_container
self._container.stop()
File "/home/nvidia/.local/lib/python3.8/site-packages/docker/models/containers.py", line 436, in stop
return self.client.api.stop(self.id, **kwargs)
File "/home/nvidia/.local/lib/python3.8/site-packages/docker/utils/decorators.py", line 19, in wrapped
return f(self, resource_id, *args, **kwargs)
File "/home/nvidia/.local/lib/python3.8/site-packages/docker/api/container.py", line 1167, in stop
self._raise_for_status(res)
File "/home/nvidia/.local/lib/python3.8/site-packages/docker/api/client.py", line 261, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/home/nvidia/.local/lib/python3.8/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.NotFound: 404 Client Error: Not Found ("No such container: 30459d860c80fd8976cbd3bb551385b1d9ddca99022a5fd651dd03ea49a98f95")
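The two launcher messages in the log above (no mount points found, and the "Docker will run the commands as root" warning) can both be addressed with a mounts file. A hypothetical `~/.tao_mounts.json` along the lines the launcher describes — the paths and the `1000:1000` IDs are examples only; check your own with `id -u` and `id -g`:

```json
{
    "Mounts": [
        {
            "source": "/home/nvidia/Downloads/files",
            "destination": "/workspace/tao-experiments"
        }
    ],
    "DockerOptions": {
        "user": "1000:1000"
    }
}
```

With a mount like this, host paths under `/home/nvidia/Downloads/files` become visible inside the container at `/workspace/tao-experiments`, which is what the `tao converter` command above expects.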
nvidia@ubuntu:~/Downloads/files$
nvidia@ubuntu:~/Downloads/files$ tao converter /home/nvidia/Downloads/files/bodyposenet_deployable_v1.0.1/bpnet_model.deploy.etlt -k nvidia_tlt -p input_1:0,1x288x384x3,4x288x384x3,16x288x384x3 -t fp16 -m 16 -e trt.engine
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.11) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
~/.tao_mounts.json wasn't found. Falling back to obtain mount points and docker configs from ~/.tlt_mounts.json.
Please note that this will be deprecated going forward.
2022-08-23 12:26:45,634 [INFO] root: Registry: ['nvcr.io']
2022-08-23 12:26:45,721 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit-tf:v3.22.05-tf1.15.5-py3
2022-08-23 12:26:45,740 [INFO] root: No mount points were found in the /home/nvidia/.tlt_mounts.json file.
2022-08-23 12:26:45,740 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/nvidia/.tlt_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
Error response from daemon: Container 27f76784a36be87ef687dcc60628ef011fc19be3a139cf2d69b180441c0d9a8b is not running
2022-08-23 12:26:46,622 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
Traceback (most recent call last):
File "/home/nvidia/.local/lib/python3.8/site-packages/docker/api/client.py", line 259, in _raise_for_status
response.raise_for_status()
File "/usr/lib/python3/dist-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.41/containers/27f76784a36be87ef687dcc60628ef011fc19be3a139cf2d69b180441c0d9a8b/stop
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/nvidia/.local/bin/tao", line 8, in <module>
sys.exit(main())
File "/home/nvidia/.local/lib/python3.8/site-packages/tlt/entrypoint/entrypoint.py", line 113, in main
local_instance.launch_command(
File "/home/nvidia/.local/lib/python3.8/site-packages/tlt/components/instance_handler/local_instance.py", line 319, in launch_command
docker_handler.run_container(command)
File "/home/nvidia/.local/lib/python3.8/site-packages/tlt/components/docker_handler/docker_handler.py", line 316, in run_container
self.stop_container()
File "/home/nvidia/.local/lib/python3.8/site-packages/tlt/components/docker_handler/docker_handler.py", line 323, in stop_container
self._container.stop()
File "/home/nvidia/.local/lib/python3.8/site-packages/docker/models/containers.py", line 436, in stop
return self.client.api.stop(self.id, **kwargs)
File "/home/nvidia/.local/lib/python3.8/site-packages/docker/utils/decorators.py", line 19, in wrapped
return f(self, resource_id, *args, **kwargs)
File "/home/nvidia/.local/lib/python3.8/site-packages/docker/api/container.py", line 1167, in stop
self._raise_for_status(res)
File "/home/nvidia/.local/lib/python3.8/site-packages/docker/api/client.py", line 261, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/home/nvidia/.local/lib/python3.8/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.NotFound: 404 Client Error: Not Found ("No such container: 27f76784a36be87ef687dcc60628ef011fc19be3a139cf2d69b180441c0d9a8b")
nvidia@ubuntu:~/Downloads/files$
Your command runs, thanks. I tried my bpnet_model.etlt and it created the engine. :>
Now that I have the trt.engine, how do I use it in DeepStream? (I am looking at this link, but it only talks about the pre-trained models: Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation)
Can I convert it to ONNX or something similar so that I can use the model in dusty-nv inference? Thanks
By the way, in my picture there is already an engine file; does that mean I don't need to run tao-converter? Thanks
The reason I need bpnet is that I want to use dusty-nv inference.
And we want to train the pose model for better results.
By the way, I have created a forum topic: How to use the trt engine in dusty-nv posenet?
Please also use a command similar to the one below to run it. I can run it successfully on my Orin.
$ ./deepstream-bodypose2d-app 1 ../../../configs/bodypose2d_tao/sample_bodypose2d_model_config.txt 0 0 file:///home/nvidia/morgan/bpnet/deepstream_tao_apps_master/apps/tao_others/deepstream-bodypose2d-app/original-image.png ./body2dout
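To point the bodypose2d app at an engine you built yourself, the model config it takes can reference either the serialized engine or the original `.etlt`. A hypothetical excerpt, assuming the standard nvinfer-style keys (the paths here are examples, not the contents of the shipped sample_bodypose2d_model_config.txt):

```ini
[property]
# Use the engine built by tao-converter (example path):
model-engine-file=/home/nvidia/Downloads/files/trt.engine
# Or let DeepStream build the engine itself from the encrypted model:
# tlt-encoded-model=/home/nvidia/Downloads/files/bodyposenet_deployable_v1.0.1/bpnet_model.deploy.etlt
# tlt-model-key=nvidia_tlt
```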
There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks