How to run bpnet in the TAO Toolkit?

Hi,

I can’t find the output body2dout file… :<

But my focus is how to use a trained bpnet model… I'm still stuck at tao-converter. :<

nvidia@ubuntu:~/deepstream_tao_apps/apps/tao_others/deepstream-bodypose2d-app$ ./deepstream-bodypose2d-app 2 ../../../configs/bodypose2d_tao/sample_bodypose2d_model_config.txt 0 0 file:///home/nvidia/Pictures/original-image.png ./body2dout
Request sink_0 pad from streammux
joint Edges 1 , 8
joint Edges 8 , 9
joint Edges 9 , 10
joint Edges 1 , 11
joint Edges 11 , 12
joint Edges 12 , 13
joint Edges 1 , 2
joint Edges 2 , 3
joint Edges 3 , 4
joint Edges 2 , 16
joint Edges 1 , 5
joint Edges 5 , 6
joint Edges 6 , 7
joint Edges 5 , 17
joint Edges 1 , 0
joint Edges 0 , 14
joint Edges 0 , 15
joint Edges 14 , 16
joint Edges 15 , 17
connections 0 , 1
connections 1 , 2
connections 1 , 5
connections 2 , 3
connections 3 , 4
connections 5 , 6
connections 6 , 7
connections 2 , 8
connections 8 , 9
connections 9 , 10
connections 5 , 11
connections 11 , 12
connections 12 , 13
connections 0 , 14
connections 14 , 16
connections 8 , 11
connections 15 , 17
connections 0 , 15
Now playing: file:///home/nvidia/Pictures/original-image.png
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:02.631460108  8003 0xaaaad61d20c0 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/nvidia/deepstream_tao_apps/models/bodypose2d/model.etlt_b32_gpu0_fp16.engine
INFO: [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1:0       288x384x3       min: 1x288x384x3     opt: 32x288x384x3    Max: 32x288x384x3    
1   OUTPUT kFLOAT heatmap_out/BiasAdd:0 36x48x19        min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT conv2d_transpose_1/BiasAdd:0 144x192x38      min: 0               opt: 0               Max: 0               

0:00:02.791108509  8003 0xaaaad61d20c0 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /home/nvidia/deepstream_tao_apps/models/bodypose2d/model.etlt_b32_gpu0_fp16.engine
0:00:03.024526111  8003 0xaaaad61d20c0 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-infer-engine1> [UID 1]: Load new model:../../../configs/bodypose2d_tao/bodypose2d_pgie_config.txt sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
Running...
Decodebin child added: pngparse0
Decodebin child added: pngdec0

(deepstream-bodypose2d-app:8003): GStreamer-WARNING **: 11:24:28.567: Name 'source_vidconv' is not unique in bin 'source-bin-00', not adding
In cb_newpad
###Decodebin did not pick nvidia decoder plugin.
Frame Number = 0 Person Count = 3
End of stream
Returned, stopping playback
Average fps 0.000233
Totally 3 persons are inferred
Deleting pipeline
nvidia@ubuntu:~/deepstream_tao_apps/apps/tao_others/deepstream-bodypose2d-app$ 
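(Note on the log above: the "###Decodebin did not pick nvidia decoder plugin" message is expected here, since a PNG still is decoded by the software pngdec element rather than Jetson's hardware video decoder; for a single image it is harmless.)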

Hi,

Here is where I am stuck with tao-converter.
I keep using the same key. What may be the problem? Thx

nvidia@ubuntu:~/Downloads/files$ docker login nvcr.io
Username: $oauthtoken
Password: 
WARNING! Your password will be stored unencrypted in /home/nvidia/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Then I went to that JSON file and found another key…
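For context, the value stored in ~/.docker/config.json is not a TAO model key: Docker saves the login as base64("username:password"), so for NGC it is base64("$oauthtoken:<NGC API key>"). Passing that blob to -k means the .etlt cannot be decrypted, which typically shows up later as parser errors such as "Network must have at least one output". A quick sketch to see what the blob really contains (the token below is a redacted placeholder, not a real value):

$ # "JG9hdXRodG9rZW46" is base64 for "$oauthtoken:"; the remainder is just the NGC API key.
$ echo 'JG9hdXRodG9rZW46<rest-of-stored-token>' | base64 -d
$oauthtoken:<NGC API key>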

nvidia@ubuntu:~/Downloads/files$ ./tao-converter -k JG9hdXRodG9rZW46TjJoa1lXZHpkV2hsWVRkeWNHRTViRGMwTURGcmRHMHdZM1E2TldJNVlUSTVNakF0Tm1Gall5MDBZVEl5TFRrMFpHRXRaalJsTmpVM1xxxxxxxxxx(Hidden)  bodyposenet_deployable_v1.0.1/model.etlt
[INFO] [MemUsageChange] Init CUDA: CPU +303, GPU +0, now: CPU 327, GPU 9635 (MiB)
[INFO] [MemUsageChange] Init builder kernel library: CPU +403, GPU +378, now: CPU 749, GPU 10033 (MiB)
[INFO] ----------------------------------------------------------------
[INFO] Input filename:   /tmp/filevMHXE5
[INFO] ONNX IR version:  0.0.0
[INFO] Opset version:    0
[INFO] Producer name:    
[INFO] Producer version: 
[INFO] Domain:           
[INFO] Model version:    0
[INFO] Doc string:       
[INFO] ----------------------------------------------------------------
[INFO] Model has no dynamic shape.
[ERROR] 4: [network.cpp::validate::2722] Error Code 4: Internal Error (Network must have at least one output)
[ERROR] Unable to create engine
Segmentation fault (core dumped)
nvidia@ubuntu:~/Downloads/files$ 

model.etlt (64.1 MB)

Hi,

It is kind of working, I guess. Still waiting…

nvidia@ubuntu:~/Downloads/files$ # Set dimensions of desired output model for inference/deployment
nvidia@ubuntu:~/Downloads/files$ INPUT_SHAPE=288x384x3
nvidia@ubuntu:~/Downloads/files$ # Set input name
nvidia@ubuntu:~/Downloads/files$ INPUT_NAME=input_1:0
nvidia@ubuntu:~/Downloads/files$ # Set opt profile shapes
nvidia@ubuntu:~/Downloads/files$ MAX_BATCH_SIZE=1
nvidia@ubuntu:~/Downloads/files$ OPT_BATCH_SIZE=1
nvidia@ubuntu:~/Downloads/files$ 
nvidia@ubuntu:~/Downloads/files$ # Convert to TensorRT engine (FP16).
nvidia@ubuntu:~/Downloads/files$ tao converter /home/nvidia/Downloads/files/bodyposenet_deployable_v1.0.1/model.etlt
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.11) or chardet (3.0.4) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
~/.tao_mounts.json wasn't found. Falling back to obtain mount points and docker configs from ~/.tlt_mounts.json.
Please note that this will be deprecated going forward.
2022-08-23 11:55:25,149 [INFO] root: Registry: ['nvcr.io']
2022-08-23 11:55:25,242 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit-tf:v3.22.05-tf1.15.5-py3
2022-08-23 11:55:25,245 [INFO] tlt.components.docker_handler.docker_handler: The required docker doesn't exist locally/the manifest has changed. Pulling a new docker.
2022-08-23 11:55:25,245 [INFO] tlt.components.docker_handler.docker_handler: Pulling the required container. This may take several minutes if you're doing this for the first time. Please wait here.
...
Pulling from repository: nvcr.io/nvidia/tao/tao-toolkit-tf


Can you find body2dout.jpg in the folder?

Hi,
I searched the whole disk.
I found an empty one.
And when I changed the name to body2dout1, I couldn't find that either…

nvidia@ubuntu:~/deepstream_tao_apps/apps/tao_others/deepstream-bodypose2d-app$ ls -l
total 35144
-rw-rw-r-- 1 nvidia nvidia        0 Aug  3 17:03 body2dout.jpg
-rw-rw-r-- 1 nvidia nvidia      712 Aug  3 15:59 bodypose2d_app_config.yml
-rw-rw-r-- 1 nvidia nvidia     6569 Aug  3 15:59 bodypose2d_pipeline.png
-rwxrwxr-x 1 nvidia nvidia   425936 Aug  3 16:28 deepstream-bodypose2d-app
-rw-rw-r-- 1 nvidia nvidia    46293 Aug  3 15:59 deepstream_bodypose2d_app.cpp
-rw-rw-r-- 1 nvidia nvidia   744320 Aug  3 16:28 deepstream_bodypose2d_app.o
-rw-rw-r-- 1 nvidia nvidia     4129 Aug  3 15:59 deepstream_bodypose2d_meta.cpp
-rw-rw-r-- 1 nvidia nvidia    62184 Aug  3 16:28 deepstream_bodypose2d_meta.o
-rw-rw-r-- 1 nvidia nvidia     1982 Aug  3 15:59 ds_bodypose2d_meta.h
-rw-rw-r-- 1 nvidia nvidia   425528 Aug  3 16:28 ds_yml_parse.o
-rw-rw-r-- 1 nvidia nvidia     3029 Aug  3 15:59 Makefile
-rw-rw-r-- 1 nvidia nvidia     4008 Aug  3 15:59 README.md
-rw-r--r-- 1 root   root   12180381 Aug 22 11:38 test.264
-rw-r--r-- 1 root   root   22060864 Aug 22 13:09 test.mp4

For tao-converter, there is still an error…
I had already set KEY to my own key beforehand.

nvidia@ubuntu:~/Downloads/files$ # Set dimensions of desired output model for inference/deployment
nvidia@ubuntu:~/Downloads/files$ INPUT_SHAPE=288x384x3
nvidia@ubuntu:~/Downloads/files$ # Set input name
nvidia@ubuntu:~/Downloads/files$ INPUT_NAME=input_1:0
nvidia@ubuntu:~/Downloads/files$ # Set opt profile shapes
nvidia@ubuntu:~/Downloads/files$ MAX_BATCH_SIZE=1
nvidia@ubuntu:~/Downloads/files$ OPT_BATCH_SIZE=1
nvidia@ubuntu:~/Downloads/files$ 
nvidia@ubuntu:~/Downloads/files$ # Convert to TensorRT engine (FP16).
nvidia@ubuntu:~/Downloads/files$ tao converter /workspace/tao-experiments/bpnet/models/exp_m1_final/bpnet_model.deploy.etlt -k $KEY -t fp16 -e /workspace/tao-experiments/bpnet/models/exp_m1_final/bpnet_model.$IN_HEIGHT.$IN_WIDTH.fp16.deploy.engine -p ${INPUT_NAME},1x$INPUT_SHAPE,${OPT_BATCH_SIZE}x$INPUT_SHAPE,${MAX_BATCH_SIZE}x$INPUT_SHAPE
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.11) or chardet (3.0.4) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
~/.tao_mounts.json wasn't found. Falling back to obtain mount points and docker configs from ~/.tlt_mounts.json.
Please note that this will be deprecated going forward.
2022-08-23 12:18:41,400 [INFO] root: Registry: ['nvcr.io']
2022-08-23 12:18:41,486 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit-tf:v3.22.05-tf1.15.5-py3
2022-08-23 12:18:41,496 [INFO] root: No mount points were found in the /home/nvidia/.tlt_mounts.json file.
2022-08-23 12:18:41,496 [WARNING] tlt.components.docker_handler.docker_handler: 
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/nvidia/.tlt_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
Error response from daemon: Container 30459d860c80fd8976cbd3bb551385b1d9ddca99022a5fd651dd03ea49a98f95 is not running
2022-08-23 12:18:42,269 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
Traceback (most recent call last):
  File "/home/nvidia/.local/lib/python3.8/site-packages/docker/api/client.py", line 259, in _raise_for_status
    response.raise_for_status()
  File "/usr/lib/python3/dist-packages/requests/models.py", line 940, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.41/containers/30459d860c80fd8976cbd3bb551385b1d9ddca99022a5fd651dd03ea49a98f95/stop

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/nvidia/.local/bin/tao", line 8, in <module>
    sys.exit(main())
  File "/home/nvidia/.local/lib/python3.8/site-packages/tlt/entrypoint/entrypoint.py", line 113, in main
    local_instance.launch_command(
  File "/home/nvidia/.local/lib/python3.8/site-packages/tlt/components/instance_handler/local_instance.py", line 319, in launch_command
    docker_handler.run_container(command)
  File "/home/nvidia/.local/lib/python3.8/site-packages/tlt/components/docker_handler/docker_handler.py", line 316, in run_container
    self.stop_container()
  File "/home/nvidia/.local/lib/python3.8/site-packages/tlt/components/docker_handler/docker_handler.py", line 323, in stop_container
    self._container.stop()
  File "/home/nvidia/.local/lib/python3.8/site-packages/docker/models/containers.py", line 436, in stop
    return self.client.api.stop(self.id, **kwargs)
  File "/home/nvidia/.local/lib/python3.8/site-packages/docker/utils/decorators.py", line 19, in wrapped
    return f(self, resource_id, *args, **kwargs)
  File "/home/nvidia/.local/lib/python3.8/site-packages/docker/api/container.py", line 1167, in stop
    self._raise_for_status(res)
  File "/home/nvidia/.local/lib/python3.8/site-packages/docker/api/client.py", line 261, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/home/nvidia/.local/lib/python3.8/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.NotFound: 404 Client Error: Not Found ("No such container: 30459d860c80fd8976cbd3bb551385b1d9ddca99022a5fd651dd03ea49a98f95")
nvidia@ubuntu:~/Downloads/files$ 
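A note on this failure: the tao launcher starts the converter inside the x86_64 TAO container (nvcr.io/nvidia/tao/tao-toolkit-tf), which cannot run on an aarch64 Jetson; that is most likely why the container exits immediately with "Container ... is not running". On Jetson, the standalone aarch64 tao-converter binary is the supported path.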

For tao-converter, please use the command below.

tao-converter /tao_models/bpnet_model/bpnet.etlt \
    -k nvidia_tlt \
    -p input_1:0,1x288x384x3,4x288x384x3,16x288x384x3 \
    -t fp16 \
    -m 16 \
    -e trt.engine
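For reference: -k nvidia_tlt is the public key for the deployable NGC model, -p defines the dynamic-shape optimization profile as <input_name>,<min_shape>,<opt_shape>,<max_shape>, -t fp16 builds the engine in FP16 precision, -m 16 caps the maximum batch size, and -e sets the output engine path.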

Hi,

Still getting an error…

nvidia@ubuntu:~/Downloads/files$ tao converter /home/nvidia/Downloads/files/bodyposenet_deployable_v1.0.1/bpnet_model.deploy.etlt -k nvidia_tlt     -p input_1:0,1x288x384x3,4x288x384x3,16x288x384x3 -t fp16 -m 16 -e trt.engine
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.11) or chardet (3.0.4) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
~/.tao_mounts.json wasn't found. Falling back to obtain mount points and docker configs from ~/.tlt_mounts.json.
Please note that this will be deprecated going forward.
2022-08-23 12:26:45,634 [INFO] root: Registry: ['nvcr.io']
2022-08-23 12:26:45,721 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit-tf:v3.22.05-tf1.15.5-py3
2022-08-23 12:26:45,740 [INFO] root: No mount points were found in the /home/nvidia/.tlt_mounts.json file.
2022-08-23 12:26:45,740 [WARNING] tlt.components.docker_handler.docker_handler: 
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/nvidia/.tlt_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
Error response from daemon: Container 27f76784a36be87ef687dcc60628ef011fc19be3a139cf2d69b180441c0d9a8b is not running
2022-08-23 12:26:46,622 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
Traceback (most recent call last):
  File "/home/nvidia/.local/lib/python3.8/site-packages/docker/api/client.py", line 259, in _raise_for_status
    response.raise_for_status()
  File "/usr/lib/python3/dist-packages/requests/models.py", line 940, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.41/containers/27f76784a36be87ef687dcc60628ef011fc19be3a139cf2d69b180441c0d9a8b/stop

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/nvidia/.local/bin/tao", line 8, in <module>
    sys.exit(main())
  File "/home/nvidia/.local/lib/python3.8/site-packages/tlt/entrypoint/entrypoint.py", line 113, in main
    local_instance.launch_command(
  File "/home/nvidia/.local/lib/python3.8/site-packages/tlt/components/instance_handler/local_instance.py", line 319, in launch_command
    docker_handler.run_container(command)
  File "/home/nvidia/.local/lib/python3.8/site-packages/tlt/components/docker_handler/docker_handler.py", line 316, in run_container
    self.stop_container()
  File "/home/nvidia/.local/lib/python3.8/site-packages/tlt/components/docker_handler/docker_handler.py", line 323, in stop_container
    self._container.stop()
  File "/home/nvidia/.local/lib/python3.8/site-packages/docker/models/containers.py", line 436, in stop
    return self.client.api.stop(self.id, **kwargs)
  File "/home/nvidia/.local/lib/python3.8/site-packages/docker/utils/decorators.py", line 19, in wrapped
    return f(self, resource_id, *args, **kwargs)
  File "/home/nvidia/.local/lib/python3.8/site-packages/docker/api/container.py", line 1167, in stop
    self._raise_for_status(res)
  File "/home/nvidia/.local/lib/python3.8/site-packages/docker/api/client.py", line 261, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/home/nvidia/.local/lib/python3.8/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.NotFound: 404 Client Error: Not Found ("No such container: 27f76784a36be87ef687dcc60628ef011fc19be3a139cf2d69b180441c0d9a8b")
nvidia@ubuntu:~/Downloads/files$ 


And I tried running it as root:

root@ubuntu:/home/nvidia/Downloads/files# ./tao-converter /home/nvidia/Downloads/files/bodyposenet_deployable_v1.0.1/bpnet_model.deploy.etlt -k nvidia_tlt     -p input_1:0,1x288x384x3,4x288x384x3,16x288x384x3 -t fp16 -m 16 -e trt.engine
Error: no input dimensions given
root@ubuntu:/home/nvidia/Downloads/files# 

I will check on my Orin and give you an update soon.

No issue when using tao-converter to generate the TensorRT engine on my side.
My steps:
$ wget 'https://api.ngc.nvidia.com/v2/models/nvidia/tao/bodyposenet/versions/deployable_v1.0.1/files/model.etlt' (according to BodyPoseNet | NVIDIA NGC)
$ wget --content-disposition 'https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-converter/versions/v3.22.05_trt8.4_aarch64/files/tao-converter' (according to TAO Converter | NVIDIA NGC)
$ chmod +x tao-converter
$ ./tao-converter model.etlt -k nvidia_tlt -p input_1:0,1x288x384x3,4x288x384x3,16x288x384x3 -t fp16 -m 16 -e trt.engine
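As an optional sanity check (my suggestion, not part of the original steps), the resulting engine can be deserialized with trtexec, which ships with TensorRT and on Jetson is usually found under /usr/src/tensorrt/bin:

$ # Load the engine and time a few inference passes at the profile's opt shapes.
$ /usr/src/tensorrt/bin/trtexec --loadEngine=trt.engine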

Dear Morganh,

Yours runs, thx. I tried my bpnet_model.etlt and it created the engine. :>
Now that I have the trt.engine, how do I use it in DeepStream? (I am looking at this link, but it only talks about the pre-trained models: Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation)
And can I convert it to ONNX or something so that I can use the model with dusty-nv's inference? Thx

Btw, in my picture there is an engine file already; does that mean I don't need to run tao-converter? Thx

Hi,

The reason why I need bpnet is that I want to use dusty-nv's inference.
And we want to train the pose model for better results.
Btw, I have created a forum topic: How to use the trt engine in dusty-nv posenet?

Thx

The default 1.4.0 notebook should not contain the .engine files.
You can wget it again to double-check.

And for running bpnet in deepstream, please follow GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream

BTW, make sure you download the models according to GitHub - NVIDIA-AI-IOT/deepstream_tao_apps at release/tao3.0_ds6.1ga

And please use a command similar to the one below to run it. I can run it successfully on my Orin.
$ ./deepstream-bodypose2d-app 1 ../../../configs/bodypose2d_tao/sample_bodypose2d_model_config.txt 0 0 file:///home/nvidia/morgan/bpnet/deepstream_tao_apps_master/apps/tao_others/deepstream-bodypose2d-app/original-image.png ./body2dout

Dear Morganh,

Yes, I saw the body2dout.jpg; it is the same as yours, thx.
I am still trying to figure out how to use the trt.engine in DeepStream…
The link, GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream,
only shows how to run the app; it does not say how to plug in my newly created trt.engine…

And how do I run it in real time? I have a C920 webcam.

Thx

Modify this line.

https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/configs/bodypose2d_tao/bodypose2d_pgie_config.txt#L51
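For illustration, the change amounts to pointing the nvinfer config at the locally built engine. A minimal sketch, assuming line 51 holds the engine path and using the standard nvinfer property name (the path below is hypothetical):

[property]
# Replace the shipped engine path with the locally generated one.
model-engine-file=/home/nvidia/Downloads/files/trt.engine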

Thanks, it is using my engine now.

I only have two questions left:

  1. How do I run DeepStream with a webcam or an RTSP stream? Still looking for a tutorial link…
  2. How do I convert the engine into ONNX?

Thx

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

The current version of deepstream_tao_apps does not support RTSP input yet. If you want to implement it, you can refer to "The LPD and LPR models in the TAO tool do not work well" - #22 by Morganh for reference.
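For the webcam part: independent of the sample app, a quick way to confirm the C920 is visible to GStreamer is a plain v4l2src pipeline (a generic sketch; /dev/video0 and the caps are assumptions for your setup):

$ gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,width=1280,height=720' ! videoconvert ! autovideosink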

No, you cannot convert an .etlt model to an ONNX file.
