TAO apps not working on Ubuntu on a desktop PC
I installed DeepStream 6.2 on Ubuntu on a native desktop PC.
It works fine when I run the following command.
deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.yml
However, when I try to run the TAO apps using Docker, I get an error.
Why is this?
■ error message
/app/src/deepstream_tao_apps/apps/tao_others/deepstream-gesture-app# ./deepstream-gesture-app gesture_app_config.yml
One main element could not be created. Exiting.
■ TAO apps repository
■ Docker image
deepstream:6.2-triton
if (!pipeline || !streammux) {
g_printerr ("One main element could not be created. Exiting.\n");
return -1;
}
It looks like the GStreamer plugins provided by DeepStream cannot be created successfully.
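A quick way to confirm that inside the container is to ask GStreamer to inspect a couple of the DeepStream elements (standard element names; clearing the plugin registry cache forces a rescan):
gst-inspect-1.0 nvstreammux
gst-inspect-1.0 nvinfer
rm -rf ~/.cache/gstreamer-1.0   # optional: clear the plugin registry cache, then retry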
Do you have nvidia-container-toolkit installed?
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/index.html
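For reference, the usual steps on an Ubuntu host look roughly like this (a sketch only; follow the linked page for the exact repository setup):
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker   # register the NVIDIA runtime with Docker
sudo systemctl restart docker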
Yes.
I installed nvidia-container-toolkit on Ubuntu.
But now the error messages below appear.
./deepstream-gesture-app gesture_app_config.yml
(gst-plugin-scanner:95): GStreamer-WARNING **: 11:19:31.516: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_eglglessink.so': libcuda.so.1: cannot open shared object file: No such file or directory
(gst-plugin-scanner:95): GStreamer-WARNING **: 11:19:31.519: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_deepstream_bins.so': libcuda.so.1: cannot open shared object file: No such file or directory
(gst-plugin-scanner:95): GStreamer-WARNING **: 11:19:31.563: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory
(gst-plugin-scanner:95): GStreamer-WARNING **: 11:19:31.581: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_dsexample.so': libcuda.so.1: cannot open shared object file: No such file or directory
(gst-plugin-scanner:95): GStreamer-WARNING **: 11:19:31.586: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so': libcuda.so.1: cannot open shared object file: No such file or directory
(gst-plugin-scanner:95): GStreamer-WARNING **: 11:19:31.587: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_nvblender.so': libcuda.so.1: cannot open shared object file: No such file or directory
(gst-plugin-scanner:95): GStreamer-WARNING **: 11:19:31.587: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libgstnvvideoconvert.so': libcuda.so.1: cannot open shared object file: No such file or directory
(gst-plugin-scanner:95): GStreamer-WARNING **: 11:19:31.587: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_dewarper.so': libcuda.so.1: cannot open shared object file: No such file or directory
(gst-plugin-scanner:95): GStreamer-WARNING **: 11:19:31.588: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
(gst-plugin-scanner:95): GStreamer-WARNING **: 11:19:31.588: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libcustom2d_preprocess.so': libcuda.so.1: cannot open shared object file: No such file or directory
(gst-plugin-scanner:95): GStreamer-WARNING **: 11:19:31.588: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_multistream.so': libcuda.so.1: cannot open shared object file: No such file or directory
(gst-plugin-scanner:95): GStreamer-WARNING **: 11:19:31.588: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_tracker.so': libcuda.so.1: cannot open shared object file: No such file or directory
(gst-plugin-scanner:95): GStreamer-WARNING **: 11:19:31.588: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_multistreamtiler.so': libcuda.so.1: cannot open shared object file: No such file or directory
(gst-plugin-scanner:95): GStreamer-WARNING **: 11:19:31.588: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_preprocess.so': libcuda.so.1: cannot open shared object file: No such file or directory
(gst-plugin-scanner:95): GStreamer-WARNING **: 11:19:31.599: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstmpeg2dec.so': libmpeg2.so.0: cannot open shared object file: No such file or directory
(gst-plugin-scanner:95): GStreamer-WARNING **: 11:19:31.784: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstmpg123.so': libmpg123.so.0: cannot open shared object file: No such file or directory
(gst-plugin-scanner:95): GStreamer-WARNING **: 11:19:31.798: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstmpeg2enc.so': libmpeg2encpp-2.1.so.0: cannot open shared object file: No such file or directory
(gst-plugin-scanner:95): GStreamer-WARNING **: 11:19:31.800: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstchromaprint.so': libavcodec.so.58: cannot open shared object file: No such file or directory
(gst-plugin-scanner:95): GStreamer-WARNING **: 11:19:31.801: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstopenmpt.so': libmpg123.so.0: cannot open shared object file: No such file or directory
One main element could not be created. Exiting.
ll /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so
-rwxr-xr-x 1 root root 285688 Jan 13 2023 /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so*
I ran the command below to check GPU access from a Docker container.
sudo docker run --rm --runtime=nvidia --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi
Wed Jul 26 11:22:08 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.125.06 Driver Version: 525.125.06 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:0F:00.0 On | N/A |
| 0% 49C P5 16W / 125W | 588MiB / 6144MiB | 29% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
Looks like the driver is not properly installed.
libcuda.so is the CUDA userspace driver library shared between the host and Docker.
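It is only made available inside the container when the container is started with GPU access through the NVIDIA runtime. A minimal launch sketch for the image you mentioned (the volume mount is just an illustrative assumption):
sudo docker run --gpus all --rm -it \
    -v /path/to/deepstream_tao_apps:/app/src/deepstream_tao_apps \
    nvcr.io/nvidia/deepstream:6.2-triton /bin/bash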
Is it safe to install CUDA 12.2 from the page below?
Or do I need 11.8, via sudo apt-get install cuda-toolkit-11-8?
Or is this unrelated to the CUDA toolkit package itself?
CUDA Toolkit 12.2 Downloads | NVIDIA Developer (deb_network)
Does the CUDA version reported by nvcc below need to be 12.0?
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
I fixed the environment variable and that cured it.
However, the TAO apps still do not work.
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
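For reference, the kind of change involved is pointing the shell at the CUDA 11.8 toolkit (an illustrative sketch; paths assume a default /usr/local/cuda-11.8 install):
export PATH=/usr/local/cuda-11.8/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.8/lib64:$LD_LIBRARY_PATH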
That part was resolved.
However, the following error appeared again.
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307 , condition: mObjectCounter.use_count() == 1. Destroying the builder object before destroying the objects created by the builder object will result in undefined behavior.
)
WARNING: [TRT]: onnx2trt_utils.cpp:377: ONNX models are generated with INT64 weights, but TensorRT does not natively support INT64, trying to cast down to INT32.
It looks like an error occurred while converting the model.
Make sure the model is loaded and converted on the same device.
Thanks for your reply.
Regarding "Make sure the model is loaded and converted on the same device": do you mean the gesture.etlt_b8_gpu0_int8.engine file referenced in gesture_sgie_config.yml?
Is that correct?
Do I need to generate the engine file with tao-converter on the Ubuntu machine beforehand?
No need.
The *.engine file will be generated automatically by DeepStream.
I mean: don't copy the engine file from another machine.
You can delete the gesture.etlt_b8_gpu0_int8.engine file and let it be regenerated.
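For example (the engine location below is a placeholder; use whatever path gesture_sgie_config.yml actually points to):
rm <path-from-gesture_sgie_config.yml>/gesture.etlt_b8_gpu0_int8.engine
./deepstream-gesture-app gesture_app_config.yml   # DeepStream rebuilds the engine on startup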
Yes.
I deleted the engine file and tried again, but I get the same error.
./deepstream-bodypose2d-app bodypose2d_app_config.yml
Request sink_0 pad from streammux
Request sink_1 pad from streammux
!! [WARNING] Unknown param found : type
!! [WARNING] Unknown param found : enc
!! [WARNING] Unknown param found : udpport
!! [WARNING] Unknown param found : rtspport
!! [WARNING] Unknown param found : filename
total 1 item
group model-config found 0
Now playing: NVIDIA_VISIBLE_DEVICES=all
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /tmp/deepstream_tao_apps/configs/bodypose2d_tao/../../models/bodypose2d/model.etlt_b32_gpu0_fp16.engine open error
0:00:01.648338079 1336 0x55cfd0837d00 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/tmp/deepstream_tao_apps/configs/bodypose2d_tao/../../models/bodypose2d/model.etlt_b32_gpu0_fp16.engine failed
0:00:01.674651140 1336 0x55cfd0837d00 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/tmp/deepstream_tao_apps/configs/bodypose2d_tao/../../models/bodypose2d/model.etlt_b32_gpu0_fp16.engine failed, try rebuild
0:00:01.674669380 1336 0x55cfd0837d00 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
WARNING: [TRT]: onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
^C
Could insufficient GPU specifications be the cause?
nvidia-smi
Thu Jul 27 22:18:48 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.125.06 Driver Version: 525.125.06 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:0F:00.0 On | N/A |
| 38% 57C P2 55W / 125W | 2416MiB / 6144MiB | 100% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1127 G /usr/lib/xorg/Xorg 35MiB |
| 0 N/A N/A 2251 G /usr/lib/xorg/Xorg 212MiB |
| 0 N/A N/A 2380 G /usr/bin/gnome-shell 27MiB |
| 0 N/A N/A 3828 G /usr/lib/firefox/firefox 146MiB |
| 0 N/A N/A 8802 G ...RendererForSitePerProcess 2MiB |
| 0 N/A N/A 12816 G ...RendererForSitePerProcess 31MiB |
| 0 N/A N/A 42520 C ./deepstream-bodypose2d-app 1946MiB |
+-----------------------------------------------------------------------------+
The heartrate app seems to partly work, but then aborts:
./deepstream-heartrate-app heartrate_app_config.yml
Now playing: (null)
Inside Custom Lib : Setting Prop Key=config-file Value=../../../../configs/heartrate_tao/sample_heartrate_model_config.yml
0:00:00.156364124 43751 0x55ad72cacc30 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:01.760152060 43751 0x55ad72cacc30 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/home/user/workspace/nvidia/deepstream_tao_apps/models/faciallandmark/facenet.etlt_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x416x736
1 OUTPUT kFLOAT output_bbox/BiasAdd 4x26x46
2 OUTPUT kFLOAT output_cov/Sigmoid 1x26x46
0:00:01.787152385 43751 0x55ad72cacc30 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /home/user/workspace/nvidia/deepstream_tao_apps/models/faciallandmark/facenet.etlt_b1_gpu0_int8.engine
0:00:01.788206733 43751 0x55ad72cacc30 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:/home/user/workspace/nvidia/deepstream_tao_apps/configs/facial_tao/config_infer_primary_facenet.yml sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
Decodebin child added: source
Decodebin child added: decodebin1
Running…
Decodebin child added: qtdemux0
Decodebin child added: qtdemux1
Decodebin child added: multiqueue0
Decodebin child added: multiqueue1
Decodebin child added: h264parse0
Decodebin child added: h264parse1
Decodebin child added: capsfilter0
Decodebin child added: capsfilter1
Decodebin child added: aacparse0
Decodebin child added: aacparse1
Decodebin child added: avdec_aac0
Decodebin child added: avdec_aac1
Decodebin child added: nvv4l2decoder0
Decodebin child added: nvv4l2decoder1
In cb_newpad
###Decodebin pick nvidia decoder plugin.
In cb_newpad
In cb_newpad
###Decodebin pick nvidia decoder plugin.
In cb_newpad
HeartRate model config file: heartrate.engine
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
WARNING: [TRT]: onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: TensorRT encountered issues when converting weights between types and that could affect accuracy.
WARNING: [TRT]: If this is not the desired behavior, please modify the weights or retrain with regularization to adjust the magnitude of the weights.
WARNING: [TRT]: Check verbose logs for the list of affected weights.
WARNING: [TRT]: - 10 weights are affected by this issue: Detected subnormal FP16 values.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 3
0 INPUT kFLOAT motion_input:0 3x72x72 min: 1x3x72x72 opt: 16x3x72x72 Max: 16x3x72x72
1 INPUT kFLOAT appearance_input:0 3x72x72 min: 1x3x72x72 opt: 16x3x72x72 Max: 16x3x72x72
2 OUTPUT kFLOAT lambda_1/Squeeze:0 0 min: 0 opt: 0 Max: 0
deepstream-heartrate-app: nvdsinfer_context_impl.cpp:1421: NvDsInferStatus nvdsinfer::NvDsInferContextImpl::allocateBuffers(): Assertion `bindingDims.numElements > 0' failed.
Aborted (core dumped)
Please try:
deepstream-app --version-all
It may be caused by a mismatch between the driver and the CUDA version.
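A quick way to see both sides on the host (just a check; it changes nothing):
nvidia-smi | head -n 4          # the CUDA Version shown here is the driver-side version
nvcc --version | grep release   # the toolkit version used for building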
Is this normal?
Or is it a mismatch?
deepstream-app --version-all
deepstream-app version 6.2.0
DeepStreamSDK 6.2.0
CUDA Driver Version: 12.0
CUDA Runtime Version: 11.8
TensorRT Version: 8.5
cuDNN Version: 8.7
libNVWarp360 Version: 2.0.1d3
Does that mean I have to match the CUDA driver version with the CUDA runtime version?
If so, how do I change the CUDA driver version?
CUDA Driver Version: 12.0 → 11.8 // do I need to change this to 11.8?
CUDA Runtime Version: 11.8
It should be a bug in the heartrate app.
Here is a patch; you can try it.
out.patch (2.0 KB)
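To apply it, something along these lines from the repository root should work (the heartrate app directory is assumed by analogy with the gesture app path above):
cd deepstream_tao_apps
git apply out.patch
cd apps/tao_others/deepstream-heartrate-app && make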
Thanks.