TAO Deployment to DeepStream for YOLOv4-Tiny

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson Nano
• DeepStream Version: 6.0.1
• JetPack Version: 4.6.1
• TensorRT Version: 8.2.1-1+cuda10.2
• Issue Type: Issue with the DeepStream configuration for a YOLOv4-Tiny TAO model

I recently trained a YOLOv4-Tiny model using the TAO Toolkit, and I successfully exported the trained model using the following command:

!rm -rf $LOCAL_EXPERIMENT_DIR/export
!mkdir -p $LOCAL_EXPERIMENT_DIR/export

!tao yolo_v4_tiny export -m /workspace/tao-experiments/yolo_v4_tiny/experiment_dir_unpruned/weights/yolov4_cspdarknet_tiny_epoch_005.tlt \
    -o /workspace/tao-experiments/yolo_v4_tiny/export/yolov4_cspdarknet_tiny_epoch_005.onnx \
    -e $SPECS_DIR/yolo_v4_tiny_train_kitti.txt \
    --target_opset 12 \
    --gen_ds_config \
    -k nvidia_tlt

The export process completed successfully, and I received the following output:
2023-08-20 15:27:36,524 [INFO] root: Registry: ['nvcr.io']
2023-08-20 15:27:36,732 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit:4.0.0-tf1.15.5
2023-08-20 15:27:37,224 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/alinailgr/.tao_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
Using TensorFlow backend.
2023-08-20 12:28:00.923160: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
/usr/local/lib/python3.6/dist-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.5) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)
2023-08-20 12:28:40,089 [INFO] iva.common.export.keras_exporter: Using input nodes: ['Input']
2023-08-20 12:28:40,089 [INFO] iva.common.export.keras_exporter: Using output nodes: ['BatchedNMS']
2023-08-20 12:28:58,210 [INFO] keras2onnx: The ONNX operator number change on the optimization: 320 → 158
Telemetry data couldn't be sent, but the command ran successfully.
[WARNING]: __init__() missing 4 required positional arguments: 'code', 'msg', 'hdrs', and 'fp'
Execution status: PASS
2023-08-20 15:34:13,516 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.

The exported files can be accessed through this Google Drive link: deneme-export.

However, I am encountering issues when creating a configuration file for DeepStream. Below is the configuration file I’ve created:

[property]
gpu-id=0
labelfile-path=/home/tosso/Documents/tosso_koctas/utils/models/labels/gap_label.txt
onnx-file=/home/tosso/Documents/tosso_koctas/utils/models/weights/yolov4_cspdarknet_tiny_epoch_005.onnx
#maintain-aspect-ratio=1
batch-size=1
network-mode=0
interval=0
gie-unique-id=1
#no cluster
cluster-mode=3

net-scale-factor=1.0
offsets=103.939;116.779;123.68
infer-dims=3;384;1248
tlt-model-key=nvidia_tlt
network-type=0
num-detected-classes=7
model-color-format=1
maintain-aspect-ratio=0
output-tensor-meta=0

parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/home/tosso/deepstream_tlt_apps/post_processor/libnvds_infercustomparser_tlt.so

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
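For reference, with net-scale-factor=1.0 and those offsets, nvinfer's preprocessing reduces to Caffe-style per-channel mean subtraction: y = net-scale-factor * (x - mean). A minimal sketch of that formula (illustrative Python only; `preprocess_pixel` is a hypothetical helper, not a DeepStream API):

```python
# Sketch of nvinfer's per-pixel preprocessing:
#   y = net-scale-factor * (x - mean)
# using the values from the config above. The channel order of `offsets`
# must match model-color-format (here 1, i.e. BGR).
# `preprocess_pixel` is a hypothetical helper for illustration only.

NET_SCALE_FACTOR = 1.0
OFFSETS = (103.939, 116.779, 123.68)  # per-channel means (BGR)

def preprocess_pixel(bgr, scale=NET_SCALE_FACTOR, offsets=OFFSETS):
    """Apply nvinfer-style mean subtraction and scaling to one pixel."""
    return tuple(scale * (channel - mean) for channel, mean in zip(bgr, offsets))

# A pixel equal to the per-channel means maps to exactly zero:
print(preprocess_pixel((103.939, 116.779, 123.68)))
```

With net-scale-factor=1.0 this only centers the input; if the model was trained on inputs scaled to another range, net-scale-factor must be adjusted to match the training preprocessing.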

When running the code, I encountered the following error:

tosso@tosso:~/Documents/tosso_koctas$ python3 video.py
Creating Pipeline

Adding elements to Pipeline

Creating source bin
Linking elements in the Pipeline

Starting pipeline

Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
~~ CLOG[/dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvmultiobjecttracker/include/modules/NvMultiObjectTracker/NvTrackerParams.hpp, getConfigRoot() @line 54]: [NvTrackerParams::getConfigRoot()] !!![WARNING] Invalid low-level config file caused an exception, but will go ahead with the default config values
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
~~ CLOG[/dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvmultiobjecttracker/include/modules/NvMultiObjectTracker/NvTrackerParams.hpp, getConfigRoot() @line 54]: [NvTrackerParams::getConfigRoot()] !!![WARNING] Invalid low-level config file caused an exception, but will go ahead with the default config values
[NvMultiObjectTracker] Initialized
0:00:00.354971223 13540 0x1f970a0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: [network.cpp::getInput::1755] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/network.cpp::getInput::1755, condition: index < getNbInputs()
)
Segmentation fault (core dumped)

I appreciate any assistance you can provide in resolving this issue. Thank you in advance for your support!

I can't open the model yolov4_cspdarknet_tiny_epoch_005.onnx with Netron. Did you test your model with any third-party tools? Can it give correct inference results?
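A quick way to sanity-check whether the exported file is really an ONNX protobuf (rather than, say, an encrypted .etlt that merely carries a .onnx extension) is to look at its first bytes. This is only a heuristic sketch; `looks_like_onnx` and `check_model_file` are hypothetical helpers, not TAO or DeepStream APIs. It relies on the fact that a serialized ONNX ModelProto normally begins with the protobuf tag byte 0x08 for field 1 (ir_version).

```python
# Heuristic check: a serialized ONNX ModelProto normally starts with the
# protobuf tag byte 0x08 (field 1, ir_version). An encrypted .etlt or a
# serialized TensorRT engine will generally not. These helpers are
# hypothetical, for illustration only.

def looks_like_onnx(header: bytes) -> bool:
    """Return True if the first bytes plausibly begin an ONNX protobuf."""
    return len(header) > 0 and header[0] == 0x08

def check_model_file(path: str) -> bool:
    """Read the first bytes of a model file and sniff its format."""
    with open(path, "rb") as f:
        return looks_like_onnx(f.read(16))

# Examples with in-memory headers instead of real files:
print(looks_like_onnx(b"\x08\x07\x12\x00"))  # plausible ONNX header -> True
print(looks_like_onnx(b"\x00\x00\x00\x00"))  # not ONNX-like -> False
```

If the file fails this check (and Netron cannot open it), that is consistent with the TAO 4.0 exporter having produced an encrypted .etlt despite the .onnx filename.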

I’ve completed model inference using the TAO Toolkit. I utilized the following command for the inference process:
!tao yolo_v4_tiny inference -i /workspace/tao-experiments/data/test_samples \
    -e $SPECS_DIR/yolo_v4_tiny_train_kitti.txt \
    -m /workspace/tao-experiments/yolo_v4_tiny/experiment_dir_unpruned/weights/yolov4_cspdarknet_tiny_epoch_005.tlt \
    -o /workspace/tao-experiments/yolo_v4_tiny/out \
    -k nvidia_tlt

The inference process generated the following output:

2023-08-24 13:31:28,023 [INFO] root: Registry: ['nvcr.io']
2023-08-24 13:31:28,100 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit:4.0.0-tf1.15.5
2023-08-24 13:31:28,121 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/alinailgr/.tao_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
Using TensorFlow backend.
2023-08-24 10:31:30.389043: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
/usr/local/lib/python3.6/dist-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.5) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)
2023-08-24 10:31:38,676 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:95: The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead.
2023-08-24 10:31:38,676 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:98: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead.
2023-08-24 10:31:38,679 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:102: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
2023-08-24 10:31:40,133 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
2023-08-24 10:31:40,162 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
2023-08-24 10:31:40,191 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1834: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.
2023-08-24 10:31:40,366 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/third_party/keras/tensorflow_backend.py:183: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.
2023-08-24 10:31:40,703 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:2018: The name tf.image.resize_nearest_neighbor is deprecated. Please use tf.compat.v1.image.resize_nearest_neighbor instead.
2023-08-24 10:31:41,021 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.
2023-08-24 10:31:41,021 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:181: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.
2023-08-24 10:31:41,022 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:186: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
2023-08-24 10:31:41,185 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:190: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
2023-08-24 10:31:41,186 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:199: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.
2023-08-24 10:31:41,458 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:206: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.
2023-08-24 10:31:41,875 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
2023-08-24 10:31:43,792 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:986: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.
2023-08-24 10:31:44,279 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:973: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.
Using TLT model for inference, setting batch size to the one in eval_config: 4
100%|███████████████████████████████████████████| 13/13 [00:11<00:00, 1.15it/s]
Telemetry data couldn't be sent, but the command ran successfully.
[WARNING]: __init__() missing 4 required positional arguments: 'code', 'msg', 'hdrs', and 'fp'
Execution status: PASS
2023-08-24 13:32:01,645 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.

You can access the results of the inference process via this Google Drive link: 7020fca7-fa24-4849-a67e-b945ec8298e6.jpeg.

When I use the ONNX model:

!tao yolo_v4_tiny inference -i /workspace/tao-experiments/data/test_samples \
    -e $SPECS_DIR/yolo_v4_tiny_train_kitti.txt \
    -m /workspace/tao-experiments/yolo_v4_tiny/export/yolov4_cspdarknet_tiny_epoch_005.onnx \
    -o /workspace/tao-experiments/yolo_v4_tiny/out \
    -k nvidia_tlt

The output:

2023-08-24 13:48:45,863 [INFO] root: Registry: ['nvcr.io']
2023-08-24 13:48:45,931 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit:4.0.0-tf1.15.5
2023-08-24 13:48:45,957 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/alinailgr/.tao_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
Using TensorFlow backend.
2023-08-24 10:48:48.601941: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
/usr/local/lib/python3.6/dist-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.5) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)
2023-08-24 10:48:57,514 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:95: The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead.
2023-08-24 10:48:57,514 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:98: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead.
2023-08-24 10:48:57,517 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:102: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
[08/24/2023-10:48:58] [TRT] [E] 1: [stdArchiveReader.cpp::StdArchiveReader::32] Error Code 1: Serialization (Serialization assertion magicTagRead == kMAGIC_TAG failed.Magic tag does not match)
[08/24/2023-10:48:58] [TRT] [E] 4: [runtime.cpp::deserializeCudaEngine::66] Error Code 4: Internal Error (Engine deserialization failed.)
Traceback (most recent call last):
  File "</usr/local/lib/python3.6/dist-packages/iva/yolo_v4/scripts/inference.py>", line 3, in
  File "", line 224, in
  File "", line 707, in return_func
  File "", line 695, in return_func
  File "", line 220, in main
  File "", line 203, in inference
  File "", line 43, in __init__
  File "", line 32, in __init__
AttributeError: 'NoneType' object has no attribute 'max_batch_size'
Exception ignored in: <bound method TRTInferencer.__del__ of <iva.common.inferencer.trt_inferencer.TRTInferencer object at 0x7fcd4809a320>>
Traceback (most recent call last):
  File "", line 139, in __del__
  File "", line 96, in clear_trt_session
AttributeError: 'TRTInferencer' object has no attribute 'context'
Telemetry data couldn't be sent, but the command ran successfully.
[WARNING]: __init__() missing 4 required positional arguments: 'code', 'msg', 'hdrs', and 'fp'
Execution status: FAIL
2023-08-24 13:49:00,109 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.


@alinail56
I will move this topic into the TAO forum.

Since you are running TAO 4.0.0, the exported file is actually an .etlt file in that version.

Could you modify the command line from xxx.onnx to xxx.etlt?

Or you can run the same command line via TAO 5.0 instead, because TAO 5.0 exports a real .onnx file:
$ docker run --runtime=nvidia -it --rm nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash

Then run the commands without "tao" at the beginning, e.g.:
yolo_v4_tiny export xxx


I’ve been working with the TAO Toolkit 4.0.0 and managed to export an .etlt model. I successfully created the trt.engine file using !tao-deploy yolo_v4_tiny gen_trt_engine.

However, I’ve encountered challenges while attempting to deploy the generated .etlt model with DeepStream. Here’s the configuration file I’m using:
[property]
gpu-id=0
labelfile-path=/home/tosso/Documents/tosso_koctas/utils/models/labels/gap_label.txt
model-engine-file=/home/tosso/Documents/tosso_koctas/utils/models/weights/yolov4_cspdarknet_tiny_epoch_005.etlt
#model-engine-file=/home/tosso/Downloads/trt.engine
#maintain-aspect-ratio=1
batch-size=1
#0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
interval=0
gie-unique-id=1
#is-classifier=0
#no cluster
cluster-mode=3

net-scale-factor=1.0
offsets=103.939;116.779;123.68
infer-dims=3;384;1248
tlt-model-key=nvidia_tlt
network-type=0
num-detected-classes=7
model-color-format=1
maintain-aspect-ratio=0
output-tensor-meta=0

parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/home/tosso/deepstream_tlt_apps/post_processor/libnvds_infercustomparser_tlt.so
#custom-lib-path=/home/tosso/Documents/tosso_koctas/utils/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

However, upon running the deployment script, I encountered the following output:
tosso@tosso:~/Documents/tosso_koctas$ python3 video.py
Creating Pipeline

Adding elements to Pipeline

Creating source bin
Linking elements in the Pipeline

Starting pipeline

Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
~~ CLOG[/dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvmultiobjecttracker/include/modules/NvMultiObjectTracker/NvTrackerParams.hpp, getConfigRoot() @line 54]: [NvTrackerParams::getConfigRoot()] !!![WARNING] Invalid low-level config file caused an exception, but will go ahead with the default config values
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
~~ CLOG[/dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvmultiobjecttracker/include/modules/NvMultiObjectTracker/NvTrackerParams.hpp, getConfigRoot() @line 54]: [NvTrackerParams::getConfigRoot()] !!![WARNING] Invalid low-level config file caused an exception, but will go ahead with the default config values
[NvMultiObjectTracker] Initialized
ERROR: [TRT]: 1: [stdArchiveReader.cpp::StdArchiveReader::30] Error Code 1: Serialization (Serialization assertion magicTagRead == magicTag failed.Magic tag does not match)
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: Deserialize engine failed from file: /home/tosso/Documents/tosso_koctas/utils/models/weights/yolov4_cspdarknet_tiny_epoch_005.etlt
0:00:03.036166521 10841 0x1a410ea0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/home/tosso/Documents/tosso_koctas/utils/models/weights/yolov4_cspdarknet_tiny_epoch_005.etlt failed
0:00:03.037315819 10841 0x1a410ea0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/home/tosso/Documents/tosso_koctas/utils/models/weights/yolov4_cspdarknet_tiny_epoch_005.etlt failed, try rebuild
0:00:03.037363007 10841 0x1a410ea0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:03.516470872 10841 0x1a410ea0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:03.517640378 10841 0x1a410ea0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:03.517689806 10841 0x1a410ea0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:03.518078669 10841 0x1a410ea0 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:03.518115024 10841 0x1a410ea0 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Config file path: utils/pgie_yolov4_tiny_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
[NvMultiObjectTracker] De-initialized
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: utils/pgie_yolov4_tiny_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app

Glad to know the .etlt file is generated.

Your setting is not correct. For an .etlt file, you need to use tlt-encoded-model=xxx.etlt instead of model-engine-file. Please deploy the .etlt file that way.
Refer to https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/configs/nvinfer/yolov4-tiny_tao/pgie_yolov4_tiny_tao_config.txt#L31-L32
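Concretely, following that reference config, the relevant lines would look roughly like this (a sketch using the poster's paths, not the exact upstream file):

```ini
# Point nvinfer at the encrypted TAO model and its key, instead of model-engine-file.
tlt-encoded-model=/home/tosso/Documents/tosso_koctas/utils/models/weights/yolov4_cspdarknet_tiny_epoch_005.etlt
tlt-model-key=nvidia_tlt
```

With these set, nvinfer builds the TensorRT engine from the .etlt on first run; a model-engine-file entry can additionally point at the generated engine to skip rebuilds on later runs.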


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.