Getting an error while trying to use deepstream_launchpad.ipynb code

Hi,

I am facing an issue while trying to read a TensorRT engine file in the deepstream_launchpad.ipynb code. I am trying to use my .engine file within the dslaunchpad_pgie_config.txt file, but it does not seem to recognise the file. The error is below:

0:00:00.758867392 82130     0x1fc662d0 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1174> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
ERROR: [TRT]: 1: [pluginV2Runner.cpp::load::299] Error Code 1: Serialization (Serialization assertion creator failed.Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::65] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/side_textarea.engine
0:00:03.250602720 82130     0x1fc662d0 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/side_textarea.engine failed
0:00:03.439277280 82130     0x1fc662d0 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/side_textarea.engine failed, try rebuild
0:00:03.439323328 82130     0x1fc662d0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:05.135518336 82130     0x1fc662d0 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2022> [UID = 1]: build engine file failed
0:00:05.305624128 82130     0x1fc662d0 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2108> [UID = 1]: build backend context failed
0:00:05.305671168 82130     0x1fc662d0 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1282> [UID = 1]: generate backend failed, check config file settings
0:00:05.305735712 82130     0x1fc662d0 WARN                 nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:05.305771296 82130     0x1fc662d0 WARN                 nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary-inference> error: Config file path: ./configs/dslaunchpad_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: ./configs/dslaunchpad_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

This is the config file (dslaunchpad_pgie_config.txt):

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# Following properties are mandatory when engine files are not specified:
#   int8-calib-file(Only in INT8)
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
#   ONNX: onnx-file
#
# Mandatory properties for detectors:
#   num-detected-classes
#
# Optional properties for detectors:
#   cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
#   custom-lib-path,
#   parse-bbox-func-name
#
# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
#   operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
#   input-object-min-width, input-object-min-height, input-object-max-width,
#   input-object-max-height
#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
#proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
#model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
model-engine-file=../../../../samples/models/Primary_Detector/side_textarea.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
#int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
network-mode=1
num-detected-classes=1
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1

Board: Jetson AGX Xavier
CUDA: 11.4
DeepStream: 6.3

Additionally, I followed this link for the config params: Gst-nvinfer — DeepStream 6.4 documentation
Any help would be appreciated, thanks.

Regards,
Neville

I was able to run the deepstream_test_1.ipynb code successfully by changing the config to use my own model.

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
#proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=../../../../samples/models/Primary_Detector/side_textarea.engine
#labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
#int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
network-mode=1
num-detected-classes=1
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#scaling-filter=0
#scaling-compute-hw=0
cluster-mode=2

[class-attrs-all]
pre-cluster-threshold=0.2
topk=20
nms-iou-threshold=0.8

But I do not think deepstream_test_1.ipynb supports an RTSP live source stream.

To add on to this issue, I have decided to use the deepstream_test1_rtsp_in_rtsp_out.py code to take an RTSP input and produce an RTSP output, but I still face the same error as before. This is the config file:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
#proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
#model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
model-engine-file=../../../../samples/models/Primary_Detector/side_textarea.engine
#labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
#int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
network-mode=1
num-detected-classes=1
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1

Again, it works perfectly fine when all the parameters are provided, but when I want to use only a TensorRT engine it does not seem to work. I just want to know which model the program is actually using in the backend, because the documentation clearly states what to keep and what to remove when using .engine models.

How did you get this model? For DeepStream/TensorRT, a *.engine file is tied to the hardware, batch size, etc., and needs to be generated at runtime.
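
For example, an engine is normally generated on the target device itself, either by DeepStream on first run or with the trtexec tool that ships with TensorRT. A minimal sketch with placeholder paths (not the exact command from this thread):

/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model_fp16.engine --fp16

The resulting engine is only valid for the TensorRT version and GPU it was built on.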

Thanks for the reply. I am using a YOLOv5 model from Ultralytics for object detection. I used their conversion script to go from a .pt model to a .engine model, and I did the conversion on the Jetson AGX Xavier board itself. After the conversion, I just pointed to the model in the config file.

YoloV5: GitHub - ultralytics/yolov5: YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
The conversion script: yolov5/export.py at master · ultralytics/yolov5 · GitHub

Thank you

Regards,
Neville

When running the model in DeepStream, the model parameters must be written into the nvinfer configuration file.

Here is sample code for YOLOv5.

Thanks for the reply. So I am guessing I need to run the export.py script, or should I just take a look at the config folder? There are many config files; does which one to use depend on the use case?

Do I just have to use my own .engine model and take the params from the repo?

Neville

The generation of the .engine model is related to the configuration file, so it is better to let DeepStream generate the .engine model to avoid problems.

  1. Use export.py to export the ONNX model (see the command sketch after this list)
  2. Then refer to this configuration file
  3. There are steps in the README to run deepstream-app
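
For step 1, the invocation is usually along these lines (a sketch of the standard Ultralytics export.py options; yolov5s.pt is a placeholder for your own weights, and the repository README may specify different flags):

python3 export.py --weights yolov5s.pt --include onnx --opset 12 --simplify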

Okay, will try that out and let you know. Also, once the whole process is completed, can I go back to the deepstream_test1_rtsp_in_rtsp_out.py code and use that with the ONNX model, since it does provide the necessary optimizations for RTSP input and output?

It should be possible, but before that, please make sure the sample works properly.

Alright, thanks for your support, will try it for the deepstream samples first and will inform on this forum post.

Okay, so I just followed the repository steps to convert the model to ONNX, but there were some issues when trying to use my own model for the conversion. However, converting the standalone yolov5s.pt model works successfully. Just to add on for anyone who faces the same issue, you need to follow this issue: AttributeError:Can't get attribute 'DetectionModel' on <module 'models.yolo' from '/home/agx/yolov5_d455/realsense-D455-YOLOV5-master/models/yolo.py'> · Issue #11688 · ultralytics/yolov5 · GitHub

You have to add DetectionModel = Model under the Model class in the yolov5/models/yolo.py file; that resolves the error shown below (a one-line sketch of the edit follows the traceback):

/usr/local/lib/python3.8/dist-packages/numpy/core/getlimits.py:499: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/usr/local/lib/python3.8/dist-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
  return self._float_to_str(self.smallest_subnormal)
/usr/local/lib/python3.8/dist-packages/numpy/core/getlimits.py:499: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/usr/local/lib/python3.8/dist-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  return self._float_to_str(self.smallest_subnormal)
export: data=data/coco128.yaml, weights=['side_new_model.pt'], imgsz=[640, 640], batch_size=1, device=cpu, half=False, inplace=False, train=False, keras=False, optimize=False, int8=False, dynamic=True, simplify=True, opset=12, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['onnx']
YOLOv5 🚀 v6.1-243-gafec2f3 Python-3.8.10 torch-2.0.0+nv23.05 CPU

Traceback (most recent call last):
  File "export.py", line 663, in <module>
    main(opt)
  File "export.py", line 658, in main
    run(**vars(opt))
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "export.py", line 542, in run
    model = attempt_load(weights, device=device, inplace=True, fuse=True)  # load FP32 model
  File "/media/nvidia/ipkknd/yolov5/models/experimental.py", line 80, in attempt_load
    ckpt = torch.load(attempt_download(w), map_location=device)
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 809, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 1172, in _load
    result = unpickler.load()
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 1165, in find_class
    return super().find_class(mod_name, name)
AttributeError: Can't get attribute 'DetectionModel' on <module 'models.yolo' from '/media/nvidia/ipkknd/yolov5/models/yolo.py'>
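
For reference, the workaround from the linked issue is just a one-line alias at module level. A minimal sketch (Model is the class already defined in yolov5/models/yolo.py):

# in yolov5/models/yolo.py, after the Model class definition
DetectionModel = Model  # lets checkpoints pickled under the newer class name load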

Thanks for sharing.

I am facing an issue when trying to run the DeepStream sample: it seems the configuration file cannot be found. I am running the sample directly as-is with the default yolov5s.onnx model, having followed the complete steps from the yolov5_gpu_optimization repo.

This is the config_infer_primary_yoloV5.txt:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=./yolov5s.onnx
#model-engine-file=./yolov5s.onnx_b1_gpu1_fp16.engine
infer-dims=3;640;640
labelfile-path=labels.txt
batch-size=1
workspace-size=1024
network-mode=2
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=./yolov5_decode.so

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300

This is the deepstream_app_config.txt:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=0
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
type=3
uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=0
type=2
sync=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=0
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config/config_infer_primary_yoloV5.txt

[tests]
file-loop=1

It seems that the path is wrong. Modify

config-file=config/config_infer_primary_yoloV5.txt

to

config-file=config_infer_primary_yoloV5.txt

in deepstream_app_config.txt

This is what I got after modifying the config file path.

I have also done this step earlier:

nvcc -Xcompiler -fPIC -shared -o yolov5_decode.so ./yoloForward_nc.cu ./yoloPlugins.cpp ./nvdsparsebbox_Yolo.cpp -isystem /usr/include/aarch64-linux-gnu/ -L /usr/lib/aarch64-linux-gnu/ -I /opt/nvidia/deepstream/deepstream/sources/includes -lnvinfer 

Once I changed the path for the yolov5_decode.so file to an absolute path, another error showed up:

custom-lib-path=/media/nvidia/ipkknd/yolov5_gpu_optimization/deepstream-sample/yolov5_decode.so

It looks like there are some version issues.

1. Copy yolov5n.onnx & yolov5_decode.so to the deepstream-sample/config folder.
2. Modify the model engine file from gpu1 to gpu0 in config_infer_primary_yoloV5.txt:

- model-engine-file=./yolov5n.onnx_b1_gpu1_fp16.engine
+ model-engine-file=./yolov5n.onnx_b1_gpu0_fp16.engine

3. Modify deepstream_app_config_save_video.txt like below:

-config-file=config/config_infer_primary_yoloV5.txt
+config-file=config_infer_primary_yoloV5.txt

-file-loop=1
+file-loop=0

4. cd into yolov5_gpu_optimization/deepstream-sample/config and run:

deepstream-app -c deepstream_app_config_save_video.txt

out.mp4 is the output result.

Oh that's great, it seems to have generated the .engine file, and it definitely runs the sample code well. Is there any parameter for showing the display during inference? Also, since the .engine file has been generated, can I now utilize it in the original code that I had the issue with?

Add the following item in deepstream_app_config_save_video.txt:

+[sink1]
+enable=1
+type=2
+sync=1
+gpu-id=0
+nvbuf-memory-type=0
+
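
(type=2 here is the EGL on-screen sink in deepstream-app, so this gives you a display window during inference alongside the saved video.)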

It should be possible.
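
As a sketch of how the generated files could then be reused in the pgie config of deepstream_test1_rtsp_in_rtsp_out.py (assembled from the config_infer_primary_yoloV5.txt shown above; the /path/to/ prefixes are placeholders for wherever your files actually live):

[property]
onnx-file=/path/to/yolov5s.onnx
model-engine-file=/path/to/yolov5s.onnx_b1_gpu0_fp16.engine
labelfile-path=/path/to/labels.txt
infer-dims=3;640;640
batch-size=1
network-mode=2
num-detected-classes=80
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=/path/to/yolov5_decode.so

The parse-bbox-func-name and custom-lib-path entries are the key part: loading yolov5_decode.so registers the YOLO plugin creator, which is exactly what the original "IPluginCreator not found in Plugin Registry" error was missing.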