Engine file and calib.table not saved in DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.5.0
• TensorRT Version 8.0.1.6
• Issue Type (questions, new requirements, bugs)
I can't find the .engine file or the calib.table anywhere after running DeepStream.
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
Follow the guide https://docs.ultralytics.com/tutorials/nvidia-jetson/#int8-calibration with the INT8 calibration method, that is, compile the custom library with the OPENCV=1 flag (a build sketch follows below).
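
For reference, the build step from that guide looks roughly like this (a minimal sketch; CUDA_VER=10.2 assumes the JetPack 4.x CUDA toolchain, and the repository path is a placeholder):

cd /home/watchbot/Repo/DeepStream-Yolo
# OPENCV=1 enables the OpenCV-based INT8 calibrator in the custom library
CUDA_VER=10.2 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo
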
I modified the configuration files as follows:

config_infer_primary_yoloV5.txt

[property]
enable-dla=1
use-dla-core=0
net-scale-factor=0.0039215697906911373
labelfile-path=labels_coco.txt
custom-network-config=./models/basic/yolov5n.cfg
model-file=./models/basic/yolov5n.wts
model-engine-file=./models/basic/yolov5n_b4_dla0_int8.engine
# model-engine-file=yolov5n6_ori_b1_dla0_int8.engine
# int8-calib-file=/home/watchbot/Repo/DeepStream-Yolo/calib.table
# int8-calib-file=calib.table
int8-calib-file=./models/basic/calib.table
batch-size=4
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
network-type=0
num-detected-classes=1
interval=0
gie-unique-id=1
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=/home/watchbot/Repo/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
# nms-iou-threshold=0.45 original value
pre-cluster-threshold=0.25
# pre-cluster-threshold=0.25 original value
topk=300
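
Note that, according to the DeepStream-Yolo documentation, the INT8 calibration also depends on two environment variables read when the engine is first built (the values below are placeholders; adjust them to your image list and batch size):

export INT8_CALIB_IMG_PATH=/home/watchbot/Repo/DeepStream-Yolo/calibration.txt
export INT8_CALIB_BATCH_SIZE=1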

deepstream_app_config.txt


[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=5
camera-csi-sensor-id=0
camera-width=640
camera-height=360
camera-fps-n=10
camera-fps-d=1
camera-v4l2-dev-node=0
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=5
camera-csi-sensor-id=1
camera-width=640
camera-height=360
camera-fps-n=10
camera-fps-d=1
camera-v4l2-dev-node=0
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[source2]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=5
camera-csi-sensor-id=2
camera-width=640
camera-height=360
camera-fps-n=10
camera-fps-d=1
camera-v4l2-dev-node=0
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[source3]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=5
camera-csi-sensor-id=3
camera-width=640
camera-height=360
camera-fps-n=10
camera-fps-d=1
camera-v4l2-dev-node=0
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0


[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=5
sync=1
source-id=0
gpu-id=0
qos=0
nvbuf-memory-type=0
overlay-id=1

######[sink1]
######enable=0
######type=3
#######1=mp4 2=mkv
######container=1
#######1=h264 2=h265
######codec=1
######sync=0
#######iframeinterval=10
######bitrate=2000000
#######H264 Profile - 0=Baseline 2=Main 4=High
#######H265 Profile - 0=Main 1=Main10
######profile=0
######output-file=out.mp4
######source-id=0

######[sink2]
######enable=0
#######Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
######type=4
#######1=h264 2=h265
######codec=1
######sync=0
######bitrate=4000000
#######H264 Profile - 0=Baseline 2=Main 4=High
#######H265 Profile - 0=Main 1=Main10
######profile=0
####### set below properties in case of RTSPStreaming
######rtsp-port=8554
######udp-port=5400

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
buffer-pool-size=4
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1280
height=720
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
# model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
model-engine-file=models/basic/yolov5n_b4_dla0_int8.engine
batch-size=4
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV5.txt

[tests]
file-loop=0

• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
On my Jetson, I run the following command from
/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app:

deepstream-app -c deepstream_app_config.txt

In that directory I created a "models" folder with subfolders, depending on the trials I do. In this case, I am using pretrained YOLOv5n weights, whose yolov5n.wts and yolov5n.cfg files are located at
/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/models/basic

Every time I run this, it says:
File does not exist: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/models/basic/calib.table
and begins loading images for calibration (which takes a while).
Once it finishes, it finally says:

Building complete

ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/model_b4_dla0_int8.engine opened error
0:07:10.176037264 13062      0xc4a7920 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1942> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/model_b4_dla0_int8.engine
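
(For reference, the serialize "opened error" suggests the output path could not be opened for writing; /opt/nvidia/... is normally root-owned. A quick permission check, assuming that is the cause:

ls -ld /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app
touch /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/.write_test && echo writable)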

And although the app otherwise runs successfully, after quitting I cannot find the engine file I specified in my config file anywhere, and the same goes for the calibration file.
I need both files; what can I do?
Thank you for your time.

Which Jetson module are you using: Nano, Xavier, or NX?

If you have a pre-generated serialized engine file for the model, you can specify it with "model-engine-file", either in the deepstream_app config file or in the corresponding pgie/sgie config file.
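
For example, using the paths from your own configs:

# in deepstream_app_config.txt, under [primary-gie]
model-engine-file=models/basic/yolov5n_b4_dla0_int8.engine

# or in config_infer_primary_yoloV5.txt
model-engine-file=./models/basic/yolov5n_b4_dla0_int8.engine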

If you don't have a pre-generated file, it will be generated in the same directory as your model file. But the model-file you set seems to be in a PyTorch format (.wts), which is not supported. You can convert it to ONNX format (specified by onnx-file) or to a TensorRT engine (then use model-engine-file) in order to use it in DeepStream.
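
If you go the TensorRT engine route, a minimal trtexec sketch (assuming you already have an ONNX export of the model; the file names are placeholders):

/usr/src/tensorrt/bin/trtexec --onnx=yolov5n.onnx --saveEngine=yolov5n_b4_dla0_int8.engine --int8 --useDLACore=0 --allowGPUFallback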

Can you search for files with the ".engine" suffix in your DeepStream directory?
In your case there is no pre-generated engine file and no valid model defined by model-file, yet your program still runs successfully. I am not sure whether another engine file is being used by your program; please double-check.
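
For example, something like:

find /opt/nvidia/deepstream -name "*.engine" 2>/dev/null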

It is an NX.
I don't have any engine file; that is exactly what I am looking for. The same goes for the calib.table, so that I don't have to create it every time I execute deepstream-app -c deepstream_app_config.txt.
I don't fully understand what you mean by "convert it to a TensorRT engine"; I was expecting DeepStream to do that directly when running deepstream-app -c deepstream_app_config.txt.
I will try to export it to ONNX and let you know.
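
(A sketch of that export with the standard YOLOv5 export script, assuming the ultralytics/yolov5 repository and the matching .pt weights:

python3 export.py --weights yolov5n.pt --include onnx)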

-------UPDATE-------
I added the onnx-file key to config_infer_primary_yoloV5.txt, and I get:

0:00:01.212635810 18436      0x1ccc720 WARN                     omx gstomx.c:2826:plugin_init: Failed to load configuration file: Valid key file could not be found in search dirs (searched in: /home/watchbot/.config:/etc/xdg/xdg-unity:/etc/xdg as per GST_OMX_CONFIG_DIR environment variable, the xdg user config directory (or XDG_CONFIG_HOME) and the system config directory (or XDG_CONFIG_DIRS)
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/models/basic/yolov5n_b4_dla0_int8.engine open error
0:00:03.011958049 18436      0x1ccc720 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/models/basic/yolov5n_b4_dla0_int8.engine failed
0:00:03.012069058 18436      0x1ccc720 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/models/basic/yolov5n_b4_dla0_int8.engine failed, try rebuild
0:00:03.012111843 18436      0x1ccc720 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
YOLO config file or weights file is not specified

ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:03.012937387 18436      0x1ccc720 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:03.012992556 18436      0x1ccc720 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:03.013025260 18436      0x1ccc720 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:03.013085133 18436      0x1ccc720 WARN                 nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:03.013120909 18436      0x1ccc720 WARN                 nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
0:00:03.013308847 18436      0x1ccc720 WARN                GST_PADS gstpad.c:1149:gst_pad_set_active:<primary_gie:sink> Failed to activate pad
** ERROR: <main:658>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed

The compiled library provided by https://docs.ultralytics.com/tutorials/nvidia-jetson/#deepstream-configuration-for-yolov5 expects only the .wts and .cfg files, not an ONNX file.

You are right: .wts can be set as model-file directly in config_infer_primary_yoloV5.txt (I had thought PyTorch weights could not be used this way; sorry for my misinformation above). I checked with the DeepStream docker image (nvcr.io/nvidia/deepstream:6.1.1-devel) and on a Jetson (DeepStream 6.1.1), and both work correctly; the engine file is created in the DeepStream-Yolo directory:

root@Colorful:/workspace/DeepStream-Yolo# ll *.engine
-rw-r--r-- 1 root root 33548885 Dec 23 02:49 model_b1_gpu0_fp32.engine

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Sample YOLO model configurations are in /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo.
