The conversion of yolov7.weights to an engine has failed

I have a project that uses the DeepStream framework, with YOLOv7 as the object detection network. My environment is DeepStream 6.0, TensorRT 8.4.3, CUDA 11.4, a cuDNN version compatible with CUDA 11.4, and driver version 470.182.03.

Every time we encounter a new scenario, we train a new set of YOLOv7 weights specifically for that scenario. We train these weights with Darknet and then convert them to an engine using open-source code from GitHub: marcoslucianops/DeepStream-Yolo (NVIDIA DeepStream SDK 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 implementation for YOLO models). On each new machine we compile the dynamic library libnvdsinfer_custom_impl_Yolo.so for this conversion and place it in the corresponding folder of our project. We use the config_infer_primary_yoloV7.txt file to specify the paths that guide engine generation. This workflow works fine and generates valid engines on GPUs such as the 1050, TITAN Xp, 2060, 3060, 3080, 3090, and 4080, enabling successful object detection.

However, we have run into problems deploying on a cloud server with a Tesla T4 GPU. On this platform we were able to generate an engine with FP32 precision, but we don't need that precision, and testing showed that object detection does not work with it. We therefore attempted to generate an engine with FP16 precision, but encountered an error. Analyzing the crash with GDB gave the following backtrace (the first frame is truncated):

etworkDefinition&, nvinfer1::IBuilderConfig&) ()
from /home/dgp/ITS_code/its-deepstream/remote_gnr/model/libnvdsinfer_custom_impl_Yolo.so
#23 0x00007fff822e0c7b in Yolo::createEngine(nvinfer1::IBuilder*, nvinfer1::IBuilderConfig*) ()
from /home/dgp/ITS_code/its-deepstream/remote_gnr/model/libnvdsinfer_custom_impl_Yolo.so
#24 0x00007fff822f4a56 in NvDsInferYoloCudaEngineGet ()
from /home/dgp/ITS_code/its-deepstream/remote_gnr/model/libnvdsinfer_custom_impl_Yolo.so
#25 0x00007fffce435562 in nvdsinfer::TrtModelBuilder::getCudaEngineFromCustomLib(bool (*)(nvinfer1::IBuilder*, _NvDsInferContextInitParams*, nvinfer1::DataType, nvinfer1::ICudaEngine*&), bool (*)(nvinfer1::IBuilder*, nvinfer1::IBuilderConfig*, _NvDsInferContextInitParams const*, nvinfer1::DataType, nvinfer1::ICudaEngine*&), _NvDsInferContextInitParams const&, NvDsInferNetworkMode&) ()
from ///opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#26 0x00007fffce4359b4 in nvdsinfer::TrtModelBuilder::buildModel(_NvDsInferContextInitParams const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&) () from ///opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#27 0x00007fffce3f55e4 in nvdsinfer::NvDsInferContextImpl::buildModel(_NvDsInferContextInitParams&) ()
from ///opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#28 0x00007fffce3f62a1 in nvdsinfer::NvDsInferContextImpl::generateBackendContext(_NvDsInferContextInitParams&) ()
from ///opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#29 0x00007fffce3f053b in nvdsinfer::NvDsInferContextImpl::initialize(_NvDsInferContextInitParams&, void*, void (*)(INvDsInferContext*, unsigned int, NvDsInferLogLevel, char const*, void*)) () from ///opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#30 0x00007fffce3f6ce9 in createNvDsInferContext(INvDsInferContext**, _NvDsInferContextInitParams&, void*, void (*)(INvDsInferContext*, unsigned int, NvDsInferLogLevel, char const*, void*)) () from ///opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#31 0x00007fffd45677c1 in gst_nvinfer_start(_GstBaseTransform*) ()
from /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so
#32 0x00007fffe9bf6270 in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstbase-1.0.so.0
#33 0x00007fffe9bf6505 in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstbase-1.0.so.0
#34 0x00007ffff1e8c6ab in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#35 0x00007ffff1e8d126 in gst_pad_set_active () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#36 0x00007ffff1e6af0d in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#37 0x00007ffff1e7d884 in gst_iterator_fold () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#38 0x00007ffff1e6ba16 in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#39 0x00007ffff1e6d95e in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#40 0x00007ffff1e6dc8f in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#41 0x00007ffff1e6fd5e in gst_element_change_state () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#42 0x00007ffff1e70499 in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#43 0x00007ffff1e4da02 in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#44 0x00007ffff1e6fd5e in gst_element_change_state () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#45 0x00007ffff1e70045 in gst_element_change_state () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#46 0x00007ffff1e70499 in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#47 0x0000555555612e48 in StreamControl::init(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, int, long, int, int) ()
#48 0x0000555555679f39 in SmartDeviceControl::init_deep_pipeline() ()
#49 0x000055555567d49e in SmartDeviceControl::init() ()
#50 0x000055555556e678 in main ()

We have determined that the engine-generation code is what causes the program to crash. How can we resolve this issue?
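As an aside, the FP16 failure is unlikely to be a hardware-capability limit: the Tesla T4 is a Turing part with native FP16 support. A small sketch below illustrates this; the compute-capability values come from NVIDIA's published CUDA GPU list, and `has_fast_fp16` is a hypothetical heuristic that only approximates TensorRT's `platform_has_fast_fp16` flag, not an official API:

```python
# Compute capabilities (major, minor) of some GPUs named in this thread,
# per NVIDIA's published CUDA GPU list.
COMPUTE_CAP = {
    "GTX 1050": (6, 1),
    "RTX 2060": (7, 5),
    "RTX 3090": (8, 6),
    "RTX 4080": (8, 9),
    "Tesla T4": (7, 5),
}

def has_fast_fp16(cc):
    """Heuristic approximating TensorRT's platform_has_fast_fp16 flag:
    native FP16 throughput on sm_53/sm_60/sm_62 and on Volta (7.0) or
    newer, but not on consumer Pascal (sm_61). Note this is about *fast*
    FP16; TensorRT can still build FP16 engines on other parts."""
    return cc >= (7, 0) or cc in {(5, 3), (6, 0), (6, 2)}

# The T4 reports 7.5, so FP16 itself is supported on that platform.
print(has_fast_fp16(COMPUTE_CAP["Tesla T4"]))  # → True
```

Since the hardware can do FP16, the crash points at the engine-building code path (or its resource usage) rather than the precision mode itself.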

From DeepStream-Yolo/config_infer_primary_yoloV7.txt at master · marcoslucianops/DeepStream-Yolo (github.com), I don't think that sample is for a Darknet weights file. It seems the author uses an ONNX model to generate the engine file outside the DeepStream app. Please raise your problem with the original author of the project you are referring to.

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=/home/lzy/Deepstream_iTS-main/remote_gnr/model/66/yolov7.cfg
model-file=/home/lzy/Deepstream_iTS-main/remote_gnr/model/66/yolov7_last.weights
model-engine-file=/home/lzy/Deepstream_iTS-main/remote_gnr/model/66/model_b1_gpu0_fp16.engine
#int8-calib-file=calib.table
labelfile-path=/home/lzy/Deepstream_iTS-main/remote_gnr/model/labels.txt
batch-size=1
network-mode=2
num-detected-classes=9
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=/home/lzy/Deepstream_iTS-main/remote_gnr/model/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300

This is the configuration file I am using.
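When moving a config like this to a new machine (such as the T4 cloud server), one low-effort check is to verify that every path entry actually exists before nvinfer tries to build the engine. Below is a minimal sketch using only the Python standard library; the key names are taken from the config above, and `missing_files` is a hypothetical helper, not part of DeepStream:

```python
import configparser
import os

# [property] keys from the config above that must point at existing files.
# model-engine-file is deliberately excluded: nvinfer (re)builds the engine
# when that file is absent.
PATH_KEYS = ("custom-network-config", "model-file",
             "labelfile-path", "custom-lib-path")

def missing_files(config_text):
    """Return the [property] path entries that do not exist on disk."""
    cfg = configparser.ConfigParser()
    cfg.read_string(config_text)
    prop = cfg["property"]
    return [k for k in PATH_KEYS if k in prop and not os.path.isfile(prop[k])]

# Hypothetical example; on a machine without these files, both keys are flagged.
example = """
[property]
custom-network-config=/home/lzy/model/yolov7.cfg
model-file=/home/lzy/model/yolov7_last.weights
net-scale-factor=0.0039215697906911373
"""
print(missing_files(example))

# Side note: net-scale-factor in this config is just 1/255 stored as a
# float32, i.e. 0..255 pixel values normalised to 0..1.
assert abs(0.0039215697906911373 - 1 / 255) < 1e-8
```

This will not fix the FP16 build crash, but it rules out the most common cause of engine-generation failures on a freshly provisioned machine: a stale absolute path.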

The latest information is in the previous message. Can you help me solve the problem? Thank you!

It seems your implementation differs from the GitHub project you posted here: you are using the library and functions provided by the project without knowing which model they are intended for. Please raise your problem in that project and ask the original author of the GitHub repo.

Another option is to refer to the NVIDIA sample yolo_deepstream/deepstream_yolo at main · NVIDIA-AI-IOT/yolo_deepstream (github.com) for how to integrate YOLOv7 with DeepStream.

Thanks for your help.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.