No resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine generated in DS7.1

I have just found an issue with the DS 7.1 deepstream-test2 sample.

It should have the following three engine files:

  • samples/models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx_b1_gpu0_int8.engine
  • samples/models/Secondary_VehicleMake/resnet18_vehiclemakenet_pruned.onnx_b16_gpu0_int8.engine
  • samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine

However, only two of the engine files are generated automatically. There is no samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine file.

So it takes a long time to launch the test2 code, with the following warnings:

WARNING: Deserialize engine failed because file path: /home/daniel/Work/jetson-fpv/utils/deepstream/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine open error
0:00:00.221537306 26352 0xaaaadeb46160 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2080> [UID = 3]: deserialize engine from file :/home/daniel/Work/jetson-fpv/utils/deepstream/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine failed
0:00:00.221582267 26352 0xaaaadeb46160 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2185> [UID = 3]: deserialize backend context from engine from file :/home/daniel/Work/jetson-fpv/utils/deepstream/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine failed, try rebuild
0:00:00.221598171 26352 0xaaaadeb46160 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 3]: Trying to create engine from model files
WARNING: INT8 calibration file not specified. Trying FP16 mode.
WARNING: [TRT]: DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU
^C^C^C0:02:18.111540483 26352 0xaaaadeb46160 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2138> [UID = 3]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-7.1/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_fp16.engine successfully

And I found that the calibration file is already there (the relevant config keys are sketched after the listing below):

$ ls samples/models/Secondary_VehicleTypes/
cal_trt.bin  labels.txt  resnet18_vehicletypenet_pruned.onnx  resnet18_vehicletypenet_pruned.onnx_b16_gpu0_fp16.engine
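For reference, the INT8-or-FP16 decision comes from the vehicle-type SGIE's nvinfer config. The excerpt below is a hypothetical sketch, not copied from my setup: the key names are the standard gst-nvinfer properties, and the paths are only illustrative for the symlinked samples/ layout. If the file named by int8-calib-file is missing or unreadable, nvinfer falls back to FP16 (the "Trying FP16 mode" warning above) and rebuilds the engine from the ONNX model, which is what makes the first launch so slow.

# Hypothetical excerpt of the vehicle-type SGIE nvinfer config; paths illustrative.
[property]
onnx-file=samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx
model-engine-file=samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine
int8-calib-file=samples/models/Secondary_VehicleTypes/cal_trt.bin
labelfile-path=samples/models/Secondary_VehicleTypes/labels.txt
batch-size=16
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=1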
Software part of jetson-stats 4.3.1 - (c) 2024, Raffaello Bonghi
Model: NVIDIA Jetson Orin Nano Developer Kit - Jetpack 6.2 [L4T 36.4.3]
NV Power Mode[0]: 15W
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
 - P-Number: p3767-0005
 - Module: NVIDIA Jetson Orin Nano (Developer kit)
Platform:
 - Distribution: Ubuntu 22.04 Jammy Jellyfish
 - Release: 5.15.148-tegra
jtop:
 - Version: 4.3.1
 - Service: Active
Libraries:
 - CUDA: 12.6.68
 - cuDNN: 9.3.0.75
 - TensorRT: 10.3.0.30
 - VPI: 3.2.4
 - OpenCV: 4.11.0 - with CUDA: YES
DeepStream C/C++ SDK version: 7.1

Python Environment:
Python 3.10.12
    GStreamer:                   YES (1.20.3)
  NVIDIA CUDA:                   YES (ver 12.6, CUFFT CUBLAS FAST_MATH)
         OpenCV version: 4.11.0  CUDA True
           YOLO version: 8.3.68
         PYCUDA version: 2024.1.2
          Torch version: 2.5.1+l4t36.4
    Torchvision version: 0.20.0
 DeepStream SDK version: 1.2.0
onnxruntime     version: 1.20.1
onnxruntime-gpu version: 1.19.2

FPV Environment:
MSPOSD version: c28d645 20250205_151537

Could you attach your command? I have tried it on my board, and the engine file is generated normally.

./deepstream-test2-app dstest2_config.yml

ls /opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleTypes/
cal_trt.bin  resnet18_vehicletypenet_pruned.onnx
labels.txt   resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine

see below:

git clone https://github.com/SnapDragonfly/jetson-fpv.git
cd jetson-fpv/utils/deepstream
ln -sf /opt/nvidia/deepstream/deepstream/samples/ samples
cd ../../
python3 ./utils/deepstream/deepstream_NvDCF.py -i file:///home/daniel/Work/jetson-fpv/utils/deepstream/samples/streams/sample_1080p_h264.mp4

It’s using the dstest2_* configuration files, which only adjust the file paths. It should work as expected, apart from those warnings.
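As a quick check of this layout, something like the minimal sketch below (not part of jetson-fpv; run from the checkout root) confirms which of the three engine files from the top of this topic actually exist behind the samples symlink:

#!/usr/bin/env python3
# Minimal sketch: resolve the samples symlink and report which of the three
# expected engine files exist. File names are the ones listed in this topic.
import os

base = os.path.realpath("utils/deepstream/samples")  # resolves the symlink
print("samples ->", base)

expected = [
    "models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx_b1_gpu0_int8.engine",
    "models/Secondary_VehicleMake/resnet18_vehiclemakenet_pruned.onnx_b16_gpu0_int8.engine",
    "models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine",
]
for rel in expected:
    path = os.path.join(base, rel)
    print("OK" if os.path.isfile(path) else "MISSING", path)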

daniel@daniel-nvidia:~/Work/jetson-fpv$ python3 ./utils/deepstream/deepstream_NvDCF.py -i file:///home/daniel/Work/jetson-fpv/utils/deepstream/samples/streams/sample_1080p_h264.mp4
Current working directory: /home/daniel/Work/jetson-fpv
New working directory: /home/daniel/Work/jetson-fpv/utils/deepstream
{'input': ['file:///home/daniel/Work/jetson-fpv/utils/deepstream/samples/streams/sample_1080p_h264.mp4'], 'input_codec': 'h264', 'no_display': False, 'file_loop': False, 'silent': False}
Creating Pipeline

Creating streamux

Creating source_bin  0

Creating source bin
source-bin-00
Creating Pgie

Creating tiler

Creating nvvidconv

Creating nvosd

Is it Integrated GPU? : 1
Creating nv3dsink

Adding elements to Pipeline

Now playing...
0 :  file:///home/daniel/Work/jetson-fpv/utils/deepstream/samples/streams/sample_1080p_h264.mp4
Starting pipeline

Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: Deserialize engine failed because file path: /home/daniel/Work/jetson-fpv/utils/deepstream/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine open error
0:00:00.214201723 29011 0xaaaada4bc440 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2080> [UID = 3]: deserialize engine from file :/home/daniel/Work/jetson-fpv/utils/deepstream/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine failed
0:00:00.214239611 29011 0xaaaada4bc440 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2185> [UID = 3]: deserialize backend context from engine from file :/home/daniel/Work/jetson-fpv/utils/deepstream/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine failed, try rebuild
0:00:00.214256634 29011 0xaaaada4bc440 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 3]: Trying to create engine from model files
WARNING: INT8 calibration file not specified. Trying FP16 mode.
WARNING: [TRT]: DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU

It seems that you are using a different version from mine: DS 7.1 on JetPack 6.2 (L4T 36.4.3).

daniel@daniel-nvidia:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2$ deepstream-app dstest2_config.**
ERROR: <main:655>: Specify config file with -c option
Quitting
App run failed
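(Side note on the error above: deepstream-app is the reference application binary, which requires the config to be passed with -c, whereas the test2 sample binary takes its config file as a positional argument, as in the earlier reply. The config name below is just a placeholder.)

deepstream-app -c <config_file>
./deepstream-test2-app dstest2_config.yml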

Update: @yuweiw We are using the Python demos from NVIDIA-AI-IOT/deepstream_python_apps at commit cb7fd9c8aa012178527e0cb84f91d1f5a0ad37ff.

@yuweiw Sorry, this is my mistake.

I have found the issue: the path of the calibration file in the config was wrong. ".sample" should be "sample".
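In case it helps anyone hitting the same symptom, a small pre-flight check like the sketch below would have flagged the bad path right away. It is a hypothetical script, not part of deepstream_python_apps; the key names are the standard gst-nvinfer path-valued properties, and the config file names you pass in are up to your setup (e.g. the dstest2_* SGIE configs).

#!/usr/bin/env python3
# Minimal sketch: flag path-valued nvinfer config entries that do not resolve.
# Assumes the classic key=value [property] format used by the dstest2_* configs.
import os, sys

PATH_KEYS = {"onnx-file", "model-engine-file", "int8-calib-file", "labelfile-path"}

def check(config_path):
    cfg_dir = os.path.dirname(os.path.abspath(config_path))
    with open(config_path) as f:
        for line in f:
            line = line.strip()
            if "=" not in line or line.startswith("#"):
                continue
            key, value = (s.strip() for s in line.split("=", 1))
            if key in PATH_KEYS:
                # Assumption: gst-nvinfer resolves relative paths against the
                # config file's directory.
                resolved = value if os.path.isabs(value) else os.path.join(cfg_dir, value)
                status = "OK" if os.path.isfile(resolved) else "MISSING"
                print(f"{status:8s}{key} -> {resolved}")

for cfg in sys.argv[1:]:
    print(f"--- {cfg}")
    check(cfg)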
