When converting the YOLOv8 ONNX model to a TensorRT engine inside DeepStream, there is a loss of accuracy, especially for larger objects. However, when the same model is converted from ONNX to an engine outside of DeepStream, accuracy is not affected. The same problem occurs with a YOLOv4 model trained with TAO.
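For reference, this is roughly how the ONNX model can be run outside of DeepStream to produce baseline detections for the comparison; the file path and 640x640 input size come from the config below, while the input handling assumes a standard YOLOv8 export:

import numpy as np
import onnxruntime as ort

# Run the same ONNX model outside of DeepStream to get reference outputs.
sess = ort.InferenceSession(
    "/app/models/yolov8/yolov8m_best.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)  # expected to be a single 1x3x640x640 image input

# Dummy 640x640 input scaled to [0, 1], matching net-scale-factor = 1/255 in the config
x = np.random.randint(0, 256, (1, 3, 640, 640)).astype(np.float32) / 255.0
out = sess.run(None, {inp.name: x})
for o in out:
    print(o.shape)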
Warning and error logs from the engine build (a sketch of the suggested workspace-limit fix follows the log):
Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in the CUDA C Programming Guide.
WARNING: [TRT]: onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
Building the TensorRT Engine
ERROR: [TRT]: 2: [virtualMemoryBuffer.cpp::resizePhysical::160] Error Code 2: OutOfMemory (no further information)
ERROR: [TRT]: 2: [virtualMemoryBuffer.cpp::resizePhysical::145] Error Code 2: OutOfMemory (no further information)
WARNING: [TRT]: Requested amount of GPU memory (17179869184 bytes) could not be allocated. There may not be enough free memory for allocation to succeed.
WARNING: [TRT]: Skipping tactic 3 due to insufficient memory on requested size of 17179869184 detected for tactic 0x0000000000000004.
Try decreasing the workspace size with IBuilderConfig::setMemoryPoolLimit().
ERROR: [TRT]: 2: [virtualMemoryBuffer.cpp::resizePhysical::160] Error Code 2: OutOfMemory (no further information)
ERROR: [TRT]: 2: [virtualMemoryBuffer.cpp::resizePhysical::145] Error Code 2: OutOfMemory (no further information)
WARNING: [TRT]: Requested amount of GPU memory (17179869184 bytes) could not be allocated. There may not be enough free memory for allocation to succeed.
WARNING: [TRT]: Skipping tactic 8 due to insufficient memory on requested size of 17179869184 detected for tactic 0x000000000000003c.
Try decreasing the workspace size with IBuilderConfig::setMemoryPoolLimit().
ERROR: [TRT]: 2: [virtualMemoryBuffer.cpp::resizePhysical::160] Error Code 2: OutOfMemory (no further information)
ERROR: [TRT]: 2: [virtualMemoryBuffer.cpp::resizePhysical::145] Error Code 2: OutOfMemory (no further information)
WARNING: [TRT]: Requested amount of GPU memory (17179869184 bytes) could not be allocated. There may not be enough free memory for allocation to succeed.
WARNING: [TRT]: Skipping tactic 3 due to insufficient memory on requested size of 17179869184 detected for tactic 0x0000000000000004.
Try decreasing the workspace size with IBuilderConfig::setMemoryPoolLimit().
ERROR: [TRT]: 2: [virtualMemoryBuffer.cpp::resizePhysical::160] Error Code 2: OutOfMemory (no further information)
ERROR: [TRT]: 2: [virtualMemoryBuffer.cpp::resizePhysical::145] Error Code 2: OutOfMemory (no further information)
WARNING: [TRT]: Requested amount of GPU memory (17179869184 bytes) could not be allocated. There may not be enough free memory for allocation to succeed.
WARNING: [TRT]: Skipping tactic 8 due to insufficient memory on requested size of 17179869184 detected for tactic 0x000000000000003c.
Try decreasing the workspace size with IBuilderConfig::setMemoryPoolLimit().
Building complete
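The skipped tactics each request 17179869184 bytes (16 GiB), and the log suggests lowering the workspace limit via IBuilderConfig::setMemoryPoolLimit(). Below is a minimal sketch of the equivalent standalone build with the TensorRT Python API, with the workspace capped at 2 GiB and CUDA lazy loading enabled; the 2 GiB value and the output file name are placeholders, not values from the original setup:

import os
import tensorrt as trt

# Enable CUDA lazy loading (first warning above) before CUDA is initialized.
os.environ["CUDA_MODULE_LOADING"] = "LAZY"

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("/app/models/yolov8/yolov8m_best.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
# Python equivalent of IBuilderConfig::setMemoryPoolLimit(): cap the workspace
# pool at 2 GiB instead of letting tactics request 16 GiB.
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 2 << 30)

engine = builder.build_serialized_network(network, config)
with open("yolov8m_best_fp32.engine", "wb") as f:
    f.write(engine)

Inside DeepStream, the commented-out workspace-size key in the config below (value in MiB) is meant to control the same limit.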
YOLOv8 model inference configuration file (a preprocessing sketch follows the config):
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
num-detected-classes=1
model-color-format=0
infer-dims=3;640;640
process-mode=2
onnx-file=/app/models/yolov8/yolov8m_best.onnx
model-engine-file=/app/models/yolov8/yolo_best.onnx_b1_gpu0_fp32.engine
labelfile-path=/app/models/yolov8/labels.txt
#int8-calib-file=calib.table
batch-size=1
network-mode=0
num-detected-classes=15
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#workspace-size=2000
engine-create-func-name=NvDsInferYoloCudaEngineGet
output-blob-names=BatcheYolo
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=/app/config/yolov8/libnvdsinfer_custom_impl_Yolo.so
[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
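For a like-for-like comparison outside of DeepStream, here is a rough sketch of the preprocessing the keys above describe (aspect-ratio-preserving resize with symmetric padding, RGB, net-scale-factor = 1/255); the use of OpenCV and the zero pad value are assumptions, not necessarily what nvinfer does internally:

import cv2
import numpy as np

def preprocess(img_bgr, size=640):
    # Approximation of the nvinfer preprocessing implied by the config:
    # maintain-aspect-ratio=1, symmetric-padding=1, model-color-format=0 (RGB),
    # net-scale-factor=1/255, infer-dims=3;640;640.
    h, w = img_bgr.shape[:2]
    scale = min(size / h, size / w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    resized = cv2.resize(img_bgr, (nw, nh))
    canvas = np.zeros((size, size, 3), dtype=np.uint8)   # pad value assumed 0
    top, left = (size - nh) // 2, (size - nw) // 2        # symmetric padding
    canvas[top:top + nh, left:left + nw] = resized
    rgb = cv2.cvtColor(canvas, cv2.COLOR_BGR2RGB)
    x = rgb.astype(np.float32) * 0.0039215697906911373    # net-scale-factor
    return np.transpose(x, (2, 0, 1))[None]                # 1x3x640x640, NCHW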
Hardware Platform (Jetson / GPU): GPU
DeepStream Version: 6.3
TensorRT Version: 8.5.1
NVIDIA GPU Driver Version (valid for GPU only): 510.73.08
Issue Type: Bug