I'm using a custom RF-DETR model for object detection, and I want to run inference with it in DeepStream 6.4, but I'm getting an error. I would also like the code for custom-training the model and the setup of the related files.
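For reference, this is the kind of training and export flow I'm trying to set up (a minimal sketch, assuming the Roboflow rfdetr Python package and a COCO-format dataset; the exact arguments are my assumption and may differ):

# Minimal sketch: fine-tune RF-DETR and export it to ONNX.
# Assumes the Roboflow rfdetr package; dataset_dir should contain
# COCO-format train/valid/test splits (e.g. a Roboflow COCO export).
from rfdetr import RFDETRBase

model = RFDETRBase()  # base variant uses a 560x560 input, matching the engine log below
model.train(
    dataset_dir="path/to/dataset",  # placeholder path
    epochs=20,
    batch_size=4,
    lr=1e-4,
    output_dir="output",
)

# Export the trained model to ONNX for TensorRT / DeepStream.
model.export()  # writes an .onnx file into the output directory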
The error I'm hitting and my config file are as follows:
WARNING: [TRT]: Running layernorm after self-attention in FP16 may cause overflow. Exporting the model to the latest available ONNX opset (later than opset 17) to use the INormalizationLayer, or forcing layernorm layers to run in FP32 precision can help with preserving accuracy.
WARNING: [TRT]: TensorRT encountered issues when converting weights between types and that could affect accuracy.
WARNING: [TRT]: If this is not the desired behavior, please modify the weights or retrain with regularization to adjust the magnitude of the weights.
WARNING: [TRT]: Check verbose logs for the list of affected weights.
WARNING: [TRT]: - 163 weights are affected by this issue: Detected subnormal FP16 values.
WARNING: [TRT]: - 42 weights are affected by this issue: Detected values less than smallest positive FP16 subnormal value and converted them to the FP16 minimum subnormalized value.
Building complete
0:04:04.103782747 353770 0xaaab26bc5cd0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2138> [UID = 1]: serialize cuda engine to file: /nvme0n1/deepstream_folder/deepstream_task_yolo12/model_b2_gpu0_fp16.engine successfully
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input 3x560x560
1 OUTPUT kFLOAT dets 300x4
2 OUTPUT kFLOAT labels 300x2
0:04:04.454497089 353770 0xaaab26bc5cd0 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:2024> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:04:04.454534753 353770 0xaaab26bc5cd0 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2156> [UID = 1]: deserialized backend context :/nvme0n1/deepstream_folder/deepstream_task_yolo12/model_b2_gpu0_fp16.engine failed to match config params
0:04:05.059453025 353770 0xaaab26bc5cd0 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2212> [UID = 1]: build backend context failed
0:04:05.059541762 353770 0xaaab26bc5cd0 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1351> [UID = 1]: generate backend failed, check config file settings
0:04:05.059635139 353770 0xaaab26bc5cd0 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:04:05.059655363 353770 0xaaab26bc5cd0 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start: error: Config file path: config_infer_primary_yolo12.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
[NvMultiObjectTracker] De-initialized
**PERF: {'stream0': 0.0, 'stream1': 0.0}
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: config_infer_primary_yolo12.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
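From the warning at build time, the backend reports maxBatchSize 1 while 2 was requested, so I suspect the exported ONNX has a fixed batch dimension. A quick way to check that (a sketch using the standard onnx Python package; best.onnx is the file from my config):

# Sketch: print the ONNX input shapes to see whether the batch
# dimension is fixed at 1 (a static batch would explain why the
# built engine reports maxBatchSize 1).
import onnx

model = onnx.load("best.onnx")
for inp in model.graph.input:
    dims = [d.dim_param or d.dim_value
            for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)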
My config file is below (note that the log shows the engine being serialized as model_b2_gpu0_fp16.engine, while the config points to model_b1_gpu0_fp16.engine):
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=best.onnx
model-engine-file=model_b1_gpu0_fp16.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=2
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#workspace-size=2000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
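For comparison, this is roughly what I think a config matched to the RF-DETR outputs (dets 300x4, labels 300x2) would look like. This is a sketch, not a working config: NvDsInferParseRFDETR and libnvdsinfer_custom_impl_rfdetr.so are placeholder names for a custom DETR-style parser that would still need to be written, since the YOLO parser does not match these output tensors, and the normalization values are an assumption:

[property]
gpu-id=0
# normalization is an assumption; RF-DETR preprocessing may need ImageNet mean/std
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=best.onnx
# engine file name must match the batch size actually built (b1 for batch-size=1)
model-engine-file=model_b1_gpu0_fp16.engine
labelfile-path=labels.txt
# must not exceed the maxBatchSize of the built engine (1, per the log above)
batch-size=1
network-mode=2
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
network-type=0
# DETR-style models need no NMS; cluster-mode=4 disables clustering
cluster-mode=4
maintain-aspect-ratio=1
# placeholder names for a custom parser of the dets/labels outputs
parse-bbox-func-name=NvDsInferParseRFDETR
custom-lib-path=libnvdsinfer_custom_impl_rfdetr.so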