Nvinfer error: NVDSINFER_INVALID_PARAMS

• Hardware Platform: Jetson Xavier AGX
• DeepStream Version: 6.1
• TensorRT Version: 8.0.1-1+cuda11.4

I have this crash:
"
WARNING: [TRT]: Weights [name=/layer4/layer4.1/conv2/Conv + /layer4/layer4.1/Add + /layer4/layer4.1/relu_1/Relu.weight] had the following issues when converted to FP16:
WARNING: [TRT]: - Subnormal FP16 values detected.
WARNING: [TRT]: - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
WARNING: [TRT]: If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
WARNING: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-3d-action-recognition/my_rgb_resnet18_3_cyim_custom.etlt_b4_gpu0_fp16.engine opened error
INFO: [FullDims Engine Info]: layers num: 2
0 INPUT kFLOAT input_rgb 3x3x224x224 min: 1x3x3x224x224 opt: 4x3x3x224x224 Max: 4x3x3x224x224
1 OUTPUT kFLOAT fc_pred 7 min: 0 opt: 0 Max: 0

sequence_image_process.cpp:494, [INFO: CUSTOM_LIB] 3D custom sequence network info(NCSHW), [N: 4, C: 3, S: 32, H: 224, W:224]
sequence_image_process.cpp:522, [INFO: CUSTOM_LIB] Sequence preprocess buffer manager initialized with stride: 1, subsample: 0
sequence_image_process.cpp:526, [INFO: CUSTOM_LIB] SequenceImagePreprocess initialized successfully
Warning: converting ROIs to RGBA for VIC mode
Using user provided processing height = 224 and processing width = 224
Decodebin child added: source
Decodebin child added: decodebin0
Running…
Decodebin child added: matroskademux0
Decodebin child added: multiqueue0
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: opusdec0
Decodebin child added: nvv4l2decoder0
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Opening in BLOCKING MODE
In cb_newpad
In cb_newpad
FPS(cur/avg): 54.09 (54.09)
ERROR: Failed to enqueue buffer in fulldims mode because binding idx: 0 with batchDims: 1x3x32x224x224 is not supported
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_INVALID_PARAMS
0:00:49.676381721 105819 0xaaaafe080000 WARN nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input: error: Failed to queue input batch for inferencing
ERROR from element primary-nvinference-engine: Failed to queue input batch for inferencing
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:preprocess-test-pipeline/GstNvInfer:primary-nvinference-engine
WARNING: Backend context bufferIdx(0) request dims:1x3x32x224x224 is out of range, [min: 1x3x3x224x224, max: 4x3x3x224x224]
Returned, stopping playback
sequence_image_process.cpp:586, [INFO: CUSTOM_LIB] SequenceImagePreprocess is deinitializing
Deleting pipelineeee

"

First, I rebuilt a new model from actionrecognitionnet.ipynb, changing the labels:
label_map:
drink: 0
eat: 1
shake_hands: 2
sit: 3
smoke: 4
stand: 5
talk: 6

The config file has been changed to build a new model for these new labels, and deepstream_3d_action_recognition.cpp has been updated for these labels as well.
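Concretely, that change is roughly the class-name list used to map the fc_pred output indices to labels; a sketch only, with an illustrative variable name (the array in deepstream_3d_action_recognition.cpp may be named differently), in label_map order:

"
#include <string>
#include <vector>

// Illustrative only: the label array in deepstream_3d_action_recognition.cpp may
// have a different name; what matters is that its order matches label_map (0..6).
static const std::vector<std::string> kActionLabels = {
    "drink", "eat", "shake_hands", "sit", "smoke", "stand", "talk"};
"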

after "tao action_recognition inference " I export new models as below

"
!tao action_recognition export \
    -e $SPECS_DIR/export_rgb.yaml \
    -k $KEY \
    model=$RESULTS_DIR/rgb_3d_ptm/rgb_only_model_custom.tlt \
    output_file=$RESULTS_DIR/export/my_rgb_resnet18_3__custom.etlt
"

Everything works fine, but I get the crash above after launching the pipeline with: deepstream-3d-action-recognition -c deepstream_action_recognition_config_custom.txt

Could you try to change the network-input-shape in the config files?
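For reference, that shape lives in the nvdspreprocess part of the sample config (e.g. config_preprocess_3d_custom.txt); a sketch of the relevant key only, assuming the default file layout. It has to fit inside the engine profile printed in your log (min 1x3x3x224x224, max 4x3x3x224x224), i.e. S=3 rather than 32 for your current .etlt:

"
[property]
# N;C;S;H;W for the 3D (NCSHW) model; must fit within the engine's
# min/max dims, which for the current .etlt are Nx3x3x224x224
network-input-shape=4;3;3;224;224
"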

Thanks a lot for your advice; there is no more core dump.
How can I configure the training to accept the initial network-input-shape (4;3;32;224;224)?

The pipeline does not seem to have loaded my model, because it shows me the values of the initial model. Should that one be unloaded?

num-sources = 1
Now playing: file:///opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-3d-action-recognition/input_file.mkv,
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1484 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-3d-action-recognition/./my_rgb_resnet18_3_cyim_custom_export.etlt_b4_gpu0_fp16.engine open error
0:00:00.282450099 506 0x563f2e2192f0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-3d-action-recognition/./my_rgb_resnet18_3_custom_export.etlt_b4_gpu0_fp16.engine failed
0:00:00.282488457 506 0x563f2e2192f0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-3d-action-recognition/./my_rgb_resnet18_3_cyim_custom_export.etlt_b4_gpu0_fp16.engine failed, try rebuild
0:00:00.282498450 506 0x563f2e2192f0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:686 FP16 not supported by platform. Using FP32 mode.
WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:1224 FP16 not supported by platform. Using FP32 mode.
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
0:00:03.236605109 506 0x563f2e2192f0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-3d-action-recognition/my_rgb_resnet18_3_custom_export.etlt_b4_gpu0_fp32.engine successfully
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 2
0 INPUT kFLOAT input_rgb 3x3x224x224 min: 1x3x3x224x224 opt: 4x3x3x224x224 Max: 4x3x3x224x224
1 OUTPUT kFLOAT fc_pred 7 min: 0 opt: 0 Max: 0

0:00:03.239329804 506 0x563f2e2192f0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_3d_action.txt sucessfully
sequence_image_process.cpp:496, [INFO: CUSTOM_LIB] 3D custom sequence network info(NCSHW), [N: 4, C: 3, S: 3, H: 224, W:224]
sequence_image_process.cpp:524, [INFO: CUSTOM_LIB] Sequence preprocess buffer manager initialized with stride: 1, subsample: 0
sequence_image_process.cpp:526, [INFO: CUSTOM_LIB] SequenceImagePreprocess initialized successfully
Using user provided processing height = 224 and processing width = 224
Decodebin child added: source

I have launched a new training with a sequence length of 32; I hope to get better results with the GStreamer pipeline.
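If I read the notebook spec correctly, the change for this is roughly the following (a sketch; the spec file name and key names come from the notebook, e.g. train_rgb_3d_finetune.yaml, and may vary with the TAO version):

"
model_config:
  model_type: rgb
  input_type: 3d
  backbone: resnet18
  # sequence length S = 32 frames, so the exported model matches
  # network-input-shape=4;3;32;224;224 on the DeepStream side
  rgb_seq_length: 32
"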

The log doesn't show any error after rebuilding the model. Maybe you can refer to the link below first:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_3D_Action.html?highlight=ncdhw

My new model seems to work fine. However, when I test on my Xavier AGX, I don't get better performance than on my laptop GPU (Quadro P620). How can I export a model optimized for the Xavier AGX? I can't find the options to pass to "tao export".

Could you open a new topic about the performance problem and attach your comparison data? Thanks

