Please provide the following information when requesting support.
• Hardware: RTX A6000
• Network Type: actionrecognitionnet
• Training spec file: train_rgb_3d_finetune.yaml (761 Bytes)
• How to reproduce the issue: run deepstream-3d-action-recognition -c deepstream_action_recognition_config.txt
I converted the exported ONNX file to an engine file with TensorRT and put it in the folder /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-action-recognition, and I also updated the model file path in the configuration file. However, the program does not run. How can I solve this problem?
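For reference, one common way to do such an offline conversion is with trtexec. This is only a sketch: the ONNX filename and input shapes below are assumptions taken from the engine info printed later in this thread (input input_rgb, min 1x3x3x224x224, opt/max 4x3x3x224x224), and the trtexec path assumes the /usr/src/tensorrt install mentioned below.

```shell
# Hypothetical trtexec invocation; filenames and shapes are assumptions
# matching the engine info logged later in this thread.
/usr/src/tensorrt/bin/trtexec \
  --onnx=rgb_resnet18_3.onnx \
  --saveEngine=rgb_resnet18_3.onnx_b4_gpu0_fp16.engine \
  --fp16 \
  --minShapes=input_rgb:1x3x3x224x224 \
  --optShapes=input_rgb:4x3x3x224x224 \
  --maxShapes=input_rgb:4x3x3x224x224
```

Note that, as mentioned below, this manual step is optional: DeepStream can build the engine itself on first run.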
Moving to Deepstream forum.
Where are you running this, and with which TensorRT and CUDA versions? From the log, your model path is wrong. DeepStream can generate the engine file automatically, so you don't need to convert it yourself.
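For context, nvinfer builds the engine from the ONNX file on the first run and serializes it to disk. A minimal sketch of the relevant keys in the sample's config_infer_primary_3d_action.txt (the file paths here are assumptions) would look like:

```
[property]
# nvinfer builds the engine from this ONNX file if the engine below
# does not exist yet, then serializes it for subsequent runs.
onnx-file=rgb_resnet18_3.onnx
model-engine-file=rgb_resnet18_3.onnx_b4_gpu0_fp16.engine
network-mode=2
batch-size=4
```

network-mode=2 selects FP16, which matches the engine filename suffix seen in the log below.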
TensorRT: /usr/src/tensorrt
CUDA: /usr/local/cuda-11.8
CUDA Runtime Version: 11.8
TensorRT Version: 8.5
DeepStream works fine with the default engine file. I just want to try the model file exported from the TAO Toolkit; the export file format is ONNX.
I set up the configuration file according to the reference above. The program started successfully, but it soon ended with an error. Do I need to add new configuration parameters?
Can you post the full log?
Yes, here it is:
lab@lab:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-action-recognition$ deepstream-3d-action-recognition -c deepstream_action_recognition_config.txt
num-sources = 4
Now playing: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_walk.mov, file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_ride_bike.mov, file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_run.mov, file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_push.mov,
0:00:00.504928510 12834 0x560e7f121c60 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: TensorRT encountered issues when converting weights between types and that could affect accuracy.
WARNING: [TRT]: If this is not the desired behavior, please modify the weights or retrain with regularization to adjust the magnitude of the weights.
WARNING: [TRT]: Check verbose logs for the list of affected weights.
WARNING: [TRT]: - 21 weights are affected by this issue: Detected subnormal FP16 values.
WARNING: [TRT]: - 20 weights are affected by this issue: Detected values less than smallest positive FP16 subnormal value and converted them to the FP16 minimum subnormalized value.
0:00:15.996320601 12834 0x560e7f121c60 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1955> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-3d-action-recognition/rgb_resnet18_3.onnx_b4_gpu0_fp16.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 2
0 INPUT kFLOAT input_rgb 3x3x224x224 min: 1x3x3x224x224 opt: 4x3x3x224x224 Max: 4x3x3x224x224
1 OUTPUT kFLOAT fc_pred 2 min: 0 opt: 0 Max: 0
0:00:16.069811865 12834 0x560e7f121c60 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:config_infer_primary_3d_action.txt sucessfully
sequence_image_process.cpp:494, [INFO: CUSTOM_LIB] 3D custom sequence network info(NCSHW), [N: 4, C: 3, S: 32, H: 224, W:224]
sequence_image_process.cpp:522, [INFO: CUSTOM_LIB] Sequence preprocess buffer manager initialized with stride: 1, subsample: 0
sequence_image_process.cpp:526, [INFO: CUSTOM_LIB] SequenceImagePreprocess initialized successfully
Using user provided processing height = 224 and processing width = 224
Decodebin child added: source
Decodebin child added: decodebin0
Decodebin child added: source
Decodebin child added: decodebin1
Decodebin child added: source
Decodebin child added: decodebin2
Decodebin child added: source
Decodebin child added: decodebin3
Running...
Decodebin child added: qtdemux1
Decodebin child added: qtdemux0
Decodebin child added: qtdemux2
Decodebin child added: qtdemux3
Decodebin child added: multiqueue0
Decodebin child added: multiqueue1
Decodebin child added: multiqueue2
Decodebin child added: multiqueue3
Decodebin child added: h264parse0
Decodebin child added: h264parse1
Decodebin child added: h264parse3
Decodebin child added: h264parse2
Decodebin child added: capsfilter0
Decodebin child added: capsfilter1
Decodebin child added: capsfilter2
Decodebin child added: capsfilter3
Decodebin child added: aacparse0
Decodebin child added: aacparse2
Decodebin child added: aacparse1
Decodebin child added: aacparse3
Decodebin child added: avdec_aac0
Decodebin child added: avdec_aac1
Decodebin child added: avdec_aac2
Decodebin child added: avdec_aac3
Decodebin child added: nvv4l2decoder0
Decodebin child added: nvv4l2decoder1
Decodebin child added: nvv4l2decoder2
Decodebin child added: nvv4l2decoder3
In cb_newpad
In cb_newpad
In cb_newpad
In cb_newpad
In cb_newpad
In cb_newpad
In cb_newpad
In cb_newpad
FPS(cur/avg): 34.70 (34.70) 34.70 (34.70) 34.70 (34.70) 34.70 (34.70)
WARNING: nvdsinfer_backend.cpp:157 Backend context bufferIdx(0) request dims:4x3x32x224x224 is out of range, [min: 1x3x3x224x224, max: 4x3x3x224x224]
ERROR: nvdsinfer_backend.cpp:472 Failed to enqueue buffer in fulldims mode because binding idx: 0 with batchDims: 4x3x32x224x224 is not supported
ERROR: nvdsinfer_context_impl.cpp:1720 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_INVALID_PARAMS
0:00:17.288070911 12834 0x560e7dcb2aa0 WARN nvinfer gstnvinfer.cpp:2070:gst_nvinfer_process_tensor_input:<primary-nvinference-engine> error: Failed to queue input batch for inferencing
ERROR from element primary-nvinference-engine: Failed to queue input batch for inferencing
Error details: gstnvinfer.cpp(2070): gst_nvinfer_process_tensor_input (): /GstPipeline:preprocess-test-pipeline/GstNvInfer:primary-nvinference-engine
Returned, stopping playback
sequence_image_process.cpp:586, [INFO: CUSTOM_LIB] SequenceImagePreprocess is deinitializing
Your model's input dimension is 3x3x224x224 (sequence length 3), while our sample is configured for a model with 3x32x224x224 (sequence length 32). Please modify the nvpreprocess library to generate the tensor data for your own model.
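Concretely, the shape mismatch in the error (request dims 4x3x32x224x224 vs. the engine's max 4x3x3x224x224) suggests the preprocess side is still producing 32-frame sequences. A sketch of the change, assuming the sample's config_preprocess_3d_custom.txt and a model that really expects a sequence length of 3:

```
[property]
# NCSHW: batch; channels; sequence length; height; width.
# Sample default is 4;3;32;224;224; this model expects S=3.
network-input-shape=4;3;3;224;224
```

Whether this alone is sufficient depends on how the custom sequence library handles the shorter sequence; you may still need to adjust the library as noted above.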
system, September 18, 2023, 7:53am:
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.