I have completed 2D and 3D action recognition model training using TAO 3.0, and later tried to run the model on DeepStream 6.0. I have edited the model path and labels.txt. When I try to run the application, I encounter this error:
divya@divya-GF65-Thin-10UE:/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-3d-action-recognition$ ./deepstream-3d-action-recognition -c deepstream_action_recognition_config.txt
num-sources = 2
Now playing: file:///home/divya/Downloads/ride.mp4, file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_ride_bike.mov,
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:1482 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-3d-action-recognition/./resnet18_2d_rgb_hmdb5_32.etlt_b4_gpu0_fp16.engine open error
0:00:00.761201560 4773 0x55baf3d75320 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1888> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-3d-action-recognition/./resnet18_2d_rgb_hmdb5_32.etlt_b4_gpu0_fp16.engine failed
0:00:00.775030852 4773 0x55baf3d75320 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1993> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-3d-action-recognition/./resnet18_2d_rgb_hmdb5_32.etlt_b4_gpu0_fp16.engine failed, try rebuild
0:00:00.775042336 4773 0x55baf3d75320 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
0:00:17.183637061 4773 0x55baf3d75320 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1946> [UID = 1]: serialize cuda engine to file: /home/divya/computer_vision/cv_samples_v1.4.0/action_recognition_net/results/export_2d/rgb_resnet18_2.etlt_b1_gpu0_fp16.engine successfully
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_rgb 9x224x224
1 OUTPUT kFLOAT fc_pred 2
0:00:17.199869270 4773 0x55baf3d75320 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_2d_action.txt sucessfully
sequence_image_process.cpp:499, [INFO: CUSTOM_LIB] 2D custom sequence network shape NSHW[4, 96, 224, 224], reshaped as [N: 4, C: 3, S:32, H: 224, W:224]
sequence_image_process.cpp:522, [INFO: CUSTOM_LIB] Sequence preprocess buffer manager initialized with stride: 1, subsample: 0
sequence_image_process.cpp:526, [INFO: CUSTOM_LIB] SequenceImagePreprocess initialized successfully
Using user provided processing height = 224 and processing width = 224
Decodebin child added: source
Decodebin child added: decodebin0
Decodebin child added: source
Decodebin child added: decodebin1
Running…
Decodebin child added: qtdemux0
Decodebin child added: qtdemux1
Decodebin child added: multiqueue0
Decodebin child added: multiqueue1
Decodebin child added: h264parse0
Decodebin child added: h264parse1
Decodebin child added: capsfilter0
Decodebin child added: capsfilter1
Decodebin child added: aacparse0
Decodebin child added: aacparse1
Decodebin child added: avdec_aac0
Decodebin child added: avdec_aac1
Decodebin child added: nvv4l2decoder0
Decodebin child added: nvv4l2decoder1
In cb_newpad
In cb_newpad
In cb_newpad
In cb_newpad
WARNING: nvdsinfer_backend.cpp:157 Backend context bufferIdx(0) request dims:1x96x224x224 is out of range, [min: 1x9x224x224, max: 1x9x224x224]
ERROR: nvdsinfer_backend.cpp:472 Failed to enqueue buffer in fulldims mode because binding idx: 0 with batchDims: 1x96x224x224 is not supported
ERROR: nvdsinfer_context_impl.cpp:1711 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_INVALID_PARAMS
0:00:18.131974696 4773 0x55baf1f3acc0 WARN nvinfer gstnvinfer.cpp:2009:gst_nvinfer_process_tensor_input: error: Failed to queue input batch for inferencing
ERROR from element primary-nvinference-engine: Failed to queue input batch for inferencing
Error details: gstnvinfer.cpp(2009): gst_nvinfer_process_tensor_input (): /GstPipeline:preprocess-test-pipeline/GstNvInfer:primary-nvinference-engine
Returned, stopping playback
sequence_image_process.cpp:586, [INFO: CUSTOM_LIB] SequenceImagePreprocess is deinitializing
Deleting pipeline
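For context, the edits I made to the inference config were along these lines. This is only an illustrative sketch of config_infer_primary_2d_action.txt; the paths, model key, and label file name below are placeholders, not the exact values from my setup:

```ini
# Illustrative excerpt of config_infer_primary_2d_action.txt.
# All values below are placeholders -- point them at your own exported
# TAO model and label file.
[property]
tlt-encoded-model=./resnet18_2d_rgb_hmdb5_32.etlt
tlt-model-key=nvidia_tao
labelfile-path=./labels.txt
# network-mode=2 selects FP16, matching the *_fp16.engine name in the log
network-mode=2
```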
Hi @Morganh, thanks for your reply.
Yes, I have set the correct file (deepstream_action_recognition_config.txt) and have also followed this: DeepStream 3D Action Recognition App — DeepStream 6.1.1 Release documentation.
I made the required changes to network-input-shape in the respective 2D and 3D config files (config_infer_primary_2d_action.txt / config_infer_primary_3d_action.txt) and then executed these commands:
$ make
$ make install
but the same error is still shown whether I run the 2D or the 3D model.
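For reference, this is the key I edited, shown here with the sample's default 2D value (a sketch only; the exact numbers depend on how the model was exported). Per the custom-lib log above, the middle value is channels × sequence length (96 = 3 × 32), and it has to agree with the engine's reported input shape (the engine here reports input_rgb 9x224x224):

```ini
# 2D case, sample default: N;C*S;H;W
# batch 4, 3 channels x 32-frame sequence, 224x224 input
network-input-shape=4;96;224;224
```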
Thanks for your reply, @Morganh.
When I run ./deepstream-3d-action-recognition -c deepstream_action_recognition_config.txt, it shows "Load new model:config_infer_primary_2d_action.txt sucessfully" because deepstream_action_recognition_config.txt contains
Hi @Morganh, yes, the tiler is displayed with no errors, but the inference is not running.
For reference, I am attaching an image.
Also, could you explain why we changed from 96 to 9? Thanks
There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks