• Hardware: NVIDIA GeForce RTX 3060
• Network Type: ActionRecognition
• Ubuntu 20.04, DeepStream 6.1

I trained a custom 3D ActionRecognitionNet model, but while deploying it in DeepStream I'm facing this error -

divya@divya-GF65-Thin-10UE:/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-3d-action-recognition$ ./deepstream-3d-action-recognition -c deepstream_action_recognition_config.txt 
ERROR: Batch-size in network-input-shape should be atleast Sum Total of ROIs
num-sources = 2
Now playing: file:///home/divya/Downloads/ride.mp4, file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_ride_bike.mov,
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1482 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-3d-action-recognition/./resnet18_3d_rgb_hmdb5_32.etlt_b4_gpu0_fp16.engine open error
0:00:00.760950828  6105 0x5652d74f6120 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1888> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-3d-action-recognition/./resnet18_3d_rgb_hmdb5_32.etlt_b4_gpu0_fp16.engine failed
0:00:00.773897864  6105 0x5652d74f6120 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1993> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-3d-action-recognition/./resnet18_3d_rgb_hmdb5_32.etlt_b4_gpu0_fp16.engine failed, try rebuild
0:00:00.773908617  6105 0x5652d74f6120 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
0:00:11.642285793  6105 0x5652d74f6120 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1946> [UID = 1]: serialize cuda engine to file: /home/divya/computer_vision/cv_samples_v1.4.0/action_recognition_net/results/export/rgb_resnet18_3.etlt_b1_gpu0_fp16.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_rgb       3x3x224x224     
1   OUTPUT kFLOAT fc_pred         2               

0:00:11.659249247  6105 0x5652d74f6120 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:config_infer_primary_3d_action.txt sucessfully
Running...
ERROR from element preprocess-plugin: Configuration file parsing failed
Error details: gstnvdspreprocess.cpp(372): gst_nvdspreprocess_start (): /GstPipeline:preprocess-test-pipeline/GstNvDsPreProcess:preprocess-plugin:
Config file path: config_preprocess_3d_custom.txt
Returned, stopping playback

This is the config_preprocess_3d_custom.txt file -

[property]
enable=1
target-unique-ids=1

# network-input-shape: batch, channel, sequence, height, width
# 3D sequence of 64 images
#network-input-shape= 4;3;32;224;224

# 3D sequence of 32 images
network-input-shape= 1;3;16;112;112

    # 0=RGB, 1=BGR, 2=GRAY
network-color-format=0
    # 0=NCHW, 1=NHWC, 2=CUSTOM
network-input-order=2
    # 0=FP32, 1=UINT8, 2=INT8, 3=UINT32, 4=INT32, 5=FP16
tensor-data-type=0
tensor-name=0

processing-width=112
processing-height=112

    # 0=NVBUF_MEM_DEFAULT 1=NVBUF_MEM_CUDA_PINNED 2=NVBUF_MEM_CUDA_DEVICE
    # 3=NVBUF_MEM_CUDA_UNIFIED  4=NVBUF_MEM_SURFACE_ARRAY(Jetson)
scaling-pool-memory-type=0

    # 0=NvBufSurfTransformCompute_Default 1=NvBufSurfTransformCompute_GPU
    # 2=NvBufSurfTransformCompute_VIC(Jetson)
scaling-pool-compute-hw=0

    # Scaling Interpolation method
    # 0=NvBufSurfTransformInter_Nearest 1=NvBufSurfTransformInter_Bilinear 2=NvBufSurfTransformInter_Algo1
    # 3=NvBufSurfTransformInter_Algo2 4=NvBufSurfTransformInter_Algo3 5=NvBufSurfTransformInter_Algo4
    # 6=NvBufSurfTransformInter_Default
scaling-filter=0

    # model input tensor pool size
tensor-buf-pool-size=8

custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvds_custom_sequence_preprocess.so
#custom-lib-path=./custom_sequence_preprocess/libnvds_custom_sequence_preprocess.so
custom-tensor-preparation-function=CustomSequenceTensorPreparation

# 3D conv custom params
[user-configs]
channel-scale-factors=0.007843137;0.007843137;0.007843137
channel-mean-offsets=110.79;103.3;96.26
stride=1
subsample=0

[group-0]
src-ids=0;1
process-on-roi=1
roi-params-src-0=0;0;1280;720
roi-params-src-1=0;0;1280;720
#roi-params-src-2=0;0;1280;720
#roi-params-src-3=0;0;1280;720


config_infer_primary_3d_action.txt -


[property]
gpu-id=0

tlt-encoded-model=/home/divya/computer_vision/cv_samples_v1.4.0/action_recognition_net/results/export/rgb_resnet18_3.etlt
tlt-model-key=nvidia_tao
model-engine-file=./resnet18_3d_rgb_hmdb5_32.etlt_b4_gpu0_fp16.engine

labelfile-path=labels.txt
batch-size=1
process-mode=1

# requires preprocess metadata input
input-tensor-from-meta=1

## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
gie-unique-id=1

# 1: classifier, 100: custom
network-type=1

# Let application to parse the inference tensor output
output-tensor-meta=1
tensor-meta-pool-size=8

Hi,
This looks more related to DeepStream. We are moving this post to the DeepStream forum to get better help.
Thank you.

Sure. Any help?

Did you use the built-in 3D/2D model?
I see you changed
#network-input-shape= 4;3;32;224;224

#3D sequence of 32 images
network-input-shape= 1;3;16;112;112
From the log,
0 INPUT kFLOAT input_rgb 3x3x224x224
the model resolution is 224*224

Thanks for your reply @Amycao.
This is a custom 3D model trained with TAO. At first I was getting this error when I ran it in DeepStream -

WARNING: nvdsinfer_backend.cpp:157 Backend context bufferIdx(0) request dims:1x96x224x224 is out of range, [min: 1x9x224x224, max: 1x9x224x224]
ERROR: nvdsinfer_backend.cpp:472 Failed to enqueue buffer in fulldims mode because binding idx: 0 with batchDims: 1x96x224x224 is not supported
ERROR: nvdsinfer_context_impl.cpp:1711 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_INVALID_PARAMS
0:00:18.131974696 4773 0x55baf1f3acc0 WARN nvinfer gstnvinfer.cpp:2009:gst_nvinfer_process_tensor_input: error: Failed to queue input batch for inferencing
ERROR from element primary-nvinference-engine: Failed to queue input batch for inferencing
Error details: gstnvinfer.cpp(2009): gst_nvinfer_process_tensor_input (): /GstPipeline:preprocess-test-pipeline/GstNvInfer:primary-nvinference-engine
Returned, stopping playback
sequence_image_process.cpp:586, [INFO: CUSTOM_LIB] SequenceImagePreprocess is deinitializing
Deleting pipeline

Then I found some help via this forum thread - Issue with Deepstream Inference of custom 3D action recognition model -
and that's why I made the changes to network-input-shape.
Now the error is related to 'Configuration file parsing failed'.

Sharing my logs -

num-sources = 2
Now playing: file:///home/divya/Downloads/ride.mp4, file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_ride_bike.mov,
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1482 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-3d-action-recognition/./resnet18_3d_rgb_hmdb5_32.etlt_b4_gpu0_fp16.engine open error
0:00:00.913591142 10302 0x55556777ab20 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1888> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-3d-action-recognition/./resnet18_3d_rgb_hmdb5_32.etlt_b4_gpu0_fp16.engine failed
0:00:00.928120897 10302 0x55556777ab20 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1993> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-3d-action-recognition/./resnet18_3d_rgb_hmdb5_32.etlt_b4_gpu0_fp16.engine failed, try rebuild
0:00:00.928137094 10302 0x55556777ab20 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
0:00:12.639264889 10302 0x55556777ab20 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1946> [UID = 1]: serialize cuda engine to file: /home/divya/computer_vision/cv_samples_v1.4.0/action_recognition_net/results/export/rgb_resnet18_3.etlt_b2_gpu0_fp16.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 2
0   INPUT  kFLOAT input_rgb       3x3x224x224     min: 1x3x3x224x224   opt: 2x3x3x224x224   Max: 2x3x3x224x224   
1   OUTPUT kFLOAT fc_pred         2               min: 0               opt: 0               Max: 0               

0:00:12.657245064 10302 0x55556777ab20 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:config_infer_primary_3d_action.txt sucessfully
sequence_image_process.cpp:494, [INFO: CUSTOM_LIB] 3D custom sequence network info(NCSHW), [N: 4, C: 3, S: 32, H: 224, W:224]
sequence_image_process.cpp:522, [INFO: CUSTOM_LIB] Sequence preprocess buffer manager initialized with stride: 1, subsample: 0
sequence_image_process.cpp:526, [INFO: CUSTOM_LIB] SequenceImagePreprocess initialized successfully
Using user provided processing height = 224 and processing width = 224
Decodebin child added: source
Decodebin child added: decodebin0
Decodebin child added: source
Decodebin child added: decodebin1
Running...
Decodebin child added: qtdemux0
Decodebin child added: qtdemux1
Decodebin child added: multiqueue0
Decodebin child added: multiqueue1
Decodebin child added: h264parse0
Decodebin child added: h264parse1
Decodebin child added: capsfilter1
Decodebin child added: capsfilter0
Decodebin child added: aacparse0
Decodebin child added: aacparse1
Decodebin child added: avdec_aac0
Decodebin child added: avdec_aac1
Decodebin child added: nvv4l2decoder0
Decodebin child added: nvv4l2decoder1
In cb_newpad
In cb_newpad
In cb_newpad
In cb_newpad
0:00:13.618222376 10302 0x55556593d4c0 WARN                 nvinfer gstnvinfer.cpp:1915:gst_nvinfer_process_tensor_input:<primary-nvinference-engine> warning: nvinfer could not find input layer with name = 0

WARNING from element primary-nvinference-engine: nvinfer could not find input layer with name = 0

Warning: nvinfer could not find input layer with name = 0

0:00:13.652500218 10302 0x55556593d4c0 WARN                 nvinfer gstnvinfer.cpp:1915:gst_nvinfer_process_tensor_input:<primary-nvinference-engine> warning: nvinfer could not find input layer with name = 0

WARNING from element primary-nvinference-engine: nvinfer could not find input layer with name = 0

Warning: nvinfer could not find input layer with name = 0

0:00:13.685643092 10302 0x55556593d4c0 WARN                 nvinfer gstnvinfer.cpp:1915:gst_nvinfer_process_tensor_input:<primary-nvinference-engine> warning: nvinfer could not find input layer with name = 0

WARNING from element primary-nvinference-engine: nvinfer could not find input layer with name = 0

Warning: nvinfer could not find input layer with name = 0

These warning lines repeat until the video ends -

WARNING from element primary-nvinference-engine: nvinfer could not find input layer with name = 0

Warning: nvinfer could not find input layer with name = 0

0:00:20.915533028 10399 0x558304bc34c0 WARN                 nvinfer gstnvinfer.cpp:1915:gst_nvinfer_process_tensor_input:<primary-nvinference-engine> warning: nvinfer could not find input layer with name = 0

WARNING from element primary-nvinference-engine: nvinfer could not find input layer with name = 0

Warning: nvinfer could not find input layer with name = 0

Got EOS from stream 0
End of stream
Returned, stopping playback
sequence_image_process.cpp:586, [INFO: CUSTOM_LIB] SequenceImagePreprocess is deinitializing
Deleting pipeline

Q1 - What does this warning mean?
The tiler opens, but inference is not running on the input sources.
I have made some changes in config_preprocess_3d_custom.txt and config_infer_primary_3d_action.txt -

As mentioned in the article - https://developer.nvidia.com/blog/developing-and-deploying-your-custom-action-recognition-application-without-any-ai-expertise-using-tao-and-deepstream/
Line 13 defines the 5-dimension input shape required by the 3D model:

network-input-shape = 4;3;32;224;224

For this application, you are using four inputs each with one ROI:

Your batch number is 4 (# of inputs *  # of ROIs per input).
Your input is RGB so the number of channels is 3.
The sequence length is 32 and the input resolution is 224×224 (HxW).
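To spell out that arithmetic with the article's own example numbers (4 inputs, 1 ROI per input, a 32-frame sequence, 224×224 resolution), the resulting property would be:

# batch = (# of inputs) x (# of ROIs per input) = 4 x 1 = 4
# order: batch;channel;sequence;height;width
network-input-shape=4;3;32;224;224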

Edited config_preprocess_3d_custom.txt -

# network-input-shape: batch, channel, sequence, height, width
# 3D sequence of 64 images
network-input-shape= 4;3;32;224;224

The batch number is 4 because I have 2 input sources and 2 ROIs, hence (2 × 2).
Channels: 3.
The sequence length is 32 for 64 images.
Q2 - Will you explain what sequence exactly means here?
Height and width are defined as 224.


Edited config_infer_primary_3d_action.txt -
batch-size=2 (because of the number of input sources)

WARNING from element primary-nvinference-engine: nvinfer could not find input layer with name = 0

How about this property you set?
tensor-name=

You can take a look at this article about sequence models.

Thanks for your reply.
I have set this property as -

tensor-name=0

Hi @divyanka.thakur16,
According to the log, the model input and output are as shown below:

0   INPUT  kFLOAT input_rgb       3x3x224x224     min: 1x3x3x224x224   opt: 2x3x3x224x224   Max: 2x3x3x224x224   
1   OUTPUT kFLOAT fc_pred         2               min: 0               opt: 0               Max: 0  

Hence the input tensor-name should be input_rgb, and the sequence number is a batch of consecutive images the model needs in order to predict the action. This is different from the batch number, which you can set according to the number of sources.

So in this case the model is trained to take a sequence of 3 images of 224x224 at once to predict 2 classes of actions.
Hence set network-input-shape=1x3x3x224x224, which indicates batch_size x channels x sequence x height x width.
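Breaking that shape down by position (a sketch based on the engine dimensions quoted above, using the batch;channel;sequence;height;width order from the preprocess config comments):

# 1 x 3 x 3 x 224 x 224
#   batch    = 1    (set according to the number of sources/ROIs)
#   channel  = 3    (RGB)
#   sequence = 3    (consecutive frames the model consumes per prediction)
#   height   = 224
#   width    = 224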

Try these and check

Hi @rajatmr619, thanks for your reply.
I made the changes -

network-input-shape= 2x3x3x224x224

The output -

$ ./deepstream-3d-action-recognition -c deepstream_action_recognition_config.txt 
0:00:00.043797307  4608 0x55597a623c30 ERROR         nvdspreprocess nvdspreprocess_property_parser.cpp:275:nvdspreprocess_parse_property_group: Failed to parse config file config_preprocess_3d_custom.txt: Error while setting property, in group property Value “2x3x3x224x224” cannot be interpreted as a number.
NVDSPREPROCESS_CFG_PARSER: Group 'property' parse failed
num-sources = 2
Now playing: file:///home/divya/Downloads/ride.mp4, file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_ride_bike.mov,
0:00:01.496682802  4608 0x55597a623c30 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/divya/computer_vision/cv_samples_v1.4.0/action_recognition_net/results/export/rgb_resnet18_3.etlt_b1_gpu0_fp16.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_rgb       3x3x224x224     
1   OUTPUT kFLOAT fc_pred         2               

0:00:01.509079381  4608 0x55597a623c30 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1832> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:00:01.509191376  4608 0x55597a623c30 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2009> [UID = 1]: deserialized backend context :/home/divya/computer_vision/cv_samples_v1.4.0/action_recognition_net/results/export/rgb_resnet18_3.etlt_b1_gpu0_fp16.engine failed to match config params, trying rebuild
0:00:01.514054028  4608 0x55597a623c30 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
0:00:12.127033261  4608 0x55597a623c30 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1946> [UID = 1]: serialize cuda engine to file: /home/divya/computer_vision/cv_samples_v1.4.0/action_recognition_net/results/export/rgb_resnet18_3.etlt_b2_gpu0_fp16.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 2
0   INPUT  kFLOAT input_rgb       3x3x224x224     min: 1x3x3x224x224   opt: 2x3x3x224x224   Max: 2x3x3x224x224   
1   OUTPUT kFLOAT fc_pred         2               min: 0               opt: 0               Max: 0               

0:00:12.143642821  4608 0x55597a623c30 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:config_infer_primary_3d_action.txt sucessfully
Running...
ERROR from element preprocess-plugin: Configuration file parsing failed
Error details: gstnvdspreprocess.cpp(372): gst_nvdspreprocess_start (): /GstPipeline:preprocess-test-pipeline/GstNvDsPreProcess:preprocess-plugin:
Config file path: config_preprocess_3d_custom.txt
Returned, stopping playback
Deleting pipeline

Hi @divyanka.thakur16, I think that solved the tensor-name error. Can you share all three config files you used to run this and get this log?
Meanwhile, try this preprocess config file with batch size = 1, send only one stream, and check.
config_preprocess_3d_custom.txt (1.7 KB)

Thanks

Thanks for your reply.
I tried executing it with one stream and it's the same error -

./deepstream-3d-action-recognition -c deepstream_action_recognition_config.txt 
0:00:00.381007979  4172 0x55ed9c818d50 ERROR         nvdspreprocess nvdspreprocess_property_parser.cpp:275:nvdspreprocess_parse_property_group: Failed to parse config file config_preprocess_3d_custom.txt: Error while setting property, in group property Value “1x3x3x224x224” cannot be interpreted as a number.
NVDSPREPROCESS_CFG_PARSER: Group 'property' parse failed
num-sources = 1
Now playing: file:///home/divya/Downloads/bike1.mp4,
0:00:02.956866902  4172 0x55ed9c818d50 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/divya/computer_vision/cv_samples_v1.4.0/action_recognition_net/results/export/rgb_resnet18_3.etlt_b1_gpu0_fp16.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_rgb       3x3x224x224     
1   OUTPUT kFLOAT fc_pred         2               

0:00:02.972847044  4172 0x55ed9c818d50 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1832> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:00:02.972874620  4172 0x55ed9c818d50 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2009> [UID = 1]: deserialized backend context :/home/divya/computer_vision/cv_samples_v1.4.0/action_recognition_net/results/export/rgb_resnet18_3.etlt_b1_gpu0_fp16.engine failed to match config params, trying rebuild
0:00:02.977844370  4172 0x55ed9c818d50 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
0:00:14.800235624  4172 0x55ed9c818d50 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1946> [UID = 1]: serialize cuda engine to file: /home/divya/computer_vision/cv_samples_v1.4.0/action_recognition_net/results/export/rgb_resnet18_3.etlt_b2_gpu0_fp16.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 2
0   INPUT  kFLOAT input_rgb       3x3x224x224     min: 1x3x3x224x224   opt: 2x3x3x224x224   Max: 2x3x3x224x224   
1   OUTPUT kFLOAT fc_pred         2               min: 0               opt: 0               Max: 0               

0:00:14.816235268  4172 0x55ed9c818d50 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:config_infer_primary_3d_action.txt sucessfully
Running...
ERROR from element preprocess-plugin: Configuration file parsing failed
Error details: gstnvdspreprocess.cpp(372): gst_nvdspreprocess_start (): /GstPipeline:preprocess-test-pipeline/GstNvDsPreProcess:preprocess-plugin:
Config file path: config_preprocess_3d_custom.txt
Returned, stopping playback
Deleting pipeline

Attaching the config files -
deepstream_action_recognition_config.txt (2.6 KB)
config_infer_primary_3d_action.txt (3.3 KB)
config_preprocess_3d_custom.txt (3.0 KB)

Hi @divyanka.thakur16,
I have made some minor changes in the config files; use these three config files and check once again.
If you still get any errors then, if possible, share the model file so that I can check on my end.
In my last comment I missed the correct syntax for network-input-shape: it should have been network-input-shape=1;3;3;224;224 (semicolon-separated, not 'x'). I have made those changes in these config files.
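For reference, the key lines in config_preprocess_3d_custom.txt after that fix would look roughly like this (a sketch only: tensor-name comes from the INPUT layer reported in your engine log, and the batch of 1 assumes a single test stream with one ROI):

network-input-shape=1;3;3;224;224
    # batch;channel;sequence;height;width
tensor-name=input_rgb
    # must match the engine's INPUT layer name
processing-width=224
processing-height=224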
config_infer_primary_3d_action.txt (3.3 KB)
config_preprocess_3d_custom.txt (2.9 KB)
deepstream_action_recognition_config.txt (2.6 KB)

Thanks for your reply. I made the given changes and it’s the same error.
Would you explain why you made these changes?

display-sync=0 
debug=2

Sharing the Google Drive link to the model file -
https://drive.google.com/drive/folders/1Vb6oFd_Bwkrdu7dvJDYAKBlOdo-Gm_M6?usp=sharing

Hi @divyanka.thakur16, the model is running successfully on my system. Make sure you have generated libnvds_custom_sequence_preprocess.so inside the custom_sequence_preprocess folder by running the Makefile using sudo CUDA_VER=<your CUDA version> make, for example
sudo CUDA_VER=11.4 make

The path of that folder is /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-3d-action-recognition/custom_sequence_preprocess
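Putting the build steps together (a sketch assuming the standard sample-app layout; substitute the DeepStream directory and CUDA version that match your install, e.g. DeepStream 6.1 typically pairs with CUDA 11.6):

# build the custom sequence preprocess library referenced by custom-lib-path
cd /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-3d-action-recognition/custom_sequence_preprocess
sudo CUDA_VER=11.6 make
# the resulting libnvds_custom_sequence_preprocess.so is what the preprocess config should point to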

I have not made any changes to the config files; they are the same ones I shared in the last post.

As you asked: display-sync is used to sync the source and sink frames; to avoid any frame drops at the sink I have set it to 0.
debug=2 is used to log everything on the terminal for more information when we want to debug the problem (it just indicates a logging level).
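In the app config those two lines sit together, roughly like this (a sketch; I am assuming the sample app's [action-recognition] group name here, so check it against your deepstream_action_recognition_config.txt):

[action-recognition]
    # do not sync the sink to the pipeline clock, so no frames are dropped at the sink
display-sync=0
    # verbose log level for troubleshooting
debug=2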

Try this and check the pipeline.
Thank you

Thanks for your help @rajatmr619!
The code runs now.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.