Using a custom action recognition model in DeepStream 3D action recognition and getting an error

Following is the complete information as applicable to my setup.

• Hardware Platform: RTX 3060 GPU
• DeepStream Version: 6.0
• TensorRT Version: 8.5.3-1+cuda11.8
• NVIDIA GPU Driver Version: 525
• Issue Type: bug
• I trained the 3D action recognition model and exported the .tlt and .etlt files for my custom model. But after updating the configuration files, the custom model is not working and gives me an error.
Below are the configuration files and the error log.
Looking forward to your help.
Thanks

safety_labels.txt (95 Bytes)
error_log.txt (3.8 KB)
config_preprocess_safety_3d_custom.txt (3.0 KB)
deepstream_safety_action_recognition_config.txt (2.5 KB)

config_infer_primary_3d_safety_action.txt (3.3 KB)

I have trained a custom 3D action recognition model and now want to deploy it on DeepStream, but I am getting errors. Please help me.
The pretrained model works perfectly on my system.
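For reference, the nvinfer properties in config_infer_primary_3d_safety_action.txt that point at a TAO-encoded model look roughly like the fragment below (the paths, engine filename, and key are placeholders, not the exact values from this setup):

```
[property]
tlt-encoded-model=/path/to/rgb_3d_safety_scratch_250_7.etlt
tlt-model-key=<the key passed to -k at export time>
model-engine-file=/path/to/rgb_3d_safety_scratch_250_7.etlt_b4_gpu0_fp16.engine
labelfile-path=safety_labels.txt
```

If tlt-encoded-model points at a missing or unreadable file, or tlt-model-key differs from the export key, engine generation fails at startup.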


Failed to open TLT encoded model file /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-3d-action-recognition/rgb_3d_safety_scratch_250_7.etlt
As the log shows, the app failed to open the model. Could you share the result of "ll /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-3d-action-recognition/rgb_3d_safety_scratch_250_7.etlt"?

Thanks for your reply. I am not able to upload the .etlt file here, so I am sharing a Google Drive link instead. Can you please take it from the drive?

Drive link: rgb_3d_safety_scratch_v250_7.etlt - Google Drive

Let me know if you need anything else; looking forward to your reply.

Best Regards,

Testing the model, the app also failed to generate the engine. Here is the error log:
log-1205.txt (2.4 KB)

  1. Please make sure the .etlt model is complete and the tlt-model-key is correct.
  2. To narrow down this issue, please check whether converting with the TensorRT tao-converter tool works:
wget --content-disposition 'https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-converter/versions/v4.0.0_trt8.5.2.2_x86/files/tao-converter' -O tao-converter
chmod 755 tao-converter
./tao-converter -k nvidia_tao -t fp16 -p input_of,1x2x32x224x224,4x2x32x224x224,4x2x32x224x224 -e 1.engine rgb_3d_safety_scratch_v250_7.etlt
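If tao-converter succeeds, the resulting engine can be sanity-checked with TensorRT's trtexec tool before going back to DeepStream. A minimal sketch, assuming the typical trtexec install path (adjust it to your system):

```shell
# Typical trtexec location in a TensorRT package install; adjust if needed.
TRTEXEC=/usr/src/tensorrt/bin/trtexec
if [ -x "$TRTEXEC" ]; then
  # Deserializes 1.engine and runs a quick inference pass;
  # a clean exit means the engine itself is loadable.
  "$TRTEXEC" --loadEngine=1.engine
else
  echo "trtexec not found at $TRTEXEC"
fi
```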

This is how I exported my custom etlt model

log.txt (802 Bytes)
After testing "./tao-converter -k nvidia_tao -t fp16 -b 4 -d 2,32,224,224 -e 1.engine rgb_3d_safety_scratch_v250_7.etlt", tao-converter also failed to generate the engine.
Please compare the md5 value with the original .etlt and make sure the tlt-model-key is correct.
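To compare, compute the MD5 sum of the exported file on the training machine and of the copy DeepStream loads; if the two hashes differ, the file was corrupted in transfer. A minimal sketch with stand-in files (substitute your actual .etlt paths on the two machines):

```shell
# Stand-ins for the exported model and the transferred copy;
# in practice, run md5sum on the real .etlt on each machine.
printf 'dummy-model-bytes' > original.etlt
cp original.etlt transferred.etlt

a=$(md5sum original.etlt | cut -d' ' -f1)
b=$(md5sum transferred.etlt | cut -d' ' -f1)
if [ "$a" = "$b" ]; then
  echo "checksums match"
else
  echo "file corrupted in transfer"
fi
```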

[ERROR] Failed to parse the model, please check the encoding key to make sure it’s correct

How do I check whether the tlt-model-key is correct or not?

What is the value of that $KEY?

Did you retrain the model based on Action Recognition Net | NVIDIA NGC?

Yes, I retrained the model based on the notebooks provided by the NVIDIA TAO Toolkit.
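For what it's worth, the export cell in the sample TAO action recognition notebook looks roughly like the sketch below (the spec file, model paths, and KEY value are placeholders, not the exact setup here). The crucial point is that the value passed to -k at export time is exactly the key that tao-converter and the DeepStream tlt-model-key must use later:

```shell
# Placeholder key and paths; the -k value here must match tlt-model-key
# in the DeepStream nvinfer config and the -k passed to tao-converter.
KEY=nvidia_tao
if command -v tao >/dev/null 2>&1; then
  tao action_recognition export \
      -e specs/export_rgb.yaml \
      -k "$KEY" \
      model=results/rgb_3d_ptm/rgb_only_model.tlt \
      filename=results/export/rgb_3d_model.etlt
else
  echo "tao CLI not available on this machine"
fi
```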

Is there any other way to check whether the .etlt file is correct or not? @fanzh

Please check the commands.
Refer to
ActionRecognitionNet - NVIDIA Docs and https://github.com/NVIDIA-AI-IOT/tao_toolkit_recipes/tree/main/tao_action_recognition/tensorrt_inference.

Moving to the TAO forum.

How can I track the question in the TAO forum?

I have already moved this topic to the TAO Toolkit forum. I am reproducing the error and will let you know the result.

Okay, looking forward to it. Thanks.