Following is the complete information as applicable to my setup.
• Hardware Platform: RTX 3060 GPU
• DeepStream Version: 6.0
• TensorRT Version: 8.5.3-1+cuda11.8
• NVIDIA GPU Driver Version: 525
• Issue Type (questions, new requirements, bugs)
• I trained the 3D action recognition model and exported the .tlt and .etlt files for my custom model, but after updating the configuration files the custom model is not working and gives me an error.
Below are the configuration files and the error log.
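For quick reference, the nvinfer section I changed follows the sample app's 3D infer config (config_infer_primary_3d_action.txt); a minimal sketch is below, where the key, label file, and engine file name are placeholders rather than my exact values:

  [property]
  gpu-id=0
  tlt-encoded-model=/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-3d-action-recognition/rgb_3d_safety_scratch_250_7.etlt
  # must be the same key that was used when exporting the .etlt in TAO
  tlt-model-key=nvidia_tao
  model-engine-file=./rgb_3d_safety_scratch_250_7.etlt_b4_gpu0_fp16.engine
  labelfile-path=labels.txt
  batch-size=4
  # 2 = FP16 precision
  network-mode=2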
Looking forward to your help.
Thanks
I have trained a custom 3D action recognition model and now want to deploy it on DeepStream, but I am getting errors. Please help me…
The pretrained model is working perfectly in my system.
Failed to open TLT encoded model file /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-3d-action-recognition/rgb_3d_safety_scratch_250_7.etlt
As the log shows, the app failed to open the model. Could you share the result of “ll /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-3d-action-recognition/rgb_3d_safety_scratch_250_7.etlt”?
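In case “ll” is not set up as an alias on your machine, it is typically “ls -l”, i.e.:

  ls -l /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-3d-action-recognition/rgb_3d_safety_scratch_250_7.etlt

A “No such file or directory” result or a zero-byte size in that listing would explain why the app fails to open the model.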
log.txt (802 Bytes)
After testing “./tao-converter -k nvidia_tao -t fp16 -b 4 -d 2,32,224,224 -e 1.engine rgb_3d_safety_scratch_v250_7.etlt”, tao-converter also failed to generate the engine.
Please compare the MD5 value with the original .etlt and make sure tlt-model-key is correct.
[ERROR] Failed to parse the model, please check the encoding key to make sure it’s correct
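That tao-converter error usually indicates that the key passed with -k does not match the key used when the .etlt was exported. A sketch of the suggested check, where <path_to_exported_etlt> and <your_export_key> are placeholders:

  # compare the checksum of the copied file against the one produced by the TAO export
  md5sum rgb_3d_safety_scratch_v250_7.etlt
  md5sum <path_to_exported_etlt>/rgb_3d_safety_scratch_v250_7.etlt
  # if the checksums match, retry the conversion with the key that was used at export time
  ./tao-converter -k <your_export_key> -t fp16 -b 4 -d 2,32,224,224 -e 1.engine rgb_3d_safety_scratch_v250_7.etlt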