Warning: Deserialize engine failed because file path

• Hardware Platform: Jetson AGX Orin Developer Kit (64GB)
• DeepStream Version: 6.3.0
• JetPack Version: 5.1.2-b104
• TensorRT Version: 8.5
• Issue Type (question): Can the underlying issues generating these warnings be fixed?
• Issue reproduction
a. Compiled deepstream-test1-app.c using the Makefile - successful
b. Executed the program - successful, but with multiple warning messages:
1. WARNING: Deserialize engine failed because file path:
2. 0:00:03.317104692 17194 0xaaab01f9fad0 WARN
3. 0:00:03.497392510 17194 0xaaab01f9fad0 WARN
4. 0:00:03.497479006 17194 0xaaab01f9fad0 INFO
5. WARNING: [TRT]: The implicit batch dimension mode has been deprecated.
6. WARNING: [TRT]: Unknown embedded device detected.

/opt/nvidia/…/…/…/deepstream-test1$ ./deepstream-test1-app dstest1_config.yml

OUTPUT to terminal:

Added elements to bin
Using file: dstest1_config.yml
Opening in BLOCKING MODE 
WARNING: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:03.317104692 17194 0xaaab01f9fad0 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:03.497392510 17194 0xaaab01f9fad0 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:03.497479006 17194 0xaaab01f9fad0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
WARNING: [TRT]: Unknown embedded device detected. Using 59660MiB as the allocation cap for memory on embedded devices.
     .
     .
End of stream
Returned, stopping playback
Deleting pipeline

Does your account have write permission for the folder /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/? When deepstream_test1 runs for the first time, it creates an engine file in this directory, so there will be an access issue if your default account has no write permission there.
You can also add “sudo” and try again.
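A quick way to check this (a minimal sketch; the folder path is taken from the warnings above, so adjust it if your install differs):

# Show the owner and permissions of the model folder
ls -ld /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/

# Report whether the current account can write there
[ -w /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/ ] && echo "writable" || echo "not writable"

# Or run the sample with elevated privileges so the engine file can be created
sudo ./deepstream-test1-app dstest1_config.yml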

Another option:

sudo chmod -R /opt/nvidia/deepstream/deepstream

The *.engine file is generated by DeepStream from the model files on the first run.
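To confirm that the rebuilt engine was cached, you can check for the file after a successful run (a sketch; the file name is copied from the warning messages above and may differ on other setups):

# The serialized engine should appear next to the model after the first successful run
ls -l /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine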

  1. Modified the Primary_Detector directory permissions to rwxrwxrwx (owner root), as in the command sketch after this list.
  2. Ran the app with my home account “steven”.
  3. Issue is resolved.

Do you think I missed a step in the installation instructions, or didn’t follow the README instructions correctly? Did I mess up somewhere?
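For reference, a command sketch of the permission change described in step 1, assuming the directory path shown in the warnings above:

# Give read/write/execute to owner, group, and others on the model directory and its contents
sudo chmod -R 777 /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/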

I don’t completely understand what you want me to do:

chmod: missing operand after ‘/opt/nvidia/deepstream/deepstream’
Try ‘chmod --help’ for more information.

Yingliu’s suggestion provided the expected output. However, would your solution, if it had worked, have been an easier way to implement the fix?

The goal of both approaches is to give you permission to write in the folder /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/. So either use sudo, or change the ownership or permissions of this folder so your account can write to it; if you change ownership with chown, your account name goes after the -R option, and with chmod the permission mode does:
sudo chmod -R 777 /opt/nvidia/deepstream/deepstream
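If you would rather change ownership than open up permissions for everyone, a sketch using chown (assuming the account name “steven” mentioned earlier in this thread):

# Make your account the owner of the model folder so the engine file can be written without sudo
sudo chown -R steven:steven /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/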

Sorry, I missed some parameters. yingliu gives the complete command line:

sudo chmod -R 777 /opt/nvidia/deepstream/deepstream

yingliu & junshengy - Thanks!!!