• Jetson Orin Nano
• DS 7.0.0
• JP 6.0.0 v2
Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2188> [UID =
1]: deserialize backend context from engine from file :/path/to/d0_avlp_544_960_f32.engine failed, try rebuild
Even though the .engine file was built and saved to the path successfully.
What could be the troubleshooting steps? Thanks.
Hi,
nvinfer saves the engine after generating it, but the default location is under /opt/, and the default user doesn't have permission to write there. You can try running the application with sudo or modify the permissions of the destination folder.
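As a quick sanity check (the directory below is just the placeholder path from the log above; substitute the folder your model-engine-file actually points to), you can verify and, if needed, relax the permissions:

ls -ld /path/to/                 # the user running the pipeline needs write access here
sudo chmod -R 775 /path/to/      # or chown the folder to that user instead of running with sudo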
This may be caused by a mismatch between your engine file and the batch-size/network-mode in the gie configuration file.
Refer to this documentation: Gst-nvinfer — DeepStream 6.4 documentation
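As a rough check (assuming your gie config is a plain text file, called config_infer_primary.txt here purely as an example), you can print the properties that have to agree with the cached engine:

grep -E '^(model-engine-file|batch-size|network-mode)=' config_infer_primary.txt
# network-mode: 0=FP32, 1=INT8, 2=FP16 - if any of these differ from what the engine was built with, nvinfer discards it and rebuilds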
Thanks for your responses guys!
The folder that I am using has all the permissions (775).
The cfg is not changed between the runs. So the first run creates an .engine file, yet on the second run, with no modifications at all, the engine is rebuilt again.
Anything else I can check?
The engine file created by DeepStream is usually named xxxx_bX_gpuX_int8.engine, such as resnet18_trafficcamnet_pruned.onnx_b1_gpu0_int8.engine.
Do you modify the batch-size property of nvinfer in the program dynamically? If the batch-size does not match the engine file, it will be rebuilt.
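One way to compare the two (paths and file names are illustrative) is to look at the _bN_ part of the cached engine name next to the configured batch size:

ls /path/to/*.engine                              # e.g. ..._b1_gpu0_fp32.engine was built with batch-size=1
grep '^batch-size=' config_infer_primary.txt      # should report the same value, otherwise nvinfer rebuilds the engine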
I keep it default, but does it matter?
No, they are set in the configs
This is true. If the model-engine-file does not match the batch-size/network-mode/precision (int8/fp32, ...), it will be rebuilt every time it is run.
After the engine is saved, make sure to update the config file to match the saved engine’s name.
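For example, with the engine name mentioned earlier, the [property] section of the gie config would contain something like this (path and values shown only as an illustration):

model-engine-file=resnet18_trafficcamnet_pruned.onnx_b1_gpu0_int8.engine
batch-size=1
network-mode=1   # INT8, matching the _b1 and _int8 parts of the engine name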
I resolved this issue by running the following command:
sudo chown -R nvidia:nvidia /opt/nvidia/deepstream/deepstream
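Afterwards you can confirm the ownership change took effect with, for example:

ls -ld /opt/nvidia/deepstream/deepstream   # should now show nvidia:nvidia as owner and group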