Please provide complete information as applicable to your setup.
• Jetson Nano B01
• DeepStream 5.1
• JetPack Version 4.5
• TensorRT Version 7.1.3
• Issue type: Question
• Requirement details: Reduce or remove the initialization time of the DeepStream samples
Hi,
I would like to know how to avoid the long DeepStream initialization when I run a DeepStream sample (deepstream-test2 here) using the Python API.
When I run it, it takes around 3-5 minutes before the video analysis starts.
During this initialization, it reports WARN and INFO messages saying that INT8 is not supported by the hardware (Jetson Nano B01), so it converts to FP16.
I would like to know how to configure FP16 beforehand in order to avoid these initial minutes before video detection starts.
The reported messages are:
Creating Pipeline
Creating Source
Creating H264Parser
Creating Decoder
Creating EGLSink
Playing file /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline
Using winsys: x11
Opening in BLOCKING MODE
Opening in BLOCKING MODE
ERROR: Deserialize engine failed because file path: /home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:00:04.674692393 30230 0x3c13cad0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary3-nvinference-engine> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 4]: deserialize engine from file :/home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:00:04.674777655 30230 0x3c13cad0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary3-nvinference-engine> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 4]: deserialize backend context from engine from file :/home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:00:04.674811093 30230 0x3c13cad0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary3-nvinference-engine> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 4]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 1 output network tensors.
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_fp16.engine opened error
0:00:55.038285658 30230 0x3c13cad0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary3-nvinference-engine> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1744> [UID = 4]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 6x1x1
0:00:55.213622738 30230 0x3c13cad0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary3-nvinference-engine> [UID 4]: Load new model:/home/userX/workspace/project/subfolder/software/deepstream-test2/dstest2_sgie3_config.txt sucessfully
ERROR: Deserialize engine failed because file path: /home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:00:55.222193094 30230 0x3c13cad0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 3]: deserialize engine from file :/home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:00:55.222235074 30230 0x3c13cad0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 3]: deserialize backend context from engine from file :/home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:00:55.222264293 30230 0x3c13cad0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 3]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 1 output network tensors.
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_fp16.engine opened error
0:01:45.421112274 30230 0x3c13cad0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1744> [UID = 3]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 20x1x1
0:01:45.485252441 30230 0x3c13cad0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary2-nvinference-engine> [UID 3]: Load new model:/home/userX/workspace/project/subfolder/software/deepstream-test2/dstest2_sgie2_config.txt sucessfully
ERROR: Deserialize engine failed because file path: /home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:01:45.489505847 30230 0x3c13cad0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 2]: deserialize engine from file :/home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:01:45.489571369 30230 0x3c13cad0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 2]: deserialize backend context from engine from file :/home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:01:45.489607516 30230 0x3c13cad0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 2]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 1 output network tensors.
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_fp16.engine opened error
0:02:28.062140995 30230 0x3c13cad0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1744> [UID = 2]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 12x1x1
0:02:28.119436929 30230 0x3c13cad0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary1-nvinference-engine> [UID 2]: Load new model:/home/userX/workspace/project/subfolder/software/deepstream-test2/dstest2_sgie1_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
ERROR: Deserialize engine failed because file path: /home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:02:28.806579538 30230 0x3c13cad0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:02:28.806630529 30230 0x3c13cad0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:02:28.806664904 30230 0x3c13cad0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
The main reported message is:
WARNING: INT8 not supported by platform. Trying FP16 mode.
Should I change the engine file name from
resnet10.caffemodel_b1_gpu0_int8.engine
to
resnet10.caffemodel_b1_gpu0_fp16.engine
in the config files?
Hi,
YES. Please update the engine file name.
Since Nano doesn't support INT8, it will switch to FP16 mode and save the FP16 engine.
So you will need to update the file name in the config accordingly.
Thanks.
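For reference, the rename lives in the `model-engine-file` key of each nvinfer config file (dstest2_pgie_config.txt and the three sgie configs). Setting `network-mode=2` at the same time tells nvinfer to build an FP16 engine directly instead of attempting INT8 first. A sketch of the relevant lines; the path placeholder is illustrative, so keep whatever relative path your config already uses:

```ini
[property]
# Illustrative: point at the FP16 engine that the first (slow) run writes out.
model-engine-file=<your-models-path>/Primary_Detector_Nano/resnet10.caffemodel_b1_gpu0_fp16.engine
# network-mode: 0=FP32, 1=INT8, 2=FP16. Selecting 2 avoids the
# "INT8 not supported by platform. Trying FP16 mode." fallback on Nano.
network-mode=2
```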
Right, but what is the file name?
In dstest2_pgie_config.txt
I replaced:
- From:
resnet10.caffemodel_b1_gpu0_int8.engine
- To:
resnet10.caffemodel_b1_gpu0_fp16.engine
In dstest2_sgie1_config.txt
I replaced:
- From:
resnet18.caffemodel_b16_gpu0_int8.engine
- To:
resnet18.caffemodel_b16_gpu0_fp16.engine
In dstest2_sgie2_config.txt
I replaced:
- From:
resnet18.caffemodel_b16_gpu0_int8.engine
- To:
resnet18.caffemodel_b16_gpu0_fp16.engine
In dstest2_sgie3_config.txt
I replaced:
- From:
resnet18.caffemodel_b16_gpu0_int8.engine
- To:
resnet18.caffemodel_b16_gpu0_fp16.engine
The result is practically the same:
Using winsys: x11
Opening in BLOCKING MODE
Opening in BLOCKING MODE
ERROR: Deserialize engine failed because file path: /home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_fp16.engine open error
0:00:04.542755434 22494 0xde8a0d0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary3-nvinference-engine> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 4]: deserialize engine from file :/home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_fp16.engine failed
0:00:04.542847519 22494 0xde8a0d0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary3-nvinference-engine> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 4]: deserialize backend context from engine from file :/home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_fp16.engine failed, try rebuild
0:00:04.542882936 22494 0xde8a0d0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary3-nvinference-engine> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 4]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 1 output network tensors.
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_fp16.engine opened error
0:01:02.492265213 22494 0xde8a0d0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary3-nvinference-engine> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1744> [UID = 4]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 6x1x1
0:01:02.823374975 22494 0xde8a0d0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary3-nvinference-engine> [UID 4]: Load new model:/home/userX/workspace/project/subfolder/software/deepstream-test2/dstest2_sgie3_config.txt sucessfully
ERROR: Deserialize engine failed because file path: /home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_fp16.engine open error
0:01:02.824899480 22494 0xde8a0d0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 3]: deserialize engine from file :/home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_fp16.engine failed
0:01:02.824943335 22494 0xde8a0d0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 3]: deserialize backend context from engine from file :/home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_fp16.engine failed, try rebuild
0:01:02.824971981 22494 0xde8a0d0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 3]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 1 output network tensors.
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_fp16.engine opened error
0:01:54.168805869 22494 0xde8a0d0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1744> [UID = 3]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 20x1x1
0:01:54.321397037 22494 0xde8a0d0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary2-nvinference-engine> [UID 3]: Load new model:/home/userX/workspace/project/subfolder/software/deepstream-test2/dstest2_sgie2_config.txt sucessfully
ERROR: Deserialize engine failed because file path: /home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_fp16.engine open error
0:01:54.331717423 22494 0xde8a0d0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 2]: deserialize engine from file :/home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_fp16.engine failed
0:01:54.331801018 22494 0xde8a0d0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 2]: deserialize backend context from engine from file :/home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_fp16.engine failed, try rebuild
0:01:54.331863050 22494 0xde8a0d0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 2]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 1 output network tensors.
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_fp16.engine opened error
0:02:38.157595810 22494 0xde8a0d0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1744> [UID = 2]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 12x1x1
0:02:38.219250361 22494 0xde8a0d0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary1-nvinference-engine> [UID 2]: Load new model:/home/userX/workspace/project/subfolder/software/deepstream-test2/dstest2_sgie1_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
ERROR: Deserialize engine failed because file path: /home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b1_gpu0_fp16.engine open error
0:02:39.584103333 22494 0xde8a0d0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b1_gpu0_fp16.engine failed
0:02:39.584155313 22494 0xde8a0d0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/home/userX/workspace/project/subfolder/software/deepstream-test2/../../../../../../../opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b1_gpu0_fp16.engine failed, try rebuild
0:02:39.584188282 22494 0xde8a0d0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b1_gpu0_fp16.engine opened error
0:03:05.242365406 22494 0xde8a0d0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1744> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x272x480
1 OUTPUT kFLOAT conv2d_bbox 16x17x30
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x17x30
0:03:05.355077763 22494 0xde8a0d0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/home/userX/workspace/project/subfolder/software/deepstream-test2/dstest2_pgie_config.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
It seems that the FP16 engine file is not being saved: the serialize step reports an error, so the engine is not found on the next launch and gets rebuilt again.
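A quick way to see why the serialize step fails is to check who owns the target folder. A small sketch; the `/opt` path in the comment assumes a default DeepStream 5.1 install:

```shell
#!/bin/sh
# Print the owner of a directory. If it is root and the sample runs as a
# normal user, TensorRT's attempt to serialize the engine there will fail.
# On the Jetson you would call, e.g.:
#   owner_of /opt/nvidia/deepstream/deepstream-5.1/samples/models
owner_of() {
    stat -c '%U' "$1"
}

owner_of "${1:-.}"
```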
Hi,
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_fp16.engine opened error
It seems DeepStream tries to serialize the engine but fails.
So the engine still has to be rebuilt at the next launch.
Our guess is that the failure is caused by missing write permission on the folder.
You may need to change the folder owner from root to your account to get write permission.
Thanks.
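The suggested fix can be sketched as follows. The check itself works on any directory; the `/opt` path and the `chown` line in the comments are assumptions for a default DeepStream 5.1 install, so adjust them to your setup:

```shell
#!/bin/sh
# Pre-flight check: can the current user create a file where TensorRT will
# try to serialize the rebuilt engine? This is exactly what fails with the
# "Serialize engine failed ... opened error" messages above.
check_writable() {
    dir=$1
    if touch "$dir/.write_test" 2>/dev/null; then
        rm -f "$dir/.write_test"
        echo "writable"
    else
        echo "not writable"
    fi
}

# On the Jetson you would check (and, if needed, take ownership of) the
# sample models folder, e.g.:
#   check_writable /opt/nvidia/deepstream/deepstream-5.1/samples/models
#   sudo chown -R "$USER":"$USER" /opt/nvidia/deepstream/deepstream-5.1/samples/models
check_writable "${1:-.}"
```

After taking ownership, the first launch still pays the full build cost once, but the serialized FP16 engines should then be reused on subsequent runs.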
Right,
I gave my user write permission on all folders in /opt/nvidia/deepstream/deepstream-5.1/samples/models/
and after a new launch, the FP16 engine files were successfully saved.
Thank you @AastaLLL!