Now I need to implement the same functionality in C++: receive RTSP data and decode it into the format required by the facial recognition algorithm. Is there a sample I can use as a reference? I found a GStreamer sample that compiles fine, but after running it there is no preview and no data is read.
Output from the C++ program's pipeline:
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
Stream format not found, dropping the frame
Stream format not found, dropping the frame
If there is a GStreamer-based sample that receives RTSP streams, decodes them, and converts the format, please share it so I can use it as a reference.
Best Regards
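Below is a minimal sketch (not an official NVIDIA sample) of the kind of pipeline being asked for here, assuming an H.264 RTSP camera and an application that wants decoded BGRx frames in system memory: rtspsrc ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! appsink, with frames pulled in an appsink callback. The RTSP URL, the BGRx caps, and the appsink settings are placeholders to adapt. The "Stream format not found" message from the decoder is often a sign that h264parse is missing before nvv4l2decoder, or that the camera is actually sending H.265.

```cpp
// Minimal sketch, not an official sample: RTSP -> hardware decode -> BGRx
// frames delivered to application code through appsink.
// The RTSP URL and the BGRx/appsink caps below are placeholders.
#include <gst/gst.h>
#include <gst/app/gstappsink.h>

// Called by appsink each time a decoded, converted frame is available.
static GstFlowReturn on_new_sample(GstAppSink *sink, gpointer /*user_data*/) {
    GstSample *sample = gst_app_sink_pull_sample(sink);
    if (!sample)
        return GST_FLOW_ERROR;

    GstBuffer *buffer = gst_sample_get_buffer(sample);
    GstMapInfo map;
    if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
        // map.data / map.size now hold one BGRx frame; hand it to the
        // face-recognition pre-processing here.
        g_print("Got frame, %" G_GSIZE_FORMAT " bytes\n", map.size);
        gst_buffer_unmap(buffer, &map);
    }
    gst_sample_unref(sample);
    return GST_FLOW_OK;
}

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    // h264parse before nvv4l2decoder matters: without it the decoder may not
    // see SPS/PPS and can drop frames.
    const gchar *desc =
        "rtspsrc location=rtsp://<camera-ip>/stream latency=200 ! "
        "rtph264depay ! h264parse ! nvv4l2decoder ! "
        "nvvidconv ! video/x-raw,format=BGRx ! "
        "appsink name=sink emit-signals=true sync=false max-buffers=2 drop=true";

    GError *error = nullptr;
    GstElement *pipeline = gst_parse_launch(desc, &error);
    if (!pipeline) {
        g_printerr("Failed to create pipeline: %s\n", error->message);
        g_error_free(error);
        return -1;
    }

    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
    g_signal_connect(sink, "new-sample", G_CALLBACK(on_new_sample), nullptr);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Block until an error or end-of-stream is posted on the bus.
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(sink);
    gst_object_unref(pipeline);
    return 0;
}
```

Build with something like `g++ rtsp_appsink.cpp -o rtsp_appsink $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-app-1.0)` and swap in the real camera URL. If the camera streams H.265, replace rtph264depay/h264parse with rtph265depay/h265parse.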
I checked the files under /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector and found that there is no resnet10.caffemodel-b30_gpu0uint8.engine:
nvidia@ubuntu:/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector$ ls
cal_trt.bin labels.txt resnet10.caffemodel resnet10.prototxt
Hi,
Please run the command with sudo since the path needs root permission, or copy the whole deepstream folder to a path that does not require root permission (such as the home directory).
Hi, @DaneLLL
I added sudo in front of the deepstream-app command, but still got the same error. Then I copied the deepstream-6.3 directory to /home/nvidia and ran it from there, but still got the same error:
sudo deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
WARNING: Deserialize engine failed because file path: /home/nvidia/code/deepstream-6.3/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:00:03.884848352 5694 0xaaab072ba860 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 6]: deserialize engine from file :/home/nvidia/code/deepstream-6.3/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:00:04.110614272 5694 0xaaab072ba860 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 6]: deserialize backend context from engine from file :/home/nvidia/code/deepstream-6.3/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:00:04.110748992 5694 0xaaab072ba860 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 6]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
Hi,
We don't observe the issue. The engine file should be generated on the first run. It is a bit strange that you cannot run the default sample successfully. Are you able to re-flash the system and try again?