Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only): NA
• Issue Type (questions, new requirements, bugs): Questions
Hello,
We are trying to use the DeepStream Test5 app to do OTA model updates. I tested the provided example and it works perfectly. I then tried the same example with our custom model, which is YOLOv4, instead of the provided resnet10 Caffe model, and I get the errors below: the model does update, but the app then reports that it failed to parse bounding boxes and crashes with a segmentation fault.
I saw the following post, which says the segmentation fault is caused by the custom bbox parser, and I was wondering whether that issue has been resolved yet.
Thank you in advance!
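For context, here is roughly what the nvinfer config referenced by the override file contains. This is a sketch, not our exact file: the paths, class count, and parser/library names below are examples (the parser function is the one shipped with the TAO custom-parser library, so adjust to your build). My understanding is that without the `parse-bbox-func-name` and `custom-lib-path` entries, nvinfer falls back to its default resnet-style parser, which expects a coverage layer:

```ini
# infer_config.txt (sketch) -- nvinfer settings for a TAO YOLOv4 model
# with BatchedNMS outputs; paths and values are illustrative only.
[property]
gpu-id=0
net-scale-factor=1.0
model-engine-file=model.etlt_b1_gpu0_int8.engine
network-type=0
num-detected-classes=4
batch-size=1
# These two lines select the custom YOLOv4/BatchedNMS parser; without them
# the default parseBoundingBox() looks for a coverage layer and fails:
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infercustomparser_tao.so
```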
```
./deepstream-test5-app -c …/sources/apps/sample_apps/deepstream-test5/configs/test5_config_file_src_infer.txt -o …/sources/apps/sample_apps/deepstream-test5/configs/test5_ota_override_config.txt
REAL PATH = /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-test5/configs/test5_ota_override_config.txt
Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
0:00:04.653483136 10534 0x78d2070 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/samples/models/model_test/release/model.etlt_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 5
0 INPUT kFLOAT Input 3x736x1280
1 OUTPUT kINT32 BatchedNMS 1
2 OUTPUT kFLOAT BatchedNMS_1 200x4
3 OUTPUT kFLOAT BatchedNMS_2 200
4 OUTPUT kFLOAT BatchedNMS_3 200
0:00:04.653787552 10534 0x78d2070 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/samples/models/model_test/release/model.etlt_b1_gpu0_int8.engine
0:00:04.695085248 10534 0x78d2070 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/samples/models/model_test/release/infer_config.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.
** INFO: <bus_callback:194>: Pipeline ready
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:180>: Pipeline running
**PERF: FPS 0 (Avg) FPS 1 (Avg) FPS 2 (Avg) FPS 3 (Avg)
Mon May 2 17:27:15 2022
**PERF: 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00)
Mon May 2 17:27:20 2022
**PERF: 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00)
WARNING; playback mode used with URI [file:/opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4] not conforming to timestamp format; check README; using system-time
WARNING; playback mode used with URI [file:/opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4] not conforming to timestamp format; check README; using system-time
WARNING; playback mode used with URI [file:/opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4] not conforming to timestamp format; check README; using system-time
WARNING; playback mode used with URI [file:/opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4] not conforming to timestamp format; check README; using system-time
Mon May 2 17:27:25 2022
**PERF: 7.20 (7.12) 7.20 (7.12) 7.20 (7.12) 7.20 (7.12)
File test5_ota_override_config.txt modified.
New Model Update Request primary_gie ----> /opt/nvidia/deepstream/deepstream-6.0/samples/models/model_test/release/model.etlt_b1_gpu0_int8.engine
0:00:16.550898944 10534 0x7ea41ca960 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:17.250502496 10534 0x7ea41ca960 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/samples/models/model_test/release/model.etlt_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 5
0 INPUT kFLOAT Input 3x736x1280
1 OUTPUT kINT32 BatchedNMS 1
2 OUTPUT kFLOAT BatchedNMS_1 200x4
3 OUTPUT kFLOAT BatchedNMS_2 200
4 OUTPUT kFLOAT BatchedNMS_3 200
0:00:17.250732832 10534 0x7ea41ca960 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/samples/models/model_test/release/model.etlt_b1_gpu0_int8.engine
0:00:17.607271104 10534 0x7194cf0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/samples/models/model_test/release/model.etlt_b1_gpu0_int8.engine sucessfully
Model Update Status: Updated model : /opt/nvidia/deepstream/deepstream-6.0/samples/models/model_test/release/model.etlt_b1_gpu0_int8.engine, OTATime = 1060.202000 ms, result: ok
0:00:17.665835872 10534 0x6eba6d0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:17.665992064 10534 0x6eba6d0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:735> [UID = 1]: Failed to parse bboxes
Segmentation fault (core dumped)
```
test5_config_file_src_infer.txt (6.3 KB)
test5_ota_override_config.txt (2.2 KB)