Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): T4
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): NA
• TensorRT Version: 7.0
• NVIDIA GPU Driver Version (valid for GPU only): 440.42
I am running the deepstream-test5 app to try out the OTA functionality. Although it detects a change in the OTA override file, it fails to update the model. For the update I used the same model engine file, stored in a different location.
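For reference, the change I made in the override file is roughly of this form. This is only a sketch: the [primary-gie] group and model-engine-file key are assumed to mirror the layout of the shipped configs/test5_ota_override_config.txt, and the path below is illustrative, not my exact location.

[primary-gie]
# assumed key name, following the standard deepstream-app config format
model-engine-file=/opt/nvidia/deepstream/deepstream-5.0/ota_models/resnet10.caffemodel_b4_gpu0_fp16.engine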
Hi,
I tried to reproduce your issue using the original model: while the test5 sample was running, I simply added one blank line to the OTA config file, configs/test5_ota_override_config.txt, and the model updated successfully.
Please try running with the original model to see whether the model OTA succeeds. The log from my run follows the repro sketch below.
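The steps were roughly as follows. Treat this as a sketch: it assumes the app is launched with the -o option pointing at the OTA override file, and test5_config.txt stands in for whatever app config you normally use.

cd /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5
# start the sample with the usual app config plus the OTA override file
./deepstream-test5-app -c configs/test5_config.txt -o configs/test5_ota_override_config.txt
# in a second terminal, while the pipeline is running, append a blank line
# so the file watcher sees the override file as modified
echo "" >> configs/test5_ota_override_config.txt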
**PERF: 13.38 (13.35) 13.38 (13.35) 13.38 (13.35) 13.38 (13.35)
WARNING from sink_sub_bin_sink1: A lot of buffers are being dropped.
Debug info: gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline/GstBin:processing_bin_0/GstBin:sink_bin/GstBin:sink_sub_bin1/GstFakeSink:sink_sub_bin_sink1:
There may be a timestamping problem, or this computer is too slow.
WARNING from sink_sub_bin_sink1: A lot of buffers are being dropped.
Debug info: gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline/GstBin:processing_bin_0/GstBin:sink_bin/GstBin:sink_sub_bin1/GstFakeSink:sink_sub_bin_sink1:
There may be a timestamping problem, or this computer is too slow.
File test5_ota_override_config.txt modified.
New Model Update Request primary_gie ----> /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5/configs/…/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_fp16.engine
Mon Sep 7 05:05:36 2020
**PERF: 13.27 (13.36) 13.27 (13.36) 13.27 (13.36) 13.27 (13.36)
WARNING from sink_sub_bin_sink1: A lot of buffers are being dropped.
Debug info: gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline/GstBin:processing_bin_0/GstBin:sink_bin/GstBin:sink_sub_bin1/GstFakeSink:sink_sub_bin_sink1:
There may be a timestamping problem, or this computer is too slow.
0:00:31.677156316 10761 0x7ee8158100 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5/configs/…/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:31.677341423 10761 0x7ee8158100 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5/configs/…/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_fp16.engine
0:00:31.938673615 10761 0x1ae978f0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5/configs/…/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_fp16.engine sucessfully
Model Update Status: Updated model : /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5/configs/…/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_fp16.engine, OTATime = 1925.762000 ms, result: ok