root@46bb0e6d53e7:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5# ./deepstream-test5-app -c configs/test5_config_file_src_infer.txt -o configs/test5_ota_override_config.txt
REAL PATH = /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5/configs/test5_ota_override_config.txt
REAL PATH = /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5/configs/test5_ota_override_config.txt

(deepstream-test5-app:887): GLib-CRITICAL **: 03:01:48.391: g_strchug: assertion 'string != NULL' failed

(deepstream-test5-app:887): GLib-CRITICAL **: 03:01:48.391: g_strchomp: assertion 'string != NULL' failed
length=64
watch decriptor = 1; mask = IN_OPEN
name = dstest5_msgconv_sample_config.txt  mask= 20
length=64
watch decriptor = 1; mask = IN_ACCESS
name = dstest5_msgconv_sample_config.txt  mask= 1
length=64
watch decriptor = 1; mask = IN_CLOSE_NOWRITE IN_CLOSE
name = dstest5_msgconv_sample_config.txt  mask= 10
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
WARNING: [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
0:00:02.896628828 887 0x555beed7c660 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5/configs/../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1            3x368x640
1   OUTPUT kFLOAT conv2d_bbox        16x23x40
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:02.896760555 887 0x555beed7c660 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() [UID = 1]: Backend has maxBatchSize 1 whereas 4 has been requested
0:00:02.896788903 887 0x555beed7c660 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() [UID = 1]: deserialized backend context :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5/configs/../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed to match config params, trying rebuild
0:00:02.901382016 887 0x555beed7c660 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() [UID = 1]: Trying to create engine from model files
INFO: [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:13.050001050 887 0x555beed7c660 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine successfully
WARNING: [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1            3x368x640
1   OUTPUT kFLOAT conv2d_bbox        16x23x40
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:13.058219598 887 0x555beed7c660 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5/configs/../../../../../samples/configs/deepstream-app/config_infer_primary.txt sucessfully

Model Update Status: Updated model : /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5/configs/../../../../../samples/configs/deepstream-app/config_infer_primary.txt, OTATime = 1605754921412.919922 ms, result: ok

Runtime commands:
    h: Print this help
    q: Quit
    p: Pause
    r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.

**PERF: FPS 0 (Avg)    FPS 1 (Avg)
Thu Nov 19 03:02:01 2020
**PERF: 0.00 (0.00)    0.00 (0.00)
** INFO: : Pipeline ready

** INFO: : Pipeline running

KLT Tracker Init
KLT Tracker Init
WARNING; playback mode used with URI [file:///opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5/configs/../../../../../samples/streams/sample_1080p_h264.mp4] not conforming to timestamp format; check README; using system-time
WARNING; playback mode used with URI [file:///opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5/configs/../../../../../samples/streams/sample_1080p_h264.mp4] not conforming to timestamp format; check README; using system-time
Thu Nov 19 03:02:06 2020
**PERF: 45.60 (45.42)    45.60 (45.42)
Thu Nov 19 03:02:11 2020
**PERF: 45.86 (45.51)    45.86 (45.51)
Thu Nov 19 03:02:16 2020
**PERF: 45.21 (45.47)    45.21 (45.47)
Thu Nov 19 03:02:21 2020
**PERF: 45.42 (45.45)    45.42 (45.45)
Thu Nov 19 03:02:26 2020
**PERF: 45.25 (45.40)    45.25 (45.40)
Thu Nov 19 03:02:31 2020
**PERF: 45.61 (45.46)    45.61 (45.46)
** INFO: : Received EOS. Exiting ...

Quitting
length=16
App run successful
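The "length=64 / watch decriptor = 1; mask = IN_OPEN ..." lines near the top of the log are inotify event dumps: the app watches the config location passed with -o so that edits to the OTA override file can trigger a model update while the pipeline runs. A minimal, illustrative C sketch of that file-watch pattern follows; it is not the deepstream-test5 source, and the watched path and event mask are assumptions chosen to mimic the output above.

/* Illustrative sketch only: inotify-based file watching of the kind
 * suggested by the event dumps in the log. Path and mask are assumptions. */
#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

int main(void)
{
    int fd = inotify_init();                     /* create an inotify instance */
    if (fd < 0) { perror("inotify_init"); return 1; }

    /* Watch the configs directory; events then carry the name of the file touched. */
    int wd = inotify_add_watch(fd, "configs",
                               IN_OPEN | IN_ACCESS | IN_CLOSE | IN_MODIFY);
    if (wd < 0) { perror("inotify_add_watch"); close(fd); return 1; }

    char buf[4096];
    for (;;) {
        ssize_t len = read(fd, buf, sizeof(buf));  /* blocks until events arrive */
        if (len <= 0) break;
        printf("length=%zd\n", len);
        for (char *p = buf; p < buf + len; ) {
            struct inotify_event *ev = (struct inotify_event *) p;
            printf("watch descriptor = %d; mask = 0x%x; name = %s\n",
                   ev->wd, ev->mask, ev->len ? ev->name : "(watched path itself)");
            p += sizeof(struct inotify_event) + ev->len;
        }
    }
    close(fd);
    return 0;
}

In the run above, the update itself is driven by the override config given with -o: once the watcher notices the change, the app reloads the inference config listed there, which is what produces the "Model Update Status: ... result: ok" line.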
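Each **PERF row reports, per source, the FPS for the most recent interval and, in parentheses, a running average. A small, self-contained sketch of how such a current/average pair can be derived from per-interval frame counts; the counts and the 5-second interval below are made-up numbers, not values taken from the app.

/* Illustrative sketch only: current vs. running-average FPS, in the spirit
 * of the **PERF rows above. Frame counts and interval are assumptions. */
#include <stdio.h>

int main(void)
{
    const double interval_s = 5.0;   /* PERF rows above appear roughly every 5 s */
    const int frames[] = { 228, 229, 226, 227, 226, 228 };  /* hypothetical counts */
    const int n = sizeof(frames) / sizeof(frames[0]);

    long total = 0;
    for (int i = 0; i < n; i++) {
        total += frames[i];
        double fps_now = frames[i] / interval_s;             /* current interval */
        double fps_avg = total / (interval_s * (i + 1));     /* running average  */
        printf("**PERF: %6.2f (%6.2f)\n", fps_now, fps_avg);
    }
    return 0;
}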