Hi!
I’m trying to deploy my model at a customer location, and DeepStream prints a lot of debug information like:
INFO: [FullDims Engine Info]: layers num: 3
0 INPUT kFLOAT input 3x416x416 min: 1x3x608x608 opt: 1x3x608x608 Max: 1x3x608x608
1 OUTPUT kFLOAT boxes 10647x1x4 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT confs 10647x34 min: 0 opt: 0 Max: 0
0:00:11.666248029 15532 0x5584445870 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 2]: Use deserialized engine model: model.engine
0:00:11.866642654 15532 0x5584445870 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary-nvinference-engine> [UID 2]: Load new model:config.cfg sucessfully
gstnvtracker: Loading low-level lib at ../libs/libnvds_nvdcf.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvDCF] Initialized
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:16.259863294 15532 0x5584445870 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :model.engine
INFO: [FullDims Engine Info]: layers num: 3
0 INPUT kFLOAT input 3x608x608 min: 1x3x608x608 opt: 1x3x608x608 Max: 1x3x608x608
1 OUTPUT kFLOAT boxes 22743x1x4 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT confs 22743x1 min: 0 opt: 0 Max: 0
These messages expose confidential details about the model (layer names, shapes, engine file names, and so on), but I couldn't find a way to STOP them from being printed to the screen. Is there a way to do that? Thanks!
• Hardware Platform (Jetson / GPU): Jetson NX
• DeepStream Version: 5.0.1
• JetPack Version (valid for Jetson only): 4.4.1
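For reference, the only stopgap I have come up with so far is plain shell redirection. This sketch assumes the pipeline is launched with `deepstream-app`; the config path and log file location are placeholders, not my real paths:

```shell
# Stopgap sketch (assumes a deepstream-app launch; paths are placeholders).
# Send both stdout and stderr to a private log file instead of the screen:
deepstream-app -c /opt/myapp/deepstream_config.txt > /var/log/myapp/ds.log 2>&1

# ...or discard the output entirely:
deepstream-app -c /opt/myapp/deepstream_config.txt > /dev/null 2>&1
```

The obvious downside is that this also hides genuine errors from the operator, so a proper log-level switch would still be much better.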