• Hardware Platform: Jetson
• DeepStream Version: 6.4
In the tracker configuration file, I can't find the file referenced by modelEngineFile: "/opt/nvidia/deepstream/deepstream/samples/models/Tracker/resnet50_market1501.etlt_b100_gpu0_fp16.engine".
May I ask how to obtain this file, and where can I find relevant documentation for learning? Thank you!
ReID:
reidType: 1 # The type of reid among { DUMMY=0, DEEP=1 }
# [Reid Network Info]
batchSize: 100 # Batch size of reid network
workspaceSize: 1000 # Workspace size to be used by reid engine, in MB
reidFeatureSize: 256 # Size of reid feature
reidHistorySize: 100 # Max number of reid features kept for one object
inferDims: [3, 256, 128] # Reid network input dimension CHW or HWC based on inputOrder
networkMode: 1 # Reid network inference precision mode among {fp32=0, fp16=1, int8=2 }
# [Input Preprocessing]
inputOrder: 0 # Reid network input order among { NCHW=0, NHWC=1 }. Batch will be converted to the specified order before reid input.
colorFormat: 0 # Reid network input color format among {RGB=0, BGR=1 }. Batch will be converted to the specified color before reid input.
offsets: [123.6750, 116.2800, 103.5300] # Array of values to be subtracted from each input channel, with length equal to number of channels
netScaleFactor: 0.01735207 # Scaling factor for reid network input after subtracting offsets
keepAspc: 1 # Whether to keep aspect ratio when resizing input objects for reid
# [Output Postprocessing]
addFeatureNormalization: 1 # If reid feature is not normalized in network, adding normalization on output so each reid feature has l2 norm equal to 1
# [Paths and Names]
tltEncodedModel: "/opt/nvidia/deepstream/deepstream/samples/models/Tracker/resnet50_market1501.etlt" # NVIDIA TAO model path
tltModelKey: "nvidia_tao" # NVIDIA TAO model key
modelEngineFile: "/opt/nvidia/deepstream/deepstream/samples/models/Tracker/resnet50_market1501.etlt_b100_gpu0_fp16.engine" # Engine file path
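For context on where that engine file comes from: DeepStream serializes it at runtime next to the model file, and its name encodes the batch size, GPU index, and inference precision. The helper below is hypothetical; the naming pattern is inferred from the sample filenames in this thread, not from an official specification.

```shell
#!/bin/sh
# Hypothetical helper illustrating the serialized-engine naming convention
# observed in the DeepStream samples:
#   <model>_b<batchSize>_gpu<gpuId>_<precision>.engine
engine_name() {
    model=$1; batch=$2; gpu=$3; precision=$4
    printf '%s_b%s_gpu%s_%s.engine\n' "$model" "$batch" "$gpu" "$precision"
}

# batchSize: 100 and networkMode: 1 (fp16) from the config above give:
engine_name resnet50_market1501.etlt 100 0 fp16
```

This is why the docs only provide a download for the .etlt model: the .engine file is hardware-specific and is built on the target device on first run.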
The SDK 6.4 documentation only describes how to download the resnet50_market1501.etlt file; there is no download description for the resnet50_market1501.etlt_b100_gpu0_fp16.engine file.
Not sure about TLT models, but if you provide an ONNX or UFF file, DeepStream searches for the engine file first; if it does not exist, it uses the ONNX or UFF file and generates an engine file. On the next run it searches for the engine file again and finds the one generated during the first run.
Assuming TLT models work the same way, you can give it a try.
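The load-or-build behavior described above amounts to the following sketch (this is not DeepStream's actual code, just an illustration of the caching flow):

```shell
#!/bin/sh
# Sketch of the engine caching flow: try the serialized engine first,
# otherwise build from the model file and save the result for next run.
load_or_build() {
    engine=$1
    if [ -f "$engine" ]; then
        echo "deserialized cached engine: $engine"
    else
        echo "engine not found, building (this is the slow step)"
        touch "$engine"   # stand-in for the TensorRT build + serialize step
    fi
}

tmp=$(mktemp -d)
load_or_build "$tmp/model.engine"   # first run: builds and saves
load_or_build "$tmp/model.engine"   # second run: loads the cached engine
rm -r "$tmp"
```

If the second run is still slow, the serialized engine was never saved, which points at the problem described next.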
When I don't have the .engine file, the log shows that it will be generated automatically, but there are three warnings and the program takes a long time to start. When I close the program and run it again, the .engine file still cannot be found, so it again takes a long time to generate before it can run. Even after the program is running, I cannot find the file "/opt/nvidia/deepstream/deepstream/samples/models/Tracker/resnet50_market1501.etlt_b100_gpu0_fp16.engine".
May I ask what the problem is? The corresponding program logs are below; after the warnings in the last three lines appeared, it was stuck for a long time (about 10 minutes).
start
source location [rtsp://admin:admin@192.168.31.139:8554/live]
Opening in BLOCKING MODE
0:00:06.531981502 51441 0x4f40010 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 3]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.4/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt_b16_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 6x1x1
0:00:06.942829591 51441 0x4f40010 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 3]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.4/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt_b16_gpu0_int8.engine
0:00:06.958515594 51441 0x4f40010 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary2-nvinference-engine> [UID 3]: Load new model:/home/mk/goproj/deepstream/dstest2_sgie2_config.yml sucessfully
0:00:13.087983963 51441 0x4f40010 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.4/samples/models/Secondary_VehicleMake/resnet18_vehiclemakenet.etlt_b16_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 20x1x1
0:00:13.518300288 51441 0x4f40010 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.4/samples/models/Secondary_VehicleMake/resnet18_vehiclemakenet.etlt_b16_gpu0_int8.engine
0:00:13.545992963 51441 0x4f40010 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary1-nvinference-engine> [UID 2]: Load new model:/home/mk/goproj/deepstream/dstest2_sgie1_config.yml sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
~~ CLOG[/dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvmultiobjecttracker/src/modules/ReID/ReID.cpp, loadTRTEngine() @line 598]: Engine file does not exist
[NvMultiObjectTracker] Load engine failed. Create engine again.
WARNING: [TRT]: onnx2trt_utils.cpp:372: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: INT8 calibration file not specified. Trying FP16 mode.
WARNING: [TRT]: DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU
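One common reason the serialized engine never appears after a run like the one logged above (an assumption here, the logs do not confirm it) is that the process lacks write permission on the Tracker models directory, so the freshly built engine cannot be saved and is rebuilt every time. A quick check, with `check_writable` as a hypothetical helper:

```shell
#!/bin/sh
# Check whether the current user can create files in the directory where
# DeepStream would serialize the engine.
check_writable() {
    dir=$1
    if [ -d "$dir" ] && [ -w "$dir" ]; then
        echo "writable: $dir"
    else
        echo "NOT writable: $dir"
    fi
}

# Path taken from the tracker config in this thread:
check_writable /opt/nvidia/deepstream/deepstream/samples/models/Tracker
```

If the directory is not writable, running the app with sudo once (or chown-ing the directory) should let the engine be cached, after which subsequent runs start quickly.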
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.