Reid model error in TRT


I have tried to run a Reid .caffemodel through trtexec to save an engine, but I got the error below. Kindly help.


TensorRT Version: 7.0
GPU Type: T4
Nvidia Driver Version: 440
CUDA Version: 10.2
CUDNN Version:
Operating System + Version: 18.04

root@GokuDDG-B85M-D3H:/usr/src/tensorrt/bin# sudo ./trtexec --deploy=deploy.prototxt --model=reid.caffemodel --output=View_1 --batch=16 --saveEngine=reid.trt
&&&& RUNNING TensorRT.trtexec # ./trtexec --deploy=deploy.prototxt --model=reid.caffemodel --output=View_1 --batch=16 --saveEngine=reid.trt
[07/03/2020-10:32:38] [I] === Model Options ===
[07/03/2020-10:32:38] [I] Format: Caffe
[07/03/2020-10:32:38] [I] Model: reid.caffemodel
[07/03/2020-10:32:38] [I] Prototxt: deploy.prototxt
[07/03/2020-10:32:38] [I] Output: View_1
[07/03/2020-10:32:38] [I] === Build Options ===
[07/03/2020-10:32:38] [I] Max batch: 16
[07/03/2020-10:32:38] [I] Workspace: 16 MB
[07/03/2020-10:32:38] [I] minTiming: 1
[07/03/2020-10:32:38] [I] avgTiming: 8
[07/03/2020-10:32:38] [I] Precision: FP32
[07/03/2020-10:32:38] [I] Calibration:
[07/03/2020-10:32:38] [I] Safe mode: Disabled
[07/03/2020-10:32:38] [I] Save engine: reid.trt
[07/03/2020-10:32:38] [I] Load engine:
[07/03/2020-10:32:38] [I] Inputs format: fp32:CHW
[07/03/2020-10:32:38] [I] Outputs format: fp32:CHW
[07/03/2020-10:32:38] [I] Input build shapes: model
[07/03/2020-10:32:38] [I] === System Options ===
[07/03/2020-10:32:38] [I] Device: 0
[07/03/2020-10:32:38] [I] DLACore:
[07/03/2020-10:32:38] [I] Plugins:
[07/03/2020-10:32:38] [I] === Inference Options ===
[07/03/2020-10:32:38] [I] Batch: 16
[07/03/2020-10:32:38] [I] Iterations: 10
[07/03/2020-10:32:38] [I] Duration: 3s (+ 200ms warm up)
[07/03/2020-10:32:38] [I] Sleep time: 0ms
[07/03/2020-10:32:38] [I] Streams: 1
[07/03/2020-10:32:38] [I] ExposeDMA: Disabled
[07/03/2020-10:32:38] [I] Spin-wait: Disabled
[07/03/2020-10:32:38] [I] Multithreading: Disabled
[07/03/2020-10:32:38] [I] CUDA Graph: Disabled
[07/03/2020-10:32:38] [I] Skip inference: Disabled
[07/03/2020-10:32:38] [I] Input inference shapes: model
[07/03/2020-10:32:38] [I] Inputs:
[07/03/2020-10:32:38] [I] === Reporting Options ===
[07/03/2020-10:32:38] [I] Verbose: Disabled
[07/03/2020-10:32:38] [I] Averages: 10 inferences
[07/03/2020-10:32:38] [I] Percentile: 99
[07/03/2020-10:32:38] [I] Dump output: Disabled
[07/03/2020-10:32:38] [I] Profile: Disabled
[07/03/2020-10:32:38] [I] Export timing to JSON file:
[07/03/2020-10:32:38] [I] Export output to JSON file:
[07/03/2020-10:32:38] [I] Export profile to JSON file:
[07/03/2020-10:32:38] [I]
Weights for layer ConvNd_1 doesn’t exist
[07/03/2020-10:32:39] [E] [TRT] CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer ConvNd_1 doesn’t exist
[07/03/2020-10:32:39] [E] [TRT] CaffeParser: ERROR: Attempting to access NULL weights
[07/03/2020-10:32:39] [E] [TRT] ConvNd_1: second input must be provided if kernel weights are empty.
[07/03/2020-10:32:39] [E] [TRT] ConvNd_1: kernel weights has count 0 but 9408 was expected
[07/03/2020-10:32:39] [E] [TRT] ConvNd_1: count of 0 weights in kernel, but kernel dimensions (7,7) with 3 input channels, 64 output channels and 1 groups were specified. Expected Weights count is 3 * 7*7 * 64 / 1 = 9408
trtexec: ./parserHelper.h:99: nvinfer1::DimsCHW parserhelper::getCHW(const nvinfer1::Dims&): Assertion `d.nbDims >= 3’ failed.
root@GokuDDG-B85M-D3H:/usr/src/tensorrt/bin# exit

Script done on 2020-07-03 10:32:44+0530
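For reference, the expected weight count in the parser error above follows directly from the convolution dimensions it prints. A minimal check of that arithmetic:

```python
# Expected kernel weight count for a convolution layer:
# in_channels * kernel_h * kernel_w * out_channels / groups
in_channels, kernel_h, kernel_w = 3, 7, 7
out_channels, groups = 64, 1

expected = in_channels * kernel_h * kernel_w * out_channels // groups
print(expected)  # 9408, matching the count TensorRT expected for ConvNd_1
```

Since the parser found 0 weights instead, the caffemodel either does not contain blobs for ConvNd_1 or its layer names do not match the prototxt.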

Hi @GalibaSashi,
From TRT 7, the Caffe and UFF parsers are deprecated.
Please try the Caffe -> ONNX -> TRT workflow.
If the issue persists, kindly share your model and script.
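A rough sketch of that workflow. The Caffe-to-ONNX step uses a community converter here (the tool name and flags are assumptions; converters such as caffe2onnx or MMdnn exist, but check the exact invocation for the one you install). The trtexec flags in step 2 are standard for TRT 7:

```shell
# 1. Convert the Caffe model to ONNX.
#    NOTE: the converter and its flags below are assumptions; verify against
#    the documentation of whichever Caffe->ONNX tool you choose.
pip install caffe2onnx
python -m caffe2onnx.convert --prototxt deploy.prototxt \
    --caffemodel reid.caffemodel --onnx reid.onnx

# 2. Build and save a TensorRT engine from the ONNX model.
#    ONNX models require explicit-batch mode in TRT 7.
./trtexec --onnx=reid.onnx --explicitBatch --saveEngine=reid.trt
```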

Hi @AakankshaS
If I want to deploy on Deepstream 5.0, will the same method work, or is a different one used?
Kindly comment on the same. Thanks in advance.

Hi @GalibaSashi,
Please check the below link for reference.

In case of further query related to Deepstream, kindly use Deepstream forum.

Hi @AakankshaS,
That link is TLT-based, right? That is not our case. When I provide the caffemodel and prototxt, if parsing fails in TRT, it will definitely fail in Deepstream as well, right?
Thanks in advance

Hi @GalibaSashi,
Request you to share your model file, so that we can help you better.

Hi @AakankshaS,
Is there any difference between .trt and .engine files? Can we give either in the config file?

Hi @GalibaSashi,

Apologies for the late response.
You can use either naming convention; the content of the file will be the optimized TRT engine in both cases.
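For example, in a DeepStream nvinfer configuration the serialized engine is referenced via the `model-engine-file` property, and either extension works. A sketch (the path and batch size below are hypothetical):

```ini
# DeepStream Gst-nvinfer config fragment (hypothetical path);
# the extension (.trt or .engine) does not matter to nvinfer
[property]
model-engine-file=/opt/models/reid.engine
batch-size=16
network-mode=0
# network-mode: 0 = FP32, 1 = INT8, 2 = FP16
```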