Unable to Run TLT resnet18 weights on deepstream4.0.1 test app 2

Hi,
I want to test my resnet18_detector.trt file with the deepstream-test2 application, but it fails to run with the following error:

ubuntu@ubuntu-B365M-D3H:~/Downloads/DeepStreamSDK-Tesla-v3.0/DeepStream_Release/sources/apps/sample_apps/deepstream-test2$ ./deepstream-test2-app ../../../../samples/streams/sample_720p.h264 
Plugin Creator registration succeeded - GridAnchor_TRT
Plugin Creator registration succeeded - NMS_TRT
Plugin Creator registration succeeded - Reorg_TRT
Plugin Creator registration succeeded - Region_TRT
Plugin Creator registration succeeded - Clip_TRT
Plugin Creator registration succeeded - LReLU_TRT
Plugin Creator registration succeeded - PriorBox_TRT
Plugin Creator registration succeeded - Normalize_TRT
Plugin Creator registration succeeded - RPROI_TRT
Plugin Creator registration succeeded - BatchedNMS_TRT
Now playing: ../../../../samples/streams/sample_720p.h264
>>> Generating new TRT model engine
Using INT8 data type.
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
writeCalibrationCache is called!

 ***** Storing serialized engine file as /home/ubuntu/Downloads/DeepStreamSDK-Tesla-v3.0/DeepStream_Release/sources/apps/sample_apps/deepstream-test2/../../../../samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_int8.engine batchsize = 16 *****

>>> Generating new TRT model engine
Using INT8 data type.
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
writeCalibrationCache is called!

 ***** Storing serialized engine file as /home/ubuntu/Downloads/DeepStreamSDK-Tesla-v3.0/DeepStream_Release/sources/apps/sample_apps/deepstream-test2/../../../../samples/models/Secondary_CarMake/resnet18.caffemodel_b16_int8.engine batchsize = 16 *****

>>> Generating new TRT model engine
Using INT8 data type.
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
writeCalibrationCache is called!

 ***** Storing serialized engine file as /home/ubuntu/Downloads/DeepStreamSDK-Tesla-v3.0/DeepStream_Release/sources/apps/sample_apps/deepstream-test2/../../../../samples/models/Secondary_CarColor/resnet18.caffemodel_b16_int8.engine batchsize = 16 *****

>>> Generating new TRT model engine
Using INT8 data type.
Error: Model files not provided
>>> Error while building network
Running...
ERROR from element primary-nvinference-engine: Failed to initialize infer context
Error details: gstnvinfer.c(2141): gst_nv_infer_start (): /GstPipeline:dstest2-pipeline/GstNvInfer:primary-nvinference-engine
Returned, stopping playback
Deleting pipeline

My configuration file is:

[property]
gpu-id=0
#net-scale-factor=0.0039215697906911373
#model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
#proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
#labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
##int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt4.bin

model-file=../../../../samples/models/HeadDetection/experiment_dir_final_train_4feb/resnet18_detector.trt
labelfile-path=../../../../samples/models/HeadDetection/experiment_dir_final_train_4feb/labels.txt
int8-calib-file=../../../../samples/models/HeadDetection/experiment_dir_final_train_4feb/calibration.bin

batch-size=12
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
parse-func=4
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid

[class-attrs-all]
threshold=0.2
eps=0.2
group-threshold=1

However, I am able to run these weights with deepstream-app; the problem occurs only with the deepstream test app. Please help me out.

Which Jetson platform did you run?

Hi morganh,
I am running it on a dGPU (RTX 2080 Ti).

Sorry to disturb you again, morganh.
The issue is resolved: I had not changed model-file to model-engine-file when using the .trt engine.
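For anyone who hits the same "Error: Model files not provided": when pointing nvinfer at a pre-built TensorRT engine, the engine path goes in model-engine-file, while model-file expects a Caffe model. A sketch of the corrected [property] lines, reusing the paths from my config above:

[property]
gpu-id=0
# model-engine-file (not model-file) for a serialized TRT engine
model-engine-file=../../../../samples/models/HeadDetection/experiment_dir_final_train_4feb/resnet18_detector.trt
labelfile-path=../../../../samples/models/HeadDetection/experiment_dir_final_train_4feb/labels.txt
int8-calib-file=../../../../samples/models/HeadDetection/experiment_dir_final_train_4feb/calibration.bin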

Thanks.

Thanks for the info. I’m closing this topic.