Use resnet18_detector.trt in custom Python code

Hi,
Is there any custom Python code, other than the DeepStream SDK, with which I can run live object detection on resnet18_detector.trt? When I use these weights in the DeepStream SDK I get errors, which I have described here: https://devtalk.nvidia.com/default/topic/1070315/deepstream-sdk/error-while-running-resnet18_detector-trt-weights-with-deepstream-sdk/
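For reference, this is roughly what I mean by custom Python code: a minimal sketch using the TensorRT Python API and PyCUDA. The engine path, the placeholder input frame, and the preprocessing are only illustrative, not my actual pipeline:

import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

ENGINE_PATH = "resnet18_detector.trt"  # placeholder path
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine built by tlt-converter / the TLT notebook.
with open(ENGINE_PATH, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()
stream = cuda.Stream()

# Allocate pinned host buffers and device buffers for every binding.
host_bufs, dev_bufs, bindings, input_idx = [], [], [], []
for i, binding in enumerate(engine):
    size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host_bufs.append(cuda.pagelocked_empty(size, dtype))
    dev_bufs.append(cuda.mem_alloc(host_bufs[-1].nbytes))
    bindings.append(int(dev_bufs[-1]))
    if engine.binding_is_input(binding):
        input_idx.append(i)

# Placeholder frame: a real pipeline would grab a frame, resize it to the network
# input size, convert it to planar CHW and apply the net-scale-factor.
i_in = input_idx[0]
host_bufs[i_in][:] = np.random.rand(host_bufs[i_in].size).astype(host_bufs[i_in].dtype)

cuda.memcpy_htod_async(dev_bufs[i_in], host_bufs[i_in], stream)
context.execute_async(batch_size=1, bindings=bindings, stream_handle=stream.handle)
for i, (h, d) in enumerate(zip(host_bufs, dev_bufs)):
    if i not in input_idx:
        cuda.memcpy_dtoh_async(h, d, stream)
stream.synchronize()
# The output buffers now hold the raw coverage/bbox tensors (e.g. conv2d_cov/Sigmoid,
# conv2d_bbox); DetectNet_v2-style clustering is still needed to turn them into boxes.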

Hi pritam,
The resnet18_detector.trt engine should work in the DeepStream SDK.
What is your TensorRT version?
$ dpkg -l | grep nvinfer

hello morganh,
I am using TensorRT version 6.0.1.

The error I am getting with DS is:

ubuntu@ubuntu-B365M-D3H:~/Downloads/deepstream_sdk_v4.0.2_x86_64/sources/apps/sample_apps/deepstream-test1$ ./deepstream-test1-app ../../../../samples/streams/test.h264 
Now playing: ../../../../samples/streams/test.h264
Creating LL OSD context new
0:00:00.282033950 16065 0x562f68436060 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:00.282194435 16065 0x562f68436060 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): No model files specified
0:00:00.282204197 16065 0x562f68436060 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Failed to create engine from model files
0:00:00.282217761 16065 0x562f68436060 WARN                 nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary-nvinference-engine> error: Failed to create NvDsInferContext instance
0:00:00.282221049 16065 0x562f68436060 WARN                 nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary-nvinference-engine> error: Config file path: dstest1_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running...
ERROR from element primary-nvinference-engine: Failed to create NvDsInferContext instance
Error details: gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:dstest1-pipeline/GstNvInfer:primary-nvinference-engine:
Config file path: dstest1_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback
Deleting pipeline

The config file is:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=../../../../samples/models/HeadDetection/resnet18_detector.trt
#proto-file=../../../../samples/models/HeadDetection/resnet10.prototxt
labelfile-path=../../../../samples/models/HeadDetection/labels.txt
int8-calib-file=../../../../samples/models/HeadDetection/calibration.bin
batch-size=1
network-mode=1
num-detected-classes=2
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid

[class-attrs-all]
threshold=0.2
eps=0.2
group-threshold=1

Please install TensorRT 5.1 GA on your Jetson device and try again.
See TLT user guide chapter 11; we make sure the TRT 5.1 versions work.

Hello Morganh, thanks for the response.
Actually, I am using an NVIDIA RTX 2080 Ti, so will this version also work on my system?
And one more question: if I train my model on the 2080 Ti and also want to run it on a Jetson Nano, will that work?

Actually, I had installed TensorRT 6.0.1 because for DS 4.0.2 it is mentioned that the TensorRT version should be >= 6.0.1 (the commands for checking each of these versions are listed after the requirements):

• Ubuntu 18.04
• GStreamer 1.14.1
• NVIDIA driver 418+
• CUDA 10.1
• TensorRT 6.0.1 or later
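
For reference, the commands to check each of these on the host are:

$ lsb_release -a                 # Ubuntu release
$ gst-inspect-1.0 --version      # GStreamer version
$ nvidia-smi                     # NVIDIA driver version
$ nvcc --version                 # CUDA version
$ dpkg -l | grep nvinfer         # TensorRT version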

Hi pritam,
You can run the docker with a 2080 Ti and run training inside the docker. For the host PC requirements, please see chapter 2 of the TLT user guide.
On the host PC, you run the docker, train on your own data, get the .tlt model, and export the .etlt model.
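Roughly, the flow on the host PC looks like this; the container tag and the exact tlt-export flags depend on your TLT release, so treat this only as a sketch and check the DetectNet_v2 notebook or tlt-export --help:

$ docker run --runtime=nvidia -it nvcr.io/nvidia/tlt-streamanalytics:<tag> /bin/bash
# inside the container: train / prune / retrain with the detectnet_v2 notebook, then export
$ tlt-export <path/to/resnet18_detector.tlt> -k $KEY -o resnet18_detector.etlt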

What I mentioned in the previous comment applies only to the Jetson device, not to the host PC. There you should install TensorRT 5.1 GA instead of TensorRT 6.
That also means a lower version of DS is needed.

One more question: are you running DS on the host PC or on the Jetson device?

I am running DS on both the Jetson Nano and the host PC.

Hello Morganh,
Yes, I have followed the steps up to INT8 optimization in the TLT DetectNet_v2 Jupyter notebook and I am able to generate a TensorRT engine for the host PC, but I am unable to run it on DS 4, so I will try DS 3 with TensorRT 5.1. On the Jetson Nano, however, I am not able to generate the .trt file and get an error like:

[ERROR] UffParser: Could not open /tmp/filesY6zOT
[ERROR] Failed to parse uff model
[ERROR] Network must have at least one output
[ERROR] Unable to create engine
Segmentation fault (core dumped)

I have also followed the suggestion you mentioned at https://devtalk.nvidia.com/default/topic/1067539/transfer-learning-toolkit/tlt-converter-on-jetson-nano-error-/
I am providing the right API key but getting the same issue.
Please help me figure out where I am going wrong.

Hi pritam,
If you want to run the TRT engine on the Jetson Nano, please copy the .etlt model from your host PC to the Nano, download the Jetson-platform version of tlt-converter to the Nano, and run tlt-converter there.
Then see whether the TRT engine can be built successfully.
If the error still exists, please double-check the $KEY, etc.
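For reference, the tlt-converter invocation on the Nano looks roughly like the following; the input dimensions are placeholders, and the output blob names and key must match what was used when the model was exported (check tlt-converter -h for your release):

$ ./tlt-converter resnet18_detector.etlt \
      -k $KEY \
      -o conv2d_cov/Sigmoid,conv2d_bbox \
      -d 3,384,1248 \
      -t fp16 \
      -e resnet18_detector.trt
# for INT8, use -t int8 together with -c calibration.bin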

Or you can use the .etlt model directly in DS.
See https://devtalk.nvidia.com/default/topic/1070464/transfer-learning-toolkit/use-of-deepstream-4-0-2-tlt-encoded-model-to-avoid-using-tlt-converter/
and https://devtalk.nvidia.com/default/topic/1065558/transfer-learning-toolkit/trt-engine-deployment/
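With that approach, the nvinfer config points at the encoded model and its key instead of a pre-built engine, roughly like this (paths here are placeholders); DS then builds the TRT engine itself on the first run:

[property]
tlt-encoded-model=../../../../samples/models/HeadDetection/resnet18_detector.etlt
tlt-model-key=<your encoding key>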

Thanks, Morganh, for the response. I will try this.