Running SSD on Jetson Nano

I am not able to install the BatchTilePlugin using the open-source TensorRT GitHub repository; are there any proper steps outlining how to do it? This is a prerequisite to running SSD correctly on the Jetson Nano.

What is the issue now?

Please follow steps in https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/tree/master/TRT-OSS/Jetson

After successfully converting the model to an engine file, I get the following error when I run the DeepStream command:

** INFO: <bus_callback:181>: Pipeline ready

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:167>: Pipeline running

NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
#assertion/home/nvidia/TensorRT/plugin/nmsPlugin/nmsPlugin.cpp,118
Aborted (core dumped)

A few questions:

  1. Which device did you use, a Nano?
  2. Did you use JetPack 4.4 to install DeepStream 5, CUDA 10.2, cuDNN 8 and TensorRT 7?
  3. Did you follow https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/tree/master/TRT-OSS/Jetson ? Also, please pay attention to GPU_ARCHS: for TX1/Nano it should be 53.
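For reference, the GPU_ARCHS value passed to the TensorRT OSS CMake build must match the board's SM version. A minimal sketch of that mapping (the helper function and the board-name spellings are my own, not from the linked repo):

```shell
#!/bin/sh
# Map a Jetson board name to the GPU_ARCHS (SM) value passed to the
# TensorRT OSS build, e.g. cmake .. -DGPU_ARCHS=53 on a Nano.
# Helper name and board-name spellings are illustrative only.
gpu_archs_for() {
  case "$1" in
    nano|tx1)  echo 53 ;;  # Maxwell
    tx2)       echo 62 ;;  # Pascal
    xavier|nx) echo 72 ;;  # Volta
    *)         echo "unknown" ;;
  esac
}

gpu_archs_for nano   # prints 53
```

Building the plugin with the wrong GPU_ARCHS may produce a library that links fine but fails only at runtime, so it is worth double-checking on each device.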

I used the Nano and followed exactly the steps mentioned in your three questions; I even redid the installation. While the model successfully converts into an .engine file, the same error comes up when I run SSD:

#assertion/home/nvidia/TensorRT/plugin/nmsPlugin/nmsPlugin.cpp,118
Aborted (core dumped)

Please paste the following information. Thanks.

  1. $ ll -sh /usr/lib/aarch64-linux-gnu/libnvinfer_plugin*
  2. If you used tlt-converter to generate the TRT engine, please paste the full command here.
  3. Your full deepstream-app command and its full log.
  4. Your DeepStream config file.

Another important thing: please make sure your Nano has enough disk space when generating the TRT engine.
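A quick free-space check before building the engine can save a confusing failure later. A small sketch (the 2 GB threshold is my own guess, not an official requirement):

```shell
#!/bin/sh
# Warn if the filesystem holding the engine output directory is low on
# space before DeepStream/TensorRT serializes the engine there.
# The 2 GB threshold is an illustrative guess, not an official figure.
ENGINE_DIR="${1:-.}"
need_kb=$((2 * 1024 * 1024))                       # 2 GB in KiB
avail_kb=$(df -Pk "$ENGINE_DIR" | awk 'NR==2 {print $4}')

if [ "$avail_kb" -lt "$need_kb" ]; then
  echo "WARNING: only ${avail_kb} KiB free under ${ENGINE_DIR}"
else
  echo "OK: ${avail_kb} KiB free under ${ENGINE_DIR}"
fi
```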

Below is the requested information:

1:

0 lrwxrwxrwx 1 root root 26 Apr 15 23:34 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.1.0*
4.5M -rw-r--r-- 1 ishan ishan 4.5M Jun 17 12:31 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7
0 lrwxrwxrwx 1 root root 26 Jun 18 00:03 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.0.0 -> libnvinfer_plugin.so.7.1.0*
3.7M -rwxr-xr-x 1 root root 3.7M Jun 18 00:03 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0*
7.7M -rw-r--r-- 1 root root 7.7M Apr 15 23:34 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin_static.a

2:
I let DeepStream convert the model into an engine automatically.

3:

Command:

deepstream-app -c deepstream_app_config_detect.txt

Log:

Opening in BLOCKING MODE

Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
0:00:00.305466574 4182 0x3cfd1a10 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
0:01:06.018969732 4182 0x3cfd1a10 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1624> [UID = 1]: serialize cuda engine to file: /home/ishan/Documents/ssd/ssd_mobile_final.etlt_b1_gpu0_fp32.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT Input 3x320x576
1 OUTPUT kFLOAT NMS 1x200x7
2 OUTPUT kFLOAT NMS_1 1x1x1

0:01:07.031271753 4182 0x3cfd1a10 INFO nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/ishan/Documents/ssd/pgie_ssd_tlt_config.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:181>: Pipeline ready

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:167>: Pipeline running

NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
#assertion/home/nvidia/TensorRT/plugin/nmsPlugin/nmsPlugin.cpp,118
Aborted (core dumped)

Attachments: deepstream_app_config_detect.txt (3.0 KB), pgie_ssd_tlt_config.txt (2.5 KB)

So I am afraid something went wrong when you replaced libnvinfer_plugin.
Refer to my post "Failling in building sample from TLT-DEEPSTREAM".
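In the listing you pasted, libnvinfer_plugin.so.7 is a 4.5M regular file (dated Jun 17) rather than a symlink to the freshly rebuilt 3.7M libnvinfer_plugin.so.7.1.0 (dated Jun 18). Since the dynamic loader resolves the SONAME libnvinfer_plugin.so.7, a stale copy under that name would keep being loaded even after a successful rebuild. A sketch of repointing the links, demonstrated in a scratch directory (on the device the same ln steps would target /usr/lib/aarch64-linux-gnu with sudo; this diagnosis is my reading of the listing, not a confirmed root cause):

```shell
#!/bin/sh
# The dynamic loader resolves the SONAME libnvinfer_plugin.so.7, so if
# that name is a stale regular file instead of a symlink to the rebuilt
# libnvinfer_plugin.so.7.1.0, the old plugin code keeps loading.
# Demonstrated in a scratch directory with empty stand-in files.
dir=$(mktemp -d)
cd "$dir"

touch libnvinfer_plugin.so.7.1.0   # stands in for the rebuilt library
touch libnvinfer_plugin.so.7       # stale regular file, as in the listing

# Replace the stale file with a symlink chain: .so -> .so.7 -> .so.7.1.0
ln -sf libnvinfer_plugin.so.7.1.0 libnvinfer_plugin.so.7
ln -sf libnvinfer_plugin.so libnvinfer_plugin.so 2>/dev/null || true
ln -sf libnvinfer_plugin.so.7 libnvinfer_plugin.so

readlink libnvinfer_plugin.so.7    # -> libnvinfer_plugin.so.7.1.0
# on the real device, follow up with: sudo ldconfig
```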

Thanks; got it working.