This is my first time posting a question, so please understand that I am not good at English.
I am currently evaluating the processing performance of a detection model on Jetson Nano + DeepStream.
The target model is “SSD Lite Mobilenet V2” (http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz).
Previously, I was able to get “SSD Mobilenet V2” working with DeepStream 4.0. (I referred to “How to use ssd_mobilenet_v2”.)
However, it did not work well with SSD Lite, so I am now trying again with DeepStream 5.0.
Is it possible to run “SSD Lite Mobilenet V2” with Jetson Nano + DeepStream at all?
If so, please tell me how.
• Hardware Platform: Jetson Nano
• DeepStream Version: 5.0
• JetPack Version: 4.4
• TensorRT Version: 7.0
We don’t officially support ssdlite_mobilenet_v2, but you can give it a try with the information shared here:
We can run ssd_mobilenet_v2 with deepstream-app successfully.
Here are our steps for your reference:
1. Compile objectDetector_SSD sample:
$ cd /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD
$ make -C nvdsinfer_custom_impl_ssd
2. Prepare ssd_mobilenet_v2 uff model:
$ wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz
$ tar zxvf ssd_mobilenet_v2_coco_2018_03_29.tar.gz
Download the attached config.py and generate the UFF model w…
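For reference, the truncated step above presumably feeds the frozen graph and the attached config.py to the TensorRT UFF converter. Below is a minimal sketch of the same conversion through the Python uff API; the file names are assumptions for illustration, not the exact ones from the original post.

# Sketch: convert the frozen TensorFlow graph to UFF, applying config.py as a
# graphsurgeon preprocessor and exporting the "NMS" plugin node as the output.
import uff

uff.from_tensorflow_frozen_model(
    "ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb",  # path assumed
    output_nodes=["NMS"],
    preprocessor="config.py",
    output_filename="ssd_mobilenet_v2.uff")

The convert_to_uff.py script shown later in this thread does the same thing from the command line.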
Thank you for your reply.
I understand that ssdlite_mobilenet_v2 is not supported.
I had already tried the information above; it worked with the normal ssd_mobilenet_v2, but not with ssdlite_mobilenet_v2.
The processing performance of ssd_mobilenet_v2 was about 16–18 FPS.
Could you please help with two points:
・How can I convert ssdlite_mobilenet_v2 to a UFF model?
・In the Jetson Nano benchmark (https://developer.nvidia.com/embedded/jetson-nano-dl-inference-benchmarks), the processing performance of ssd_mobilenet_v2 is 39 FPS. What could be the cause of this difference?
Sorry for the late update.
1. For an SSD-based model, you can convert it with this command:
sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -o [output].uff -O NMS -p /usr/src/tensorrt/samples/sampleUffSSD/config.py
2. The benchmark result you shared is based on pure TensorRT inference time.
But the FPS reported by DeepStream covers the whole pipeline (decoding, stream muxing, inference, on-screen display, and rendering), not just the inference step.
Thank you for your answer.
I was able to create a UFF file, but the following error occurred in DeepStream.
(deepstream-app:12107): GStreamer-WARNING **: 18:06:04.822: Name 'src_cap_filter' is not unique in bin 'src_sub_bin0', not adding
Using winsys: x11
Creating LL OSD context new
0:00:01.023654407 12107 0x2f79fca0 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
deepstream-app: nmsPlugin.cpp:139: virtual void nvinfer1::plugin::DetectionOutput::configureWithFormat(const nvinfer1::Dims*, int, const nvinfer1::Dims*, int, nvinfer1::DataType, nvinfer1::PluginFormat, int): Assertion `numPriors * numLocClasses * 4 == inputDims[param.inputOrder].d' failed.
Please update the class number based on your model.
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
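For context, that NMS node is defined in the attached config.py. A sketch of what the full node typically looks like follows; the parameter values are assumptions taken from common ssd_mobilenet_v2 examples, and numClasses (and possibly inputOrder) must match the model that produced the UFF file, otherwise the nmsPlugin assertion above is triggered.

import graphsurgeon as gs

# Assumed values for illustration only; adjust to your own model.
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
                            shareLocation=1,
                            varianceEncodedInTarget=0,
                            backgroundLabelId=0,
                            confidenceThreshold=1e-8,
                            nmsThreshold=0.6,
                            topK=100,
                            keepTopK=100,
                            numClasses=91,         # COCO: 90 classes + background; update for your model
                            inputOrder=[1, 0, 2],  # order of loc/conf/priorbox inputs; model dependent
                            confSigmoid=1,
                            isNormalized=1)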