This is my first time posting a question here. Please understand that I am not good at English.
I am currently evaluating the inference performance of a detection model on Jetson Nano + DeepStream.
The target model is "SSD Lite Mobilenet V2". (http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz)
Previously, I was able to get "SSD Mobilenet V2" working with DeepStream 4.0. (I referred to "How to use ssd_mobilenet_v2 - #3 by AastaLLL".)
However, the same steps did not work with SSD Lite, so I am now trying again with DeepStream 5.0.
Is it possible to run "SSD Lite Mobilenet V2" on Jetson Nano + DeepStream in the first place?
If so, please tell me how.
Thank you for your reply.
I understand that ssdlite_mobilenet_v2 is not supported.
I tried the information I received before; it worked with the normal ssd_mobilenet_v2 but not with ssdlite_mobilenet_v2.
The processing performance of ssd_mobilenet_v2 was about 16-18 FPS.
Please tell me two points.
・How to convert ssdlite_mobilenet_v2 to UFF model
・In the Jetson Nano benchmarks (Jetson Benchmarks | NVIDIA Developer), ssd_mobilenet_v2 is listed at 39 FPS. What could be the cause of this difference?
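For reference, the usual route to a UFF model for TensorRT's SSD sample is NVIDIA's `convert-to-uff` tool together with a graphsurgeon preprocessing config. A minimal sketch follows; the node names, anchor settings, and `inputOrder` below are assumptions based on the stock TensorFlow object-detection export of ssd_mobilenet_v2 and will likely need adjusting for the SSDLite graph:

```python
# config.py -- preprocessing config for convert-to-uff (sketch).
# All node names and parameters below assume the standard 300x300
# ssd_mobilenet_v2 export; SSDLite may use different node names.
import graphsurgeon as gs
import tensorflow as tf

Input = gs.create_plugin_node(name="Input", op="Placeholder",
                              dtype=tf.float32, shape=[1, 3, 300, 300])

PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
    minSize=0.2, maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1, 0.1, 0.2, 0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1],
    numLayers=6)

NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
    shareLocation=1, varianceEncodedInTarget=0,
    backgroundLabelId=0, confidenceThreshold=1e-8,
    nmsThreshold=0.6, topK=100, keepTopK=100, numClasses=91,
    inputOrder=[0, 2, 1],  # order of (loc, conf, priorbox); may need permuting
    confSigmoid=1, isNormalized=1)

concat_priorbox = gs.create_node("concat_priorbox", op="ConcatV2",
                                 dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat_TRT",
                                       dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf", op="FlattenConcat_TRT",
                                        dtype=tf.float32, axis=1, ignoreBatch=0)

# Map whole TF namespaces onto the single plugin nodes above.
namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "image_tensor": Input,
    "Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}

def preprocess(dynamic_graph):
    # Collapse the mapped namespaces and drop graph outputs that
    # TensorRT's UFF parser cannot handle.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    dynamic_graph.remove(dynamic_graph.graph_outputs,
                         remove_exclusive_dependencies=False)
```

It would then be invoked as `convert-to-uff frozen_inference_graph.pb -O NMS -p config.py`. Regarding the FPS gap: published Jetson benchmarks are typically run at maximum clocks (nvpmodel MAXN plus jetson_clocks) with an optimized engine, so a default-clocked pipeline measuring end-to-end throughput can legitimately be slower.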
Thank you for your answer.
I was able to create a UFF file, but the following error occurred in DeepStream.
(deepstream-app:12107): GStreamer-WARNING **: 18:06:04.822: Name 'src_cap_filter' is not unique in bin 'src_sub_bin0', not adding
Using winsys: x11
Creating LL OSD context new
0:00:01.023654407 12107 0x2f79fca0 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
deepstream-app: nmsPlugin.cpp:139: virtual void nvinfer1::plugin::DetectionOutput::configureWithFormat(const nvinfer1::Dims*, int, const nvinfer1::Dims*, int, nvinfer1::DataType, nvinfer1::PluginFormat, int): Assertion `numPriors * numLocClasses * 4 == inputDims[param.inputOrder[0]].d[0]' failed.
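This assertion compares the prior-box count implied by the anchor settings against the first dimension of whatever tensor the NMS plugin receives as its box-location input, so a failure often means the conversion config's `inputOrder` or anchor parameters do not match the SSDLite graph. A quick sanity check of the expected numbers (the anchor layout here is an assumption based on the standard 300x300 SSD export, not confirmed for SSDLite):

```python
# The failing check is: numPriors * numLocClasses * 4 == loc_dims[0].
# With shareLocation=1, numLocClasses == 1, so the flattened box-location
# input must have exactly numPriors * 4 elements in its first dimension.

feature_map_shapes = [19, 10, 5, 3, 2, 1]  # assumed, from the 300x300 SSD head
boxes_per_cell     = [3,  6,  6, 6, 6, 6]  # assumed anchors per cell, per layer

num_priors = sum(s * s * b for s, b in zip(feature_map_shapes, boxes_per_cell))
print(num_priors)      # expected prior count -> 1917 for these settings
print(num_priors * 4)  # expected loc input size -> 7668
```

If the loc dimension reported by TensorRT does not match this product, the plugin is most likely reading the conf or priorbox tensor in the loc slot, and permuting `inputOrder` in the conversion config is worth trying.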