Can't use SSD-Mobilenet-v2

Hi,

I have installed jetson-inference as follows:

git clone -b python https://github.com/dusty-nv/jetson-inference jetson-inference-python
cd jetson-inference-python
git submodule update --init
sudo apt-get install libpython-dev python-numpy libpython3-dev python3-numpy
mkdir build
cd build
cmake ../
make
sudo make install
sudo ldconfig

I have successfully tried GoogleNet:

cd aarch64/bin
python my-recognition.py --network=googlenet black_bear.jpg

I checked SSD-Mobilenet-v2 during installation.
I tried it as follows:

./detectnet-console.py peds-002.jpg output.jpg

I changed line 37 in detectnet-console.py to:

parser.add_argument("--network", type=str, default="SSD-Mobilenet-v2", help="pre-trained model to load, see below for options")

But when I try to run it, I get the following error:

Traceback (most recent call last):
  File "./detectnet-console.py", line 55, in <module>
    net = jetson.inference.detectNet(opt.network, argv, opt.threshold)
Exception: jetson.inference -- detectNet invalid built-in network was requested

Does anyone know the solution?
Thank you

I'm sorry, I was running the wrong code.
This is the code I tried instead:

import jetson.inference
import jetson.utils

import argparse
import sys

# parse the command line
parser = argparse.ArgumentParser(description="Locate objects in an image using an object detection DNN.", 
						   formatter_class=argparse.RawTextHelpFormatter, epilog=jetson.inference.detectNet.Usage())

parser.add_argument("file_in", type=str, help="filename of the input image to process")
parser.add_argument("file_out", type=str, default=None, nargs='?', help="filename of the output image to save")
parser.add_argument("--network", type=str, default="ssd-mobilenet-v2", help="pre-trained model to load (see below for options)")
parser.add_argument("--overlay", type=str, default="box,labels,conf", help="detection overlay flags (e.g. --overlay=box,labels,conf)\nvalid combinations are:  'box', 'labels', 'conf', 'none'")
parser.add_argument("--threshold", type=float, default=0.5, help="minimum detection threshold to use")

try:
	opt = parser.parse_known_args()[0]
except:
	print("")
	parser.print_help()
	sys.exit(0)

# load an image (into shared CPU/GPU memory)
img, width, height = jetson.utils.loadImageRGBA(opt.file_in)

# load the object detection network
net = jetson.inference.detectNet(opt.network, sys.argv, opt.threshold)

# detect objects in the image (with overlay)
detections = net.Detect(img, width, height, opt.overlay)

# print the detections
print("detected {:d} objects in image".format(len(detections)))

for detection in detections:
	print(detection)

# print out timing info
net.PrintProfilerTimes()

# save the output image with the bounding box overlays
if opt.file_out is not None:
	jetson.utils.saveImageRGBA(opt.file_out, img, width, height)

But I got this error:

Traceback (most recent call last):
  File "test_net.py", line 31, in <module>
    detections = net.Detect(img, width, height, opt.overlay)
Exception: jetson.inference -- detectNet.Detect() failed to parse args tuple
jetson.utils -- freeing CUDA mapped memory
PyTensorNet_Dealloc()
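As a side note, the argument-parsing pattern in that script can be checked on any machine without the Jetson libraries. This is a sketch with hypothetical test arguments (`peds-001.jpg`, `output.jpg`) that mimics the script's CLI and shows which values fall back to their defaults when only the two positional filenames are given:

```python
import argparse

# Minimal stand-in for the script's CLI (no jetson.inference / jetson.utils needed):
parser = argparse.ArgumentParser(description="detectnet-style argument parsing")
parser.add_argument("file_in", type=str, help="input image filename")
parser.add_argument("file_out", type=str, default=None, nargs='?', help="optional output filename")
parser.add_argument("--network", type=str, default="ssd-mobilenet-v2")
parser.add_argument("--overlay", type=str, default="box,labels,conf")
parser.add_argument("--threshold", type=float, default=0.5)

# parse_known_args() tolerates unrecognized flags, which the real script
# forwards to the detectNet constructor via sys.argv
opt = parser.parse_known_args(["peds-001.jpg", "output.jpg"])[0]
print(opt.network)    # -> ssd-mobilenet-v2 (default)
print(opt.overlay)    # -> box,labels,conf (default)
print(opt.threshold)  # -> 0.5 (default)
print(opt.file_out)   # -> output.jpg
```

With only the positional arguments given, `--network`, `--overlay`, and `--threshold` all take their defaults, which is relevant to the question below about which values are being passed.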

Hi,

The error indicates that one of the input arguments cannot be parsed.
May I know which arguments you are passing, or are you using the default values?

Also, could you print out the values of img, width, height, and opt.overlay to help us debug?

Thanks.
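The requested debug prints might look like the following sketch. Here `img`, `width`, and `height` are mocked with placeholder values so the snippet runs off-device; on the Jetson they would come from `jetson.utils.loadImageRGBA()` in the script:

```python
# Placeholders standing in for the return values of jetson.utils.loadImageRGBA()
# (on the device, use the real img/width/height from test_net.py instead):
img = object()
width, height = 1920, 1080
overlay = "box,labels,conf"

# Debug lines to add to test_net.py right after loading the image:
print("img type:", type(img).__name__)
print("width:", width, "height:", height)
print("overlay:", repr(overlay))
```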

Hi, AastaLLL

This is my output:

nvidia@tegra-ubuntu:~$ python test_net.py peds-001.jpg output.jpg
jetson.inference.__init__.py
jetson.inference -- initializing Python 2.7 bindings...
jetson.inference -- registering module types...
jetson.inference -- done registering module types
jetson.inference -- done Python 2.7 binding initialization
jetson.utils.__init__.py
jetson.utils -- initializing Python 2.7 bindings...
jetson.utils -- registering module functions...
jetson.utils -- done registering module functions
jetson.utils -- registering module types...
jetson.utils -- done registering module types
jetson.utils -- done Python 2.7 binding initialization
[image] loaded 'peds-001.jpg'  (1920 x 1080, 3 channels)
1920
1080
jetson.inference -- PyTensorNet_New()
jetson.inference -- PyDetectNet_Init()
jetson.inference -- detectNet loading network using argv command line params
jetson.inference -- detectNet.__init__() argv[0] = 'test_net.py'
jetson.inference -- detectNet.__init__() argv[1] = 'peds-001.jpg'
jetson.inference -- detectNet.__init__() argv[2] = 'output.jpg'

detectNet -- loading detection network model from:
          -- prototxt     networks/ped-100/deploy.prototxt
          -- model        networks/ped-100/snapshot_iter_70800.caffemodel
          -- input_blob   'data'
          -- output_cvg   'coverage'
          -- output_bbox  'bboxes'
          -- mean_pixel   0.000000
          -- mean_binary  NULL
          -- class_labels networks/ped-100/class_labels.txt
          -- threshold    0.500000
          -- batch_size   1

[TRT]   TensorRT version 4.0.2
[TRT]   detected model format - caffe  (extension '.caffemodel')
[TRT]   desired precision specified for GPU: FASTEST
[TRT]   requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]   native precisions detected for GPU:  FP32, FP16
[TRT]   selecting fastest native precision for GPU:  FP16
[TRT]   attempting to open engine cache file /usr/local/bin/networks/ped-100/snapshot_iter_70800.caffemodel.1.1.GPU.FP16.engine
[TRT]   loading network profile from engine cache... /usr/local/bin/networks/ped-100/snapshot_iter_70800.caffemodel.1.1.GPU.FP16.engine
[TRT]   device GPU, /usr/local/bin/networks/ped-100/snapshot_iter_70800.caffemodel loaded
[TRT]   device GPU, CUDA engine context initialized with 3 bindings
[TRT]   binding -- index   0
               -- name    'data'
               -- type    FP32
               -- in/out  INPUT
               -- # dims  3
               -- dim #0  3 (CHANNEL)
               -- dim #1  512 (SPATIAL)
               -- dim #2  1024 (SPATIAL)
[TRT]   binding -- index   1
               -- name    'coverage'
               -- type    FP32
               -- in/out  OUTPUT
               -- # dims  3
               -- dim #0  1 (CHANNEL)
               -- dim #1  32 (SPATIAL)
               -- dim #2  64 (SPATIAL)
[TRT]   binding -- index   2
               -- name    'bboxes'
               -- type    FP32
               -- in/out  OUTPUT
               -- # dims  3
               -- dim #0  4 (CHANNEL)
               -- dim #1  32 (SPATIAL)
               -- dim #2  64 (SPATIAL)
[TRT]   binding to input 0 data  binding index:  0
[TRT]   binding to input 0 data  dims (b=1 c=3 h=512 w=1024) size=6291456
[TRT]   binding to output 0 coverage  binding index:  1
[TRT]   binding to output 0 coverage  dims (b=1 c=1 h=32 w=64) size=8192
[TRT]   binding to output 1 bboxes  binding index:  2
[TRT]   binding to output 1 bboxes  dims (b=1 c=4 h=32 w=64) size=32768
device GPU, /usr/local/bin/networks/ped-100/snapshot_iter_70800.caffemodel initialized.
detectNet -- number object classes:   1
detectNet -- maximum bounding boxes:  2048
detectNet -- loaded 1 class info entries
detectNet -- number of object classes:  1
Traceback (most recent call last):
  File "test_net.py", line 32, in <module>
    detections = net.Detect(img, width, height, opt.overlay)
Exception: jetson.inference -- detectNet.Detect() failed to parse args tuple
jetson.utils -- freeing CUDA mapped memory
PyTensorNet_Dealloc()

And the image loads correctly:

[image] loaded 'peds-001.jpg'  (1920 x 1080, 3 channels)
1920
1080

I am using JetPack 3.2. Should I switch to JetPack 4.3?
Thank you

Hi,

Sorry for the late reply.

If you are using JetPack 3.2, you will need to check out the L4T-R28.2 branch.
However, I don't think jetson-inference supported SSD-Mobilenet at that time.

So it's recommended to reflash the device with JetPack 4.3 directly.
Thanks.