Imagenet-camera.cpp libraries not working on Jetson Nano

Yep, that's correct: run them from your jetson-inference/build directory.

I did that. Again, a black screen. What I do not understand is this:

I have 2 webcams. When I execute the commands:

$ sudo apt-get install v4l-utils
$ v4l2-ctl --list-devices --list-formats

I get information for one camera, but I do not get any information for the other. So that means the Nano recognizes the first camera, right? How is it, then, that I get a black screen and no picture?

Hmm. What happens if you only plug in one of the webcams at once?

Sorry, maybe I didn't explain it correctly. I plug in the first camera, run the commands, and it shows information. Then I run

./imagenet-camera googlenet

and it shows a black screen. Then I unplug it.

I plug in the second camera, run the commands, and it doesn't show any information. I don't run

./imagenet-camera googlenet

Then I unplug it.

Oh I see, thank you. What model USB cameras are you using, and which is the one that doesn’t display any info?

Also, can you post the v4l2-ctl output for the camera that does have info? Thanks.

Sorry for not responding faster…

Here is what I get:

nobody1@nobody1-desktop:~$ v4l2-ctl --list-devices --list-formats
USB Camera (10fd:0128) (usb-70090000.xusb-2.4):
	/dev/video0

ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'JPEG' (compressed)
	Name        : JFIF JPEG

nobody1@nobody1-desktop:~$ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'JPEG' (compressed)
	Name        : JFIF JPEG
		Size: Discrete 320x240
		Size: Discrete 640x480

nobody1@nobody1-desktop:~$

The camera I used for the above results is a Silvercrest, quite old (from 2008).
The other one, which doesn't show any results, I bought on eBay; it's a Chinese model.

What should I do?

Thank you

Can someone help?

Sorry for the delay. From the v4l2-ctl output you posted above, your camera only supports transmitting compressed JPEG frames. For a USB webcam to be supported, it needs to provide raw RGB or YUV formats.

I would recommend picking up a camera such as the Logitech C270 or C920, or another model that you can confirm has modes supporting uncompressed RGB or YUV video (it would seem that many do).
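
If you want to verify a given webcam, a quick way is to try capturing raw frames directly with GStreamer. This is a generic test pipeline, not specific to jetson-inference (it assumes the camera enumerates as /dev/video0 and that an X display is available for xvimagesink):

$ gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=YUY2, width=640, height=480' ! xvimagesink

If the pipeline fails to negotiate, or v4l2-ctl only lists compressed formats for the camera, it won't work with imagenet-camera either.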

Alternatively, if you are getting a new camera, you might just want to go for the Raspberry Pi Camera Module v2. It's a MIPI CSI sensor that is supported on the Nano and has better performance than a USB webcam. Note that it's the v2 module (with the IMX219 sensor) that the Nano supports, not the v1.

I run the command:

./imagenet-camera.py

and I get

mojito@mojito-desktop:~/NN/jetson-inference/build/aarch64/bin$ ./imagenet-camera.py
jetson.inference.__init__.py
jetson.inference -- initializing Python 2.7 bindings...
jetson.inference -- registering module types...
jetson.inference -- done registering module types
jetson.inference -- done Python 2.7 binding initialization
jetson.utils.__init__.py
jetson.utils -- initializing Python 2.7 bindings...
jetson.utils -- registering module functions...
jetson.utils -- done registering module functions
jetson.utils -- registering module types...
jetson.utils -- done registering module types
jetson.utils -- done Python 2.7 binding initialization
jetson.inference -- PyTensorNet_New()
jetson.inference -- PyImageNet_Init()
jetson.inference -- imageNet loading network using argv command line params
jetson.inference -- imageNet.__init__() argv[0] = './imagenet-camera.py'

imageNet -- loading classification network model from:
         -- prototxt     networks/googlenet.prototxt
         -- model        networks/bvlc_googlenet.caffemodel
         -- class_labels networks/ilsvrc12_synset_words.txt
         -- input_blob   'data'
         -- output_blob  'prob'
         -- batch_size   1

[TRT]   TensorRT version 5.1.6
[TRT]   loading NVIDIA plugins...
[TRT]   Plugin Creator registration succeeded - GridAnchor_TRT
[TRT]   Plugin Creator registration succeeded - NMS_TRT
[TRT]   Plugin Creator registration succeeded - Reorg_TRT
[TRT]   Plugin Creator registration succeeded - Region_TRT
[TRT]   Plugin Creator registration succeeded - Clip_TRT
[TRT]   Plugin Creator registration succeeded - LReLU_TRT
[TRT]   Plugin Creator registration succeeded - PriorBox_TRT
[TRT]   Plugin Creator registration succeeded - Normalize_TRT
[TRT]   Plugin Creator registration succeeded - RPROI_TRT
[TRT]   Plugin Creator registration succeeded - BatchedNMS_TRT
[TRT]   completed loading NVIDIA plugins.
[TRT]   detected model format - caffe  (extension '.caffemodel')
[TRT]   desired precision specified for GPU: FASTEST
[TRT]   requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]   native precisions detected for GPU:  FP32, FP16
[TRT]   selecting fastest native precision for GPU:  FP16
[TRT]   attempting to open engine cache file networks/bvlc_googlenet.caffemodel.1.1.GPU.FP16.engine
[TRT]   loading network profile from engine cache... networks/bvlc_googlenet.caffemodel.1.1.GPU.FP16.engine
[TRT]   device GPU, networks/bvlc_googlenet.caffemodel loaded
[TRT]   device GPU, CUDA engine context initialized with 2 bindings
[TRT]   binding -- index   0
               -- name    'data'
               -- type    FP32
               -- in/out  INPUT
               -- # dims  3
               -- dim #0  3 (CHANNEL)
               -- dim #1  224 (SPATIAL)
               -- dim #2  224 (SPATIAL)
[TRT]   binding -- index   1
               -- name    'prob'
               -- type    FP32
               -- in/out  OUTPUT
               -- # dims  3
               -- dim #0  1000 (CHANNEL)
               -- dim #1  1 (SPATIAL)
               -- dim #2  1 (SPATIAL)
[TRT]   binding to input 0 data  binding index:  0
[TRT]   binding to input 0 data  dims (b=1 c=3 h=224 w=224) size=602112
[TRT]   binding to output 0 prob  binding index:  1
[TRT]   binding to output 0 prob  dims (b=1 c=1000 h=1 w=1) size=4000
device GPU, networks/bvlc_googlenet.caffemodel initialized.
[TRT]   networks/bvlc_googlenet.caffemodel loaded
imageNet -- loaded 1000 class info entries
networks/bvlc_googlenet.caffemodel initialized.
jetson.utils -- PyFont_New()
jetson.utils -- PyFont_Init()
jetson.utils -- PyCamera_New()
jetson.utils -- PyCamera_Init()
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS, camera 0
[gstreamer] gstCamera pipeline string:
nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_NVARGUS, camera 0
jetson.utils -- PyDisplay_New()
jetson.utils -- PyDisplay_Init()
[OpenGL] glDisplay -- X screen 0 resolution:  1280x1024
[OpenGL] glDisplay -- display device initialized
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer msg new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvarguscamerasrc0
[gstreamer] gstreamer msg stream-start ==> pipeline0
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
Segmentation fault (core dumped)
mojito@mojito-desktop:~/NN/jetson-inference/build/aarch64/bin$

How do I fix this??

I use the C270 camera!

Hi,

It looks like you are using a USB camera.
If so, please run this sample with a customized camera config like this:

./imagenet-camera.py --camera=/dev/video0

Thanks.
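
For reference, the --camera argument selects the video source: a /dev/video* path selects a V4L2 USB camera via v4l2src, while the default (and, if I recall the jetson-inference docs correctly, an integer sensor index) selects a MIPI CSI camera via nvarguscamerasrc. For example:

./imagenet-camera.py                         # default: MIPI CSI sensor 0
./imagenet-camera.py --camera=/dev/video0    # V4L2 USB camera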

I get the same segmentation fault. I'm using the 32.2.3 Nano image. I was able to get the USB camera to run the imagenet-camera.py script on the first day, but it crashed like the above on the second try, and I haven't been able to see it run again. I'm using a LifeCam HD-3000 USB camera. Do I need to clean out a folder and rebuild something, or how would I debug this?

Hi doogco, what is the command line you are launching imagenet-camera.py with? Since it worked the first time with your camera, perhaps it is related to the camera's driver. What happens if you shut down your Nano, unplug and re-plug your camera, and then boot up?

What is the terminal output of these commands for your camera? Thanks.

$ sudo apt-get install v4l-utils
$ v4l2-ctl --list-devices --list-formats
$ v4l2-ctl -d /dev/video0 --list-formats-ext

My command line is: ./imagenet-camera.py --camera=/dev/video0

$ v4l2-ctl --list-devices --list-formats
Microsoft® LifeCam HD-3000 (usb-70090000.xusb-2.1):
	/dev/video0

ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'YUYV'
	Name        : YUYV 4:2:2

	Index       : 1
	Type        : Video Capture
	Pixel Format: 'MJPG' (compressed)
	Name        : Motion-JPEG

$ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'YUYV'
	Name        : YUYV 4:2:2
		Size: Discrete 640x480
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.050s (20.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
			Interval: Discrete 0.100s (10.000 fps)
			Interval: Discrete 0.133s (7.500 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.100s (10.000 fps)
			Interval: Discrete 0.133s (7.500 fps)
etc.

Hope this helps.
dg

Hi doogco, just checking back in on this issue - are you able to see video again from your webcam after rebooting, or after powering off and unplugging/re-plugging it?

Perhaps the device file for the camera changed. Check the number of devices you have with "ls /dev/video*".

Sometimes if the camera gets disconnected, it can then get assigned to the next device (i.e. /dev/video1, etc.). You would then need to supply that new device filename to the --camera argument (for example, imagenet-camera.py --camera=/dev/video1).
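
A quick way to check which node belongs to which camera after a re-plug (a small sketch using v4l2-ctl; the grep just pulls out the card-name line):

$ for d in /dev/video*; do echo "== $d"; v4l2-ctl -d "$d" --info | grep -i card; done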

Hi, unplugging/re-plugging didn't help, and rebooting doesn't either. video0 is all that is present. Is there anything to learn from the core dump? Can I run any simple experiments to learn which driver to focus on? I'm just using a 10.5W USB power supply; could I be starving something or running slow?
Thanks

I suppose that could be possible. Are you able to try plugging the camera into a powered USB hub, or using one of the power supplies listed here (preferably a 5V⎓4A DC barrel jack adapter)?

Are you able to view the video through a viewer like cheese, or does the webcam work if you plug it into another PC? Strange that it did work and then stopped…

Hi again, I tried cheese and the camera fires right up, so I don't think it's power related.
What causes segmentation faults? Is there anything in the core dump that will tell me anything? Should I reinstall anything? Or should I skip this and try a different example?
dco

Does the segfault look like this?

GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
Segmentation fault (core dumped)

If so, a MIPI CSI camera is trying to be used, but none is connected. Are you launching the program with the following flag, which will use V4L2 instead?

./imagenet-camera.py --camera=/dev/video0
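
To answer the core dump question: one generic way to dig in, assuming gdb is installed, is to run the script under the debugger and take a backtrace when it faults, which should at least show whether the crash is inside the Argus/GStreamer camera code:

$ sudo apt-get install gdb
$ gdb -ex run --args python ./imagenet-camera.py --camera=/dev/video0
(gdb) bt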

That is the segfault I see, and I am indeed using the syntax above. Post 33 above shows the result of running the v4l2-ctl command. Should I point to a different driver? Cheese works with my camera with no flags.
Should I move on to some other example program?

If you are specifying that --camera=/dev/video0 flag, you shouldn't see the GST_ARGUS segfault, because it shouldn't be using Argus with that camera flag. Regardless, the GStreamer pipeline that was used should be printed by the application higher up in the terminal output. Can you post that?
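
For comparison, a rough sketch of what gstCamera typically constructs for a V4L2 source (your exact caps may differ) would start with v4l2src rather than nvarguscamerasrc:

v4l2src device=/dev/video0 ! video/x-raw, width=(int)1280, height=(int)720, format=YUY2 ! appsink name=mysink

If the printed pipeline still begins with nvarguscamerasrc, the --camera flag isn't reaching the program; one thing to double-check is that both dashes in --camera are plain ASCII hyphens, since copy-pasting from the forum can mangle them.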