Connecting a Logitech USB webcam to the Jetson Nano

Please tell me the steps to follow. I am using Ubuntu Linux.

Hi,
You may refer to this thread: Logitech C930e on Jetson TX1 very slow and choppy video - Jetson TX1 - NVIDIA Developer Forums

Hi DaneLLL,

Thanks. From the Ubuntu terminal, I can connect to the USB camera now after installing the v4l-utils package.
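For anyone following along, here is a quick way to confirm the camera enumerated and to find its video node. The `pick_cam` helper is just a sketch (it is not part of v4l-utils), and the device names and match strings are examples:

```shell
# Sanity checks after plugging in the camera (v4l-utils installed as above):
#   lsusb | grep -i logitech        # is the camera on the USB bus?
#   v4l2-ctl --list-devices         # which /dev/video* node does it own?
#
# A small helper that picks the first /dev/video* node for a camera whose
# name matches a substring, given `v4l2-ctl --list-devices` output on stdin:
pick_cam() {
  awk -v name="$1" '
    /^[^\t]/ { in_match = (index($0, name) > 0) }        # device header line
    /^\t/ && in_match { gsub(/^\t+/, ""); print; exit }  # first node beneath it
  '
}

# On the Nano one would run, e.g.:
#   v4l2-ctl --list-devices | pick_cam C922
```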

My Jetson Nano detectnet-camera program is still not detecting the Logitech C922 USB camera.
I don't know how to set the default camera to USB instead of the CSI Raspberry Pi camera.

Can you share some working source code with me?

I am new to the NVIDIA Jetson Nano.
Please help.

Regards,
Rajesh
Thailand

Dear All

I have the same issue. My Logitech USB webcam works well with Cheese Webcam Booth, but when I run the JetBot camera module, the camera doesn't display.

Please help

Thanks,

I also have the same issue. My Logitech USB webcam works well with Cheese Webcam Booth, but when I run the camera module, the camera doesn't display. My camera is a Logitech C170.
When I run

./imagenet-camera

then I get:

[gstreamer] initialized gstreamer, version 1.14.4.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS, camera 0
[gstreamer] gstCamera pipeline string:
nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_NVARGUS, camera 0

imagenet-camera:  successfully initialized camera device
    width:  1280
   height:  720
    depth:  12 (bpp)


imageNet -- loading classification network model from:
         -- prototxt     networks/googlenet.prototxt
         -- model        networks/bvlc_googlenet.caffemodel
         -- class_labels networks/ilsvrc12_synset_words.txt
         -- input_blob   'data'
         -- output_blob  'prob'
         -- batch_size   1

[TRT]   TensorRT version 5.0.6
[TRT]   loading NVIDIA plugins...
[TRT]   completed loading NVIDIA plugins.
[TRT]   detected model format - caffe  (extension '.caffemodel')
[TRT]   desired precision specified for GPU: FASTEST
[TRT]   requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]   native precisions detected for GPU:  FP32, FP16
[TRT]   selecting fastest native precision for GPU:  FP16
[TRT]   attempting to open engine cache file networks/bvlc_googlenet.caffemodel.1.1.GPU.FP16.engine
[TRT]   loading network profile from engine cache... networks/bvlc_googlenet.caffemodel.1.1.GPU.FP16.engine
[TRT]   device GPU, networks/bvlc_googlenet.caffemodel loaded
[TRT]   device GPU, CUDA engine context initialized with 2 bindings
[TRT]   binding -- index   0
               -- name    'data'
               -- type    FP32
               -- in/out  INPUT
               -- # dims  3
               -- dim #0  3 (CHANNEL)
               -- dim #1  224 (SPATIAL)
               -- dim #2  224 (SPATIAL)
[TRT]   binding -- index   1
               -- name    'prob'
               -- type    FP32
               -- in/out  OUTPUT
               -- # dims  3
               -- dim #0  1000 (CHANNEL)
               -- dim #1  1 (SPATIAL)
               -- dim #2  1 (SPATIAL)
[TRT]   binding to input 0 data  binding index:  0
[TRT]   binding to input 0 data  dims (b=1 c=3 h=224 w=224) size=602112
[TRT]   binding to output 0 prob  binding index:  1
[TRT]   binding to output 0 prob  dims (b=1 c=1000 h=1 w=1) size=4000
device GPU, networks/bvlc_googlenet.caffemodel initialized.
[TRT]   networks/bvlc_googlenet.caffemodel loaded
imageNet -- loaded 1000 class info entries
networks/bvlc_googlenet.caffemodel initialized.
[OpenGL] glDisplay -- X screen 0 resolution:  1920x1080
[OpenGL] glDisplay -- display device initialized
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer msg new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer msg stream-start ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvarguscamerasrc0

imagenet-camera:  camera open for streaming
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
Segmentation fault (core dumped)

I am facing the same problem as member 312529167.
Any solution?

Hi Rajesh, 312529167 - see the command line parameters to imagenet-camera here:

https://github.com/dusty-nv/jetson-inference/blob/master/docs/imagenet-camera-2.md
(or run the program with --help to see usage info)

You want to specify the --camera option and provide it the V4L2 node you want to use (e.g. /dev/video0)
For example:

$ imagenet-camera --camera=/dev/video0

You can check which V4L2 /dev/video node to use with these commands:

$ sudo apt-get install v4l-utils
$ v4l2-ctl --list-formats-ext

Note that usage of the --camera flag is the same for the detectnet-camera program as well.
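As a standalone check, the USB camera can also be previewed outside jetson-inference with a plain GStreamer v4l2src pipeline. The helper below merely builds the command string; the 640x480 caps are an assumption, so pick a mode actually reported by `v4l2-ctl --list-formats-ext`:

```shell
# Build a gst-launch-1.0 preview command for a V4L2 device (a sketch; the
# caps values are placeholders, not ones jetson-inference itself generates).
usb_preview_cmd() {
  printf 'gst-launch-1.0 v4l2src device=%s ! video/x-raw, width=640, height=480 ! videoconvert ! xvimagesink' "$1"
}

# On the Nano, with a display attached:
#   eval "$(usb_preview_cmd /dev/video1)"
```

If that pipeline shows video but jetson-inference does not, the problem is in how the program was invoked rather than in the camera itself.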


Thanks, dusty_nv sir. I hope it will work.

dusty_nv sir, I still get the same result.

I used your suggested command, but the camera initialization is unchanged; it is still using the default CSI Raspberry Pi camera.

Please see the camera initialization log:
nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_NVARGUS, camera 0

For a USB camera, maybe V4L2 should be used during initialization, but NVARGUS and nvarguscamerasrc appear in the output.

My camera is a Logitech C922 and I have plugged it into the Jetson Nano's USB 3.0 port.

Can you please share working source code?

Dear Dusty_nv sir,

I got it working.

Steps:

  1. Installed a fresh copy of jetson-inference from GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
  2. Changed to the directory jetson-inference/build/aarch64/bin.
  3. Ran the command: ./imagenet-camera --camera=/dev/video1
  4. My Logitech C922 USB camera is connected to the Jetson Nano along with the Raspberry Pi camera, so I used /dev/video1 as the USB camera node (if the USB camera is the only one, /dev/video0 can be used).
  5. The Raspberry Pi camera was already working fine with all the examples.
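The node choice in steps 3-4 can be sketched as a tiny helper. The numbering rule is the one described above and is typical, not guaranteed; actual numbering depends on enumeration order, so verify with `v4l2-ctl --list-devices`:

```shell
# Pick the likely USB camera node (a sketch of the rule above).
usb_node() {
  # $1 = "yes" if a CSI camera is also connected
  if [ "$1" = "yes" ]; then echo /dev/video1; else echo /dev/video0; fi
}

# e.g. with both cameras attached:
#   ./imagenet-camera --camera="$(usb_node yes)"
```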

The detectnet-camera example also worked fine with the USB camera.

Thanks a lot.

Regards,
Rajesh
