Using a webcam on Jetson Nano running YOLO

Hi!

I am currently trying to run YOLO on my Jetson Nano, and it seems unable to open the webcam viewer. The predictions all appear in the terminal, but the webcam viewer window does not show up even though the light on the camera is on. The webcam I am using is a Logitech C270.

Another problem is that my other Jetson Nano is facing a similar issue. In this case, however, the webcam does not switch on at all and there are no predictions in the terminal, only an error message saying the video stream has stopped, which repeats continuously.

Has anyone faced the same issue and is able to help me solve this problem? I hope for a reply soon as I need it for my project.

Thank you very much! Have a great day :D

What command do you use to run YOLO?

The command I used to run YOLO is:

./darknet detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights -c 0

Can you open the USB webcam in other ways?
e.g. cheese, VLC, …

And jetcam for Jetson would be a good way.
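Another quick sanity check, independent of Darknet, is to open the webcam with a plain GStreamer pipeline from the terminal. A minimal sketch, assuming the camera shows up as /dev/video0 (adjust the device and resolution to match yours):

# Preview the USB webcam directly with GStreamer (Ctrl+C to stop)
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw, width=640, height=480 ! videoconvert ! autovideosink

If this shows a live preview, the camera and driver are fine and the problem is on the Darknet/OpenCV side.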

Hi! Yes, the webcam is working; it works when we run MobileNetV2.

Any messages from the 'darknet' command?

This is what I got after running the command, which is:

./darknet detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights


However, the webcam viewer is not appearing, but it is sensing the image, since it prints the results on the terminal screen as shown in the picture.

I think OpenCV on your Jetson Nano has some problem.
Would you test your OpenCV with the following simple Python code?

import cv2

# Load a test image; replace 'your-image.jpg' with an image that exists on disk
img = cv2.imread('your-image.jpg', cv2.IMREAD_COLOR)

# Display it in a window and wait for a key press
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

Hi, it looks like there is an error. Is there a link that tells me how to download a working OpenCV?

This error means the image is empty, that is, a wrong file name or a wrong path.
Make sure that your image file exists where you expect it, and replace 'your-image.jpg' in imread with its name.

Even though JetPack already has pre-built OpenCV, if you want to re-install OpenCV:
sudo apt install python3-opencv
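To confirm what is actually installed, one quick check is to print the OpenCV version and see whether GStreamer and CUDA support were compiled in:

# Print the OpenCV version and grep the build information for GStreamer/CUDA
python3 -c "import cv2; print(cv2.__version__)"
python3 -c "import cv2; print(cv2.getBuildInformation())" | grep -iE "gstreamer|cuda"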

Hi! OpenCV is already installed. So what else could have caused the original problem with the webcam?

There is usually no trouble with OpenCV on Jetson, but you had better test that it works well.

How about the small Python code?

Yes, I tried the small Python code that you sent. It gave an error because I didn't change the image filename. With the filename fixed, the code you gave me works.

Did you compile the Darknet correctly?
In Makefile

# Set these variables to 1:
GPU=1
CUDNN=1
OPENCV=1

# Uncomment the following line
# For Jetson TX1, Tegra X1, DRIVE CX, DRIVE PX - uncomment:
ARCH= -gencode arch=compute_53,code=[sm_53,compute_53]

# Replace NVCC path
NVCC=/usr/local/cuda/bin/nvcc
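After editing the Makefile, Darknet has to be rebuilt for the changes to take effect. A minimal sketch, run from the darknet source directory:

# Rebuild darknet after changing the GPU/CUDNN/OPENCV/ARCH settings
make clean
make -j$(nproc)

# Verify that nvcc is where the Makefile expects it
/usr/local/cuda/bin/nvcc --version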

Maybe you can try different input methods for Darknet.

./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights "v4l2src ! video/x-raw, framerate=30/1, width=640, height=360 ! videoconvert ! appsink" -ext_output -dont_show

Hi, I have followed your instructions to edit the Makefile, and I get the same video-stream stop error, as shown in the picture.

I had a similar problem with the RPi HQ camera (it's not USB, however).

This command worked for me:

./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1, format=NV12 ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=360 ! videoconvert ! video/x-raw, format=BGR ! appsink"

One more thing. I also installed OpenCV by this script:

It enables CUDA and GStreamer. Before installing it, I remember having some issues with OpenCV.

Okay, I'll try it out on Monday and get back to you that day :D Thank you very much!

For the demo with a webcam I use:

./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights /dev/video0

This would work if /dev/video0 is a UVC camera such as most USB webcams.

However, if you are using an RPi v2 camera, it is not; it is a Bayer sensor providing RG10 format.
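One way to check which case you have, assuming v4l-utils is installed (sudo apt install v4l-utils), is to list the formats the device advertises; a UVC webcam typically reports YUYV/MJPG, while the RPi v2 sensor reports RG10:

# List the pixel formats exposed by the capture device
v4l2-ctl -d /dev/video0 --list-formats-ext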

The command provided by @Edwin2087 uses a GStreamer pipeline that reads from CSI and debayers with the nvarguscamerasrc plugin, converts to BGRx while resizing in hardware, and then converts to BGR with videoconvert, as expected by the app.
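That CSI pipeline can also be tested on its own, outside of Darknet, with gst-launch. A rough sketch, assuming sensor 0 and a local X display (the sink element may need adjusting for your setup):

# Preview the CSI camera through nvarguscamerasrc (Ctrl+C to stop)
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1, format=NV12' ! nvvidconv ! xvimagesink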