Failed to enable a V4L2 device with imagenet-camera

Hi

Does imagenet-camera work with a V4L2 device on the TX2 (R28.1)?

We tried to use a V4L2 YUV camera and got the error "failed to open camera for streaming."

We modified DEFAULT_CAMERA to 0 and set the GStreamer decoder pipeline string to the following:
v4l2src device=/dev/video0" ! video/x-raw, width=(int)1920, height=(int)1080, format=(string)I420 ! nvvidconv ! video/x-raw(memory:NVMM) ! appsink name=mysink

Thank you for any advice.

Here is the error log.
nvidia@tegra-ubuntu:~/TR/jetson-inference/build/aarch64/bin$ ./imagenet-camera
imagenet-camera
args (1): 0 [./imagenet-camera]

[gstreamer] initialized gstreamer, version 1.8.3.0
[gstreamer] gstreamer decoder pipeline string:
v4l2src device=/dev/video0" ! video/x-raw, width=(int)1920, height=(int)1080, format=(string)I420 ! nvvidconv ! video/x-raw(memory:NVMM) ! appsink name=mysink

imagenet-camera: successfully initialized video device
width: 1920
height: 1080
depth: 24 (bpp)

imageNet -- loading classification network model from:
-- prototxt networks/googlenet.prototxt
-- model networks/bvlc_googlenet.caffemodel
-- class_labels networks/ilsvrc12_synset_words.txt
-- input_blob 'data'
-- output_blob 'prob'
-- batch_size 2

[GIE] TensorRT version 2.1, build 2102
[GIE] attempting to open cache file networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE] loading network profile from cache... networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE] platform has FP16 support.
[GIE] networks/bvlc_googlenet.caffemodel loaded
[GIE] CUDA engine context initialized with 2 bindings
[GIE] networks/bvlc_googlenet.caffemodel input binding index: 0
[GIE] networks/bvlc_googlenet.caffemodel input dims (b=2 c=3 h=224 w=224) size=1204224
[cuda] cudaAllocMapped 1204224 bytes, CPU 0x102a00000 GPU 0x102a00000
[GIE] networks/bvlc_googlenet.caffemodel output 0 prob binding index: 1
[GIE] networks/bvlc_googlenet.caffemodel output 0 prob dims (b=2 c=1000 h=1 w=1) size=8000
[cuda] cudaAllocMapped 8000 bytes, CPU 0x102c00000 GPU 0x102c00000
networks/bvlc_googlenet.caffemodel initialized.
[GIE] networks/bvlc_googlenet.caffemodel loaded
imageNet -- loaded 1000 class info entries
networks/bvlc_googlenet.caffemodel initialized.
default X screen 0: 1920 x 1200
[OpenGL] glDisplay display window initialized
[OpenGL] creating 1920x1080 texture
loaded image fontmapA.png (256 x 512) 2097152 bytes
[cuda] cudaAllocMapped 2097152 bytes, CPU 0x102e00000 GPU 0x102e00000
[cuda] cudaAllocMapped 8192 bytes, CPU 0x102c02000 GPU 0x102c02000
[gstreamer] gstreamer transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer failed to set pipeline state to PLAYING (error 0)

imagenet-camera: failed to open camera for streaming

Thank you,

Hi,

Could you check whether your V4L2 camera is really mounted at /dev/video0?
Sometimes a V4L2 camera shows up at /dev/video1 instead.

Thanks.

Hi AastaLLL,

Thanks for your prompt reply.

Yes, it's at /dev/video0.
Previewing /dev/video0 with the same GStreamer pipeline string works fine.

Thank you,

Hi,

We just double-checked jetson-inference with a Logitech webcam.
We can open the camera without issue. Here are our changes:

diff --git a/imagenet-camera/imagenet-camera.cpp b/imagenet-camera/imagenet-camera.cpp
index 24e25c6..fcdf8cb 100644
--- a/imagenet-camera/imagenet-camera.cpp
+++ b/imagenet-camera/imagenet-camera.cpp
@@ -34,7 +34,7 @@
 #include "imageNet.h"
 
 
-#define DEFAULT_CAMERA -1	// -1 for onboard camera, or change to index of /dev/video V4L2 camera (>=0)	
+#define DEFAULT_CAMERA 1	// -1 for onboard camera, or change to index of /dev/video V4L2 camera (>=0)

Could you do a clean build of the project and try again?
Thanks.

Hi AastaLLL,

Thank you for your support.
It works now.

Also, does Face-Recognition support a V4L2 device?

On the same TX2, face-recognition failed to run with a V4L2 device and reported "cudaPreImageNetMean failed".

Here is the error log.
nvidia@tegra-ubuntu:~/Face-Recognition/build/aarch64/bin$ ./face-recognition
Building and running a GPU inference engine for /home/nvidia/Face-Recognition/data/deploy.prototxt, N=1…
[gstreamer] initialized gstreamer, version 1.8.3.0
[gstreamer] gstreamer decoder pipeline string:
v4l2src device=/dev/video0 ! video/x-raw, width=(int)1920, height=(int)1080, format=(string)I420 ! videoconvert ! video/x-raw, format=RGB ! videoconvert ! appsink name=mysink
successfully initialized video device
width: 1920
height: 1080
depth: 24 (bpp)

Bindings after deserializing:
Binding 0 (data): Input.
Binding 1 (coverage_fd): Output.
Binding 2 (bboxes_fd): Output.
Binding 3 (count_fd): Output.
Binding 4 (bbox_fr): Output.
Binding 5 (bbox_id): Output.
Binding 6 (softmax_fr): Output.
Binding 7 (label): Output.
loaded image /home/nvidia/Face-Recognition/data/fontmapA.png (256 x 512) 2097152 bytes
[cuda] cudaAllocMapped 2097152 bytes, CPU 0x102a00000 GPU 0x102a00000
[cuda] cudaAllocMapped 8192 bytes, CPU 0x102c00000 GPU 0x102c00000
default X screen 0: 1920 x 1200
[OpenGL] glDisplay display window initialized
[OpenGL] creating 1920x1080 texture
[gstreamer] gstreamer transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert1
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> videoconvert1
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> videoconvert0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer msg new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> videoconvert1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> videoconvert0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer msg stream-start ==> pipeline0
Allocate memory: input blob
Allocate memory: coverage
Allocate memory: box
Allocate memory: count
Allocate memory: selected bbox
Allocate memory: selected index
Allocate memory: softmax
Allocate memory: label
failed to capture frame
failed to convert from NV12 to RGBA
[cuda] cudaPreImageNetMean((float4*)imgRGBA, camera->GetWidth(), camera->GetHeight(), data, dimsData.w(), dimsData.h(), make_float3(127.0f, 127.0f, 127.0f))
[cuda] invalid device pointer (error 17) (hex 0x11)
[cuda] /home/nvidia/Face-Recognition/face-recognition/face-recognition.cpp:224
cudaPreImageNetMean failed

Thank you,

Hi,

That's right, face-recognition doesn't support the V4L2 camera.
This sample is intended to demonstrate the TensorRT plugin layer; we didn't enable V4L2 camera support for it.

Please refer to jetson-inference for the TensorRT + V4L2 camera use case.
Thanks.

Hi AastaLLL,

Thank you for your information.
Will jetson-inference include a face recognition function in a future version?

Thank you,

Hi,

We don’t have such a plan.
Sorry for the inconvenience.

Thank you.