Does imagenet-camera work with a V4L2 device on the TX2 (R28.1)?
We tried to use a V4L2 YUV camera and got the error "failed to open camera for streaming."
We changed DEFAULT_CAMERA to 0 and set the GStreamer decoder pipeline string to the following:
v4l2src device=/dev/video0 ! video/x-raw, width=(int)1920, height=(int)1080, format=(string)I420 ! nvvidconv ! video/x-raw(memory:NVMM) ! appsink name=mysink
Thank you for any advice.
Here is the error log.
nvidia@tegra-ubuntu:~/TR/jetson-inference/build/aarch64/bin$ ./imagenet-camera
imagenet-camera
args (1): 0 [./imagenet-camera]
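Before wiring the pipeline into imagenet-camera, it can help to verify the same caps standalone. A minimal sketch, assuming the device path and caps from the post above (`gst-launch-1.0` and `v4l2-ctl` come from the gstreamer1.0-tools and v4l-utils packages, respectively):

```shell
# The exact pipeline string handed to the app. Note there is no stray
# quote after the device path -- an unbalanced quote makes gst_parse_launch
# fail before the camera is ever opened.
PIPELINE='v4l2src device=/dev/video0 ! video/x-raw, width=(int)1920, height=(int)1080, format=(string)I420 ! nvvidconv ! video/x-raw(memory:NVMM) ! appsink name=mysink'
echo "$PIPELINE"

# On the TX2 itself, check which formats the driver actually advertises;
# a common cause of "failed to open camera for streaming" is requesting a
# width/height/format combination the device does not offer:
#   v4l2-ctl -d /dev/video0 --list-formats-ext
#
# Then test the source half of the pipeline without the app:
#   gst-launch-1.0 -v v4l2src device=/dev/video0 ! \
#     'video/x-raw, width=1920, height=1080, format=I420' ! fakesink
```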
We just double-checked jetson_inference with a Logitech webcam.
We can open the camera without issue. Here are our changes:
diff --git a/imagenet-camera/imagenet-camera.cpp b/imagenet-camera/imagenet-camera.cpp
index 24e25c6..fcdf8cb 100644
--- a/imagenet-camera/imagenet-camera.cpp
+++ b/imagenet-camera/imagenet-camera.cpp
@@ -34,7 +34,7 @@
#include "imageNet.h"
-#define DEFAULT_CAMERA -1 // -1 for onboard camera, or change to index of /dev/video V4L2 camera (>=0)
+#define DEFAULT_CAMERA 1 // -1 for onboard camera, or change to index of /dev/video V4L2 camera (>=0)
Could you do a clean build of the project and try again?
Thanks.
Also, does Face-Recognition support a V4L2 device?
On the same TX2, face-recognition failed to run with a V4L2 device and showed "cudaPreImageNetMean failed".
Here is the error log.
nvidia@tegra-ubuntu:~/Face-Recognition/build/aarch64/bin$ ./face-recognition
Building and running a GPU inference engine for /home/nvidia/Face-Recognition/data/deploy.prototxt, N=1…
[gstreamer] initialized gstreamer, version 1.8.3.0
[gstreamer] gstreamer decoder pipeline string:
v4l2src device=/dev/video0 ! video/x-raw, width=(int)1920, height=(int)1080, format=(string)I420 ! videoconvert ! video/x-raw, format=RGB ! videoconvert ! appsink name=mysink
successfully initialized video device
width: 1920
height: 1080
depth: 24 (bpp)
Bindings after deserializing:
Binding 0 (data): Input.
Binding 1 (coverage_fd): Output.
Binding 2 (bboxes_fd): Output.
Binding 3 (count_fd): Output.
Binding 4 (bbox_fr): Output.
Binding 5 (bbox_id): Output.
Binding 6 (softmax_fr): Output.
Binding 7 (label): Output.
loaded image /home/nvidia/Face-Recognition/data/fontmapA.png (256 x 512) 2097152 bytes
[cuda] cudaAllocMapped 2097152 bytes, CPU 0x102a00000 GPU 0x102a00000
[cuda] cudaAllocMapped 8192 bytes, CPU 0x102c00000 GPU 0x102c00000
default X screen 0: 1920 x 1200
[OpenGL] glDisplay display window initialized
[OpenGL] creating 1920x1080 texture
[gstreamer] gstreamer transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert1
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> videoconvert1
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> videoconvert0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer msg new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> videoconvert1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> videoconvert0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer msg stream-start ==> pipeline0
Allocate memory: input blob
Allocate memory: coverage
Allocate memory: box
Allocate memory: count
Allocate memory: selected bbox
Allocate memory: selected index
Allocate memory: softmax
Allocate memory: label
failed to capture frame
failed to convert from NV12 to RGBA
[cuda] cudaPreImageNetMean((float4*)imgRGBA, camera->GetWidth(), camera->GetHeight(), data, dimsData.w(), dimsData.h(), make_float3(127.0f, 127.0f, 127.0f))
[cuda] invalid device pointer (error 17) (hex 0x11)
[cuda] /home/nvidia/Face-Recognition/face-recognition/face-recognition.cpp:224
cudaPreImageNetMean failed
That's expected: Face-Recognition doesn't support V4L2 cameras.
This sample is intended to demonstrate TensorRT plugin layers; we don't enable V4L2 camera support in it.
Please refer to jetson_inference for the TensorRT + V4L2 camera use case.
Thanks.