DetectNet camera pipeline not working

I used the NVIDIA jetson-inference detectnet-camera example and thought that USB cameras should work out of the box, however I cannot use my camera, and the error logs don’t specify the exact problem.

My USB camera info:

nvidia@tegra-ubuntu:~/Downloads/jetson-inference/build/aarch64/bin$ v4l2-ctl -V
Format Video Capture:
	Width/Height      : 640/512
	Pixel Format      : 'RGB3'
	Field             : None
	Bytes per Line    : 1920
	Size Image        : 983040
	Colorspace        : sRGB
	Transfer Function : Default
	YCbCr Encoding    : Default
	Quantization      : Default
	Flags

Output from running jetson-inference. The GUI actually displays the first frame from the camera but then doesn’t update it anymore.

nvidia@tegra-ubuntu:~/Downloads/jetson-inference/build/aarch64/bin$ ./detectnet-camera --prototxt=$NET/deploy.prototxt --model=$NET/snapshot_iter_6900.caffemodel
detectnet-camera
  args (3):  0 [./detectnet-camera]  1 [--prototxt=/home/nvidia/Downloads/Viedsargs_caffe/deploy.prototxt]  2 [--model=/home/nvidia/Downloads/Viedsargs_caffe/snapshot_iter_6900.caffemodel]  

[gstreamer] initialized gstreamer, version 1.8.3.0
[gstreamer] gstreamer decoder pipeline string:
v4l2src device=/dev/video0 ! video/x-raw, width=(int)640, height=(int)512, format=RGB ! videoconvert ! video/x-raw, format=RGB ! videoconvert !appsink name=mysink

detectnet-camera:  successfully initialized video device
    width:  640
   height:  512
    depth:  24 (bpp)


detectNet -- loading detection network model from:
          -- prototxt    /home/nvidia/Downloads/Viedsargs_caffe/deploy.prototxt
          -- model       /home/nvidia/Downloads/Viedsargs_caffe/snapshot_iter_6900.caffemodel
          -- input_blob  'data'
          -- output_cvg  'coverage'
          -- output_bbox 'bboxes'
          -- mean_pixel  0.000000
          -- threshold   0.500000
          -- batch_size  2

[GIE]  TensorRT version 2.1.2, build 2102
[GIE]  attempting to open cache file /home/nvidia/Downloads/Viedsargs_caffe/snapshot_iter_6900.caffemodel.2.tensorcache
[GIE]  loading network profile from cache... /home/nvidia/Downloads/Viedsargs_caffe/snapshot_iter_6900.caffemodel.2.tensorcache
[GIE]  platform has FP16 support.
[GIE]  /home/nvidia/Downloads/Viedsargs_caffe/snapshot_iter_6900.caffemodel loaded
[GIE]  CUDA engine context initialized with 3 bindings
[GIE]  /home/nvidia/Downloads/Viedsargs_caffe/snapshot_iter_6900.caffemodel input  binding index:  0
[GIE]  /home/nvidia/Downloads/Viedsargs_caffe/snapshot_iter_6900.caffemodel input  dims (b=2 c=3 h=2048 w=2560) size=125829120
[cuda]  cudaAllocMapped 125829120 bytes, CPU 0x102a00000 GPU 0x102a00000
[GIE]  /home/nvidia/Downloads/Viedsargs_caffe/snapshot_iter_6900.caffemodel output 0 coverage  binding index:  1
[GIE]  /home/nvidia/Downloads/Viedsargs_caffe/snapshot_iter_6900.caffemodel output 0 coverage  dims (b=2 c=1 h=128 w=160) size=163840
[cuda]  cudaAllocMapped 163840 bytes, CPU 0x10a200000 GPU 0x10a200000
[GIE]  /home/nvidia/Downloads/Viedsargs_caffe/snapshot_iter_6900.caffemodel output 1 bboxes  binding index:  2
[GIE]  /home/nvidia/Downloads/Viedsargs_caffe/snapshot_iter_6900.caffemodel output 1 bboxes  dims (b=2 c=4 h=128 w=160) size=655360
[cuda]  cudaAllocMapped 655360 bytes, CPU 0x10a400000 GPU 0x10a400000
/home/nvidia/Downloads/Viedsargs_caffe/snapshot_iter_6900.caffemodel initialized.
[cuda]  cudaAllocMapped 16 bytes, CPU 0x10a600000 GPU 0x10a600000
maximum bounding boxes:  81920
[cuda]  cudaAllocMapped 1310720 bytes, CPU 0x10a4a0000 GPU 0x10a4a0000
[cuda]  cudaAllocMapped 327680 bytes, CPU 0x10a800000 GPU 0x10a800000
default X screen 0:   1200 x 600
[OpenGL]  glDisplay display window initialized
[OpenGL]   creating 640x512 texture
loaded image  fontmapA.png  (256 x 512)  2097152 bytes
[cuda]  cudaAllocMapped 2097152 bytes, CPU 0x10aa00000 GPU 0x10aa00000
[cuda]  cudaAllocMapped 8192 bytes, CPU 0x10ac00000 GPU 0x10ac00000
[gstreamer] gstreamer transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert1
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> videoconvert1
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> videoconvert0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer msg new-clock ==> pipeline0
[gstreamer] gstreamer msg stream-start ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> videoconvert1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> videoconvert0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstreamer decoder onPreroll
[cuda]  cudaAllocMapped 983040 bytes, CPU 0x10a850000 GPU 0x10a850000
[cuda]  cudaAllocMapped 983040 bytes, CPU 0x10ae00000 GPU 0x10ae00000
[cuda]  cudaAllocMapped 983040 bytes, CPU 0x10aef0000 GPU 0x10aef0000
[cuda]  cudaAllocMapped 983040 bytes, CPU 0x10b000000 GPU 0x10b000000
[cuda]  cudaAllocMapped 983040 bytes, CPU 0x10b0f0000 GPU 0x10b0f0000
[cuda]  cudaAllocMapped 983040 bytes, CPU 0x10b200000 GPU 0x10b200000
[cuda]  cudaAllocMapped 983040 bytes, CPU 0x10b2f0000 GPU 0x10b2f0000
[cuda]  cudaAllocMapped 983040 bytes, CPU 0x10b400000 GPU 0x10b400000
[cuda]  cudaAllocMapped 983040 bytes, CPU 0x10b4f0000 GPU 0x10b4f0000
[cuda]  cudaAllocMapped 983040 bytes, CPU 0x10b600000 GPU 0x10b600000
[cuda]  cudaAllocMapped 983040 bytes, CPU 0x10b6f0000 GPU 0x10b6f0000
[cuda]  cudaAllocMapped 983040 bytes, CPU 0x10b800000 GPU 0x10b800000
[cuda]  cudaAllocMapped 983040 bytes, CPU 0x10b8f0000 GPU 0x10b8f0000
[cuda]  cudaAllocMapped 983040 bytes, CPU 0x10ba00000 GPU 0x10ba00000
[cuda]  cudaAllocMapped 983040 bytes, CPU 0x10baf0000 GPU 0x10baf0000
[cuda]  cudaAllocMapped 983040 bytes, CPU 0x10bc00000 GPU 0x10bc00000
[cuda]   gstreamer camera -- allocated 16 ringbuffers, 983040 bytes each
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer msg async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0

detectnet-camera:  camera open for streaming
[cuda]   gstreamer camera -- allocated 16 RGBA ringbuffers
[gstreamer] gstreamer decoder onEOS
0 bounding boxes detected
[cuda]   registered 5242880 byte openGL texture for interop access (640x512)

detectnet-camera:  failed to capture frame
detectnet-camera:  failed to convert from NV12 to RGBA
detectNet::Detect( 0x(nil), 640, 512 ) -> invalid parameters
[cuda]   cudaNormalizeRGBA((float4*)imgRGBA, make_float2(0.0f, 255.0f), (float4*)imgRGBA, make_float2(0.0f, 1.0f), camera->GetWidth(), camera->GetHeight())
[cuda]      invalid device pointer (error 17) (hex 0x11)
[cuda]      /home/nvidia/Downloads/jetson-inference/detectnet-camera/detectnet-camera.cpp:247

Could it be an issue with the gstCamera->Capture() function?
I’m not a C++ programmer, so I don’t really know how to debug or fix it. The problem I found was that the “wait” event returned false.
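
For context, the main loop in detectnet-camera.cpp seems to do roughly the following (paraphrased from my copy of the source, so details may differ), which I think explains why one failed capture cascades into the NV12/RGBA and Detect errors above:

	void* imgCPU  = NULL;
	void* imgCUDA = NULL;
	
	// grab the latest frame; on failure this only prints a warning and keeps going
	if( !camera->Capture(&imgCPU, &imgCUDA, 1000) )
		printf("\ndetectnet-camera:  failed to capture frame\n");
	
	// convert to RGBA; with imgCUDA still NULL this fails as well
	void* imgRGBA = NULL;
	
	if( !camera->ConvertRGBA(imgCUDA, &imgRGBA) )
		printf("detectnet-camera:  failed to convert from NV12 to RGBA\n");
	
	// detectNet::Detect() then runs on a NULL image pointer, which is where the
	// "invalid parameters" and cudaNormalizeRGBA errors in the log come from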

Hi eduards.slava, from this line of the console log, it appears that GStreamer has a problem capturing the frame from your USB camera:

detectnet-camera:  failed to capture frame

Are you able to stream the camera reliably in other applications, like cheese for example?

With this command I’m able to get a stream:

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw, width=640, height=512, format=RGB ! videoconvert ! video/x-raw, format=RGB ! videoconvert ! xvimagesink

I also tested commenting out “return false” in gstCamera.cpp, and the GUI then displayed a frame from the camera. But it was just one static image that was never updated afterwards.

Maybe this is important: the camera is a thermal camera, and to expose it as a v4l2 device I run a script in parallel. This ensures I get a normal stream. For some reason “cheese” doesn’t find the device, but vlc and gstreamer do see /dev/video0.

// Capture -- with the two early "return false" exits commented out for testing
bool gstCamera::Capture( void** cpu, void** cuda, unsigned long timeout )
{
	// wait (up to timeout) for the appsink callback to signal a new frame
	mWaitMutex->lock();
	const bool wait_result = mWaitEvent->wait(mWaitMutex, timeout);
	mWaitMutex->unlock();
	
	//if( !wait_result )
	//	return false;
	
	// index of the most recently filled ringbuffer
	mRingMutex->lock();
	const uint32_t latest = mLatestRingbuffer;
	const bool retrieved = mLatestRetrieved;
	mLatestRetrieved = true;
	mRingMutex->unlock();
	
	// skip if it was already retrieved
	//if( retrieved )
	//	return false;
	
	// hand back the CPU/GPU mappings of that ringbuffer
	if( cpu != NULL )
		*cpu = mRingbufferCPU[latest];
	
	if( cuda != NULL )
		*cuda = mRingbufferGPU[latest];
	
	return true;
}
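
As a further debugging step, here is a sketch of the same function with logging in place of the early returns (just an idea, reusing the member names from the function above and plain printf), so it reports why it would have returned false instead of silently reusing the old buffer:

bool gstCamera::Capture( void** cpu, void** cuda, unsigned long timeout )
{
	mWaitMutex->lock();
	const bool wait_result = mWaitEvent->wait(mWaitMutex, timeout);
	mWaitMutex->unlock();
	
	// did we actually get woken up by a new frame, or just time out?
	if( !wait_result )
		printf("gstCamera::Capture() -- timed out after %lu ms waiting for a new frame\n", timeout);
	
	mRingMutex->lock();
	const uint32_t latest = mLatestRingbuffer;
	const bool retrieved = mLatestRetrieved;
	mLatestRetrieved = true;
	mRingMutex->unlock();
	
	// true means this ringbuffer was already handed out once before
	if( retrieved )
		printf("gstCamera::Capture() -- ringbuffer %u already retrieved, no new frame since last capture\n", latest);
	
	if( cpu != NULL )
		*cpu = mRingbufferCPU[latest];
	
	if( cuda != NULL )
		*cuda = mRingbufferGPU[latest];
	
	return true;
}

If the timeout message then shows up for every frame after the first one, the appsink simply never delivers a second buffer, which would line up with the “gstreamer decoder onEOS” message in my log above.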