Utensils Tutorial Imagenet: Failed to capture next frame

Hello ALL,

I am trying to follow this tutorial:

I get all the way to the end, where you run your model and the camera window is supposed to pop up, but it does not. The console just prints “imagenet: failed to capture next frame” in red. Here is the command I am supplying:

imagenet-camera --model=utensils/resnet18.onnx --labels=/home/lord_de_seis/datasets/utensils/labels.txt --camera=/dev/video0 --width=640 --height=480 --input_blob=input_0 --output_blob=output_0

As a test, I ran imagenet-camera with no parameters and it pops up just fine. In order to run the camera capture, I had to use this:

camera-capture nvgstcapture-1.0

because the command the tutorial lists would not work.

Any ideas? I am using the 4GB Jetson Nano.

Hi @Sleeping_Dwarf, are you using a MIPI CSI camera? If so, can you use --camera=0 or --camera=csi://0 instead?
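For example, reusing the model and label paths from your command above, that would be:

imagenet-camera --model=utensils/resnet18.onnx --labels=/home/lord_de_seis/datasets/utensils/labels.txt --camera=csi://0 --width=640 --height=480 --input_blob=input_0 --output_blob=output_0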

If you still have issues running it, please post the terminal log from the console. Thanks!

Here is what the console says:

lord_de_seis@LordDeSeis:~/jetson-inference/python/training/classification$ imagenet-camera --model=utensils/resnet18.onnx --labels=/home/lord_de_seis/datasets/utensils/labels.txt --camera=/dev/video0 --width=640 --height=480 --input_blob=input_0 --output_blob=output_0
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video0

(imagenet-camera:29495): GStreamer-CRITICAL **: 17:40:45.432: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed

(imagenet-camera:29495): GStreamer-CRITICAL **: 17:40:45.432: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed
[gstreamer] gstCamera -- didn't discover any v4l2 devices
[gstreamer] gstCamera -- device discovery failed, but /dev/video0 exists
[gstreamer] support for compressed formats is disabled
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video0 ! appsink name=mysink
[gstreamer] gstCamera successfully created device v4l2:///dev/video0
[video] created gstCamera from v4l2:///dev/video0

gstCamera video options:

-- URI: v4l2:///dev/video0
- protocol: v4l2
- location: /dev/video0
-- deviceType: v4l2
-- ioType: input
-- codec: unknown
-- width: 640
-- height: 480
-- frameRate: 30.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0

[OpenGL] glDisplay -- X screen 0 resolution: 1920x1080
[OpenGL] glDisplay -- X window resolution: 640x480
[OpenGL] glDisplay -- display device initialized (640x480)
[video] created glDisplay from display://0

glDisplay video options:

-- URI: display://0
- protocol: display
- location: 0
-- deviceType: display
-- ioType: output
-- codec: raw
-- width: 640
-- height: 480
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0

imageNet -- loading classification network model from:
-- prototxt (null)
-- model utensils/resnet18.onnx
-- class_labels /home/lord_de_seis/datasets/utensils/labels.txt
-- input_blob 'input_0'
-- output_blob 'output_0'
-- batch_size 1

[TRT] TensorRT version 7.1.3
[TRT] loading NVIDIA plugins...
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] detected model format - ONNX (extension '.onnx')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file utensils/resnet18.onnx.1.1.7103.GPU.FP16.engine
[TRT] loading network plan from engine cache... utensils/resnet18.onnx.1.1.7103.GPU.FP16.engine
[TRT] device GPU, loaded utensils/resnet18.onnx
[TRT] Deserialize required 3154182 microseconds.
[TRT]
[TRT] CUDA engine context initialized on device GPU:
[TRT] -- layers 29
[TRT] -- maxBatchSize 1
[TRT] -- workspace 0
[TRT] -- deviceMemory 29827072
[TRT] -- bindings 2
[TRT] binding 0
-- index 0
-- name 'input_0'
-- type FP32
-- in/out INPUT
-- # dims 4
-- dim #0 1 (SPATIAL)
-- dim #1 3 (SPATIAL)
-- dim #2 224 (SPATIAL)
-- dim #3 224 (SPATIAL)
[TRT] binding 1
-- index 1
-- name 'output_0'
-- type FP32
-- in/out OUTPUT
-- # dims 2
-- dim #0 1 (SPATIAL)
-- dim #1 3 (SPATIAL)
[TRT]
[TRT] binding to input 0 input_0 binding index: 0
[TRT] binding to input 0 input_0 dims (b=1 c=3 h=224 w=224) size=602112
[TRT] binding to output 0 output_0 binding index: 1
[TRT] binding to output 0 output_0 dims (b=1 c=3 h=1 w=1) size=12
[TRT]
[TRT] device GPU, utensils/resnet18.onnx initialized.
[TRT] imageNet -- loaded 3 class info entries
[TRT] imageNet -- utensils/resnet18.onnx initialized.
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstCamera -- end of stream (EOS)
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstreamer v4l2src0 ERROR Internal data stream error.
[gstreamer] gstreamer Debugging info: gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
imagenet: failed to capture next frame
imagenet: failed to capture next frame
imagenet: failed to capture next frame
imagenet: failed to capture next frame
imagenet: failed to capture next frame
imagenet: failed to capture next frame
imagenet: failed to capture next frame
imagenet: failed to capture next frame
imagenet: failed to capture next frame
imagenet: failed to capture next frame
imagenet: failed to capture next frame
imagenet: failed to capture next frame
imagenet: failed to capture next frame
imagenet: failed to capture next frame
imagenet: failed to capture next frame
^Creceived SIGINT
imagenet: failed to capture next frame
imagenet: shutting down...
[gstreamer] gstCamera -- stopping pipeline, transitioning to GST_STATE_NULL
[gstreamer] gstCamera -- pipeline stopped
imagenet: shutdown complete.
lord_de_seis@LordDeSeis:~/jetson-inference/python/training/classification$

I am using a MIPI camera and it is video0. Yes, changing it to:

:~/jetson-inference/python/training/classification$ imagenet-camera --model=utensils/resnet18.onnx --labels=/home/lord_de_seis/datasets/utensils/labels.txt --camera=0 --width=640 --height=480 --input_blob=input_0 --output_blob=output_0

FIXED IT!! THANK YOU

OK, great! Just FYI, that tutorial was written against a slightly older version of jetson-inference, and while the command lines are backwards-compatible, the updated command line looks like this:

$ imagenet --model=utensils/resnet18.onnx --labels=/home/lord_de_seis/datasets/utensils/labels.txt --input-width=640 --input-height=480 --input_blob=input_0 --output_blob=output_0 csi://0
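For instance, the same model arguments work with other stream URIs as well (the video file name, image paths, and RTP address below are just placeholders, not from your setup):

$ imagenet --model=utensils/resnet18.onnx --labels=/home/lord_de_seis/datasets/utensils/labels.txt --input_blob=input_0 --output_blob=output_0 my_video.mp4
$ imagenet --model=utensils/resnet18.onnx --labels=/home/lord_de_seis/datasets/utensils/labels.txt --input_blob=input_0 --output_blob=output_0 "images/*.jpg" classified_%i.jpg
$ imagenet --model=utensils/resnet18.onnx --labels=/home/lord_de_seis/datasets/utensils/labels.txt --input_blob=input_0 --output_blob=output_0 csi://0 rtp://<remote-ip>:1234

The first positional argument is the input stream and the optional second one is the output stream.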

This newer command format allows you to use alternative inputs and outputs, such as video files, directories of images, and RTP/RTSP streams. To find out more, you can see this document: