Hello Team,
I am currently working with the NVIDIA Jetson AGX Orin Industrial SOM on a custom board and am encountering an issue when running detectnet.py with a CSI camera. I have followed the installation steps from the official jetson-inference repository.
JetPack Version: JetPack 5.1.3 (L4T R35.5.0)
$ cat /etc/nv_tegra_release
# R35 (release), REVISION: 5.0, GCID: 35550185, BOARD: t186ref, EABI: aarch64, DATE: Tue Feb 20 04:46:31 UTC 2024
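As a cross-check, I also plan to confirm the installed JetPack meta-package version (this assumes JetPack was installed via apt, which may not apply to our custom image):
$ apt-cache show nvidia-jetpack | grep -i Version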
Detected Cameras:
$ v4l2-ctl --list-devices
NVIDIA Tegra Video Input Device (platform:tegra-camrtc-ca):
/dev/media0
vi-output, ar0230 30-0043 (platform:tegra-capture-vi:0):
/dev/video0
vi-output, ar0230 30-0044 (platform:tegra-capture-vi:1):
/dev/video1
vi-output, ar0230 31-0043 (platform:tegra-capture-vi:2):
/dev/video2
vi-output, ar0230 31-0044 (platform:tegra-capture-vi:3):
/dev/video3
vi-output, ar0230 32-0043 (platform:tegra-capture-vi:4):
/dev/video4
vi-output, ar0230 32-0044 (platform:tegra-capture-vi:5):
/dev/video5
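Since the V4L2 nodes enumerate correctly, I intend to sanity-check the raw capture path (bypassing Argus/ISP) with v4l2-ctl. The resolution below is an assumption based on the ar0230 being a 1080p sensor; I have not yet confirmed the exact mode exposed by our driver:
$ v4l2-ctl -d /dev/video0 --set-fmt-video=width=1920,height=1080 --stream-mmap --stream-count=10 --stream-to=frame.raw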
Issue:
When I try to run detectnet.py with the camera:
$ python3 detectnet.py --camera=/dev/video0
I see the following logs:
/dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:759 No cameras available
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
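To isolate this from jetson-inference, I believe the same behaviour should be reproducible with a minimal nvarguscamerasrc pipeline (this assumes sensor-id 0 corresponds to /dev/video0, which I have not verified against our device tree):
$ gst-launch-1.0 -v nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1,format=NV12' ! nvvidconv ! fakesink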
Please find the full log below:
orinsys1@tegra-ubuntu:~/jetson-inference/python/examples$ python3 detectnet.py --camera=/dev/video0
[gstreamer] initialized gstreamer, version 1.16.3.0
[gstreamer] gstCamera -- attempting to create device csi://0
[gstreamer] gstCamera pipeline string:
[gstreamer] nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully created device csi://0
[video] created gstCamera from csi://0
------------------------------------------------
gstCamera video options:
------------------------------------------------
-- URI: csi://0
- protocol: csi
- location: 0
-- deviceType: csi
-- ioType: input
-- width: 1280
-- height: 720
-- frameRate: 30
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: rotate-180
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution: 2560x1440
[OpenGL] glDisplay -- X window resolution: 2560x1440
[OpenGL] glDisplay -- display device initialized (2560x1440)
[video] created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
-- URI: display://0
- protocol: display
- location: 0
-- deviceType: display
-- ioType: output
-- width: 2560
-- height: 1440
-- frameRate: 0
-- numBuffers: 4
-- zeroCopy: true
------------------------------------------------
detectNet -- loading detection network model from:
-- model networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
-- input_blob 'Input'
-- output_blob 'NMS'
-- output_count 'NMS_1'
-- class_labels networks/SSD-Mobilenet-v2/ssd_coco_labels.txt
-- threshold 0.500000
-- batch_size 1
[TRT] TensorRT version 8.5.2
[TRT] loading NVIDIA plugins...
[TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Registered plugin creator - ::BatchTilePlugin_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::CoordConvAC version 1
[TRT] Registered plugin creator - ::CropAndResizeDynamic version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DecodeBbox3DPlugin version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_Explicit_TF_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_Implicit_TF_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::GenerateDetection_TRT version 1
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[TRT] Registered plugin creator - ::GroupNorm version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 2
[TRT] Registered plugin creator - ::LayerNorm version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::MultilevelCropAndResize_TRT version 1
[TRT] Registered plugin creator - ::MultilevelProposeROI_TRT version 1
[TRT] Registered plugin creator - ::MultiscaleDeformableAttnPlugin_TRT version 1
[TRT] Registered plugin creator - ::NMSDynamic_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::PillarScatterPlugin version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::ProposalDynamic version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::ROIAlign_TRT version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::ScatterND version 1
[TRT] Registered plugin creator - ::SeqLen2Spatial version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::SplitGeLU version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::VoxelGeneratorPlugin version 1
[TRT] completed loading NVIDIA plugins.
[TRT] detected model format - UFF (extension '.uff')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] [MemUsageChange] Init CUDA: CPU +221, GPU +0, now: CPU 265, GPU 6412 (MiB)
[TRT] Trying to load shared library libnvinfer_builder_resource.so.8.5.2
[TRT] Loaded shared library libnvinfer_builder_resource.so.8.5.2
[TRT] [MemUsageChange] Init builder kernel library: CPU +302, GPU +433, now: CPU 589, GPU 6867 (MiB)
[TRT] native precisions detected for GPU: FP32, FP16, INT8
[TRT] selecting fastest native precision for GPU: FP16
[TRT] found engine cache file /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.8502.GPU.FP16.engine
[TRT] found model checksum /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.sha256sum
[TRT] echo "$(cat /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.sha256sum) /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff" | sha256sum --check --status
[TRT] model matched checksum /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.sha256sum
[TRT] loading network plan from engine cache... /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.8502.GPU.FP16.engine
[TRT] device GPU, loaded /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT] Loaded engine size: 33 MiB
[TRT] Trying to load shared library libcublas.so.11
[TRT] Loaded shared library libcublas.so.11
[TRT] Using cublas as plugin tactic source
[TRT] Trying to load shared library libcublasLt.so.11
[TRT] Loaded shared library libcublasLt.so.11
[TRT] Using cublasLt as core library tactic source
[TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +534, GPU +576, now: CPU 879, GPU 7480 (MiB)
[TRT] Trying to load shared library libcudnn.so.8
[TRT] Loaded shared library libcudnn.so.8
[TRT] Using cuDNN as plugin tactic source
[TRT] Using cuDNN as core library tactic source
[TRT] [MemUsageChange] Init cuDNN: CPU +83, GPU +96, now: CPU 962, GPU 7576 (MiB)
[TRT] Deserialization required 4150780 microseconds.
[TRT] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +32, now: CPU 0, GPU 32 (MiB)
[TRT] Trying to load shared library libcublas.so.11
[TRT] Loaded shared library libcublas.so.11
[TRT] Using cublas as plugin tactic source
[TRT] Trying to load shared library libcublasLt.so.11
[TRT] Loaded shared library libcublasLt.so.11
[TRT] Using cublasLt as core library tactic source
[TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +4, now: CPU 962, GPU 7595 (MiB)
[TRT] Trying to load shared library libcudnn.so.8
[TRT] Loaded shared library libcudnn.so.8
[TRT] Using cuDNN as plugin tactic source
[TRT] Using cuDNN as core library tactic source
[TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 962, GPU 7595 (MiB)
[TRT] Total per-runner device persistent memory is 4096
[TRT] Total per-runner host persistent memory is 155888
[TRT] Allocated activation device memory of size 7000064
[TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +7, now: CPU 0, GPU 39 (MiB)
[TRT]
[TRT] CUDA engine context initialized on device GPU:
[TRT] -- layers 108
[TRT] -- maxBatchSize 1
[TRT] -- deviceMemory 7000064
[TRT] -- bindings 3
[TRT] binding 0
-- index 0
-- name 'Input'
-- type FP32
-- in/out INPUT
-- # dims 3
-- dim #0 3
-- dim #1 300
-- dim #2 300
[TRT] binding 1
-- index 1
-- name 'NMS'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1
-- dim #1 100
-- dim #2 7
[TRT] binding 2
-- index 2
-- name 'NMS_1'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1
-- dim #1 1
-- dim #2 1
[TRT]
[TRT] binding to input 0 Input binding index: 0
[TRT] binding to input 0 Input dims (b=1 c=3 h=300 w=300) size=1080000
[TRT] binding to output 0 NMS binding index: 1
[TRT] binding to output 0 NMS dims (b=1 c=1 h=100 w=7) size=2800
[TRT] binding to output 1 NMS_1 binding index: 2
[TRT] binding to output 1 NMS_1 dims (b=1 c=1 h=1 w=1) size=4
[TRT]
[TRT] device GPU, /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff initialized.
[TRT] W = 7 H = 100 C = 1
[TRT] detectNet -- maximum bounding boxes: 100
[TRT] loaded 91 class labels
[TRT] detectNet -- number of object classes: 91
[TRT] loaded 0 class colors
[TRT] didn't load expected number of class colors (0 of 91)
[TRT] filling in remaining 91 class colors with default colors
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvarguscamerasrc0
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:759 No cameras available
[gstreamer] gstCamera -- end of stream (EOS)
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer message warning ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
[gstreamer] gstreamer pipeline0 recieved EOS signal...
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
^C^C[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
Traceback (most recent call last):
File "detectnet.py", line 64, in <module>
img = input.Capture()
KeyboardInterrupt
[gstreamer] gstCamera -- stopping pipeline, transitioning to GST_STATE_NULL
[gstreamer] gstCamera -- pipeline stopped
orinsys1@tegra-ubuntu:~/jetson-inference/python/examples$ ^C
orinsys1@tegra-ubuntu:~/jetson-inference/python/examples$
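In case it helps, these are the additional checks I plan to run next; the commands are standard L4T/V4L2 tooling, but I have not yet captured their output here:
$ systemctl status nvargus-daemon
$ sudo dmesg | grep -iE 'ar0230|tegra-camera|capture-vi|csi'
$ media-ctl -p -d /dev/media0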
Could you please help us understand the possible causes of this issue? Additionally, we would appreciate any steps or recommendations to resolve it.
Looking forward to your guidance.
Regards,
Parashuram