I have some IMX219-based cameras whose video is upside down by default. This image shows the camera without bending the cable; its video is upside down in this orientation.
This image shows how I have to bend the CSI cable in order to orient the camera so the video is right side up.
Instead of bending and ruining the CSI ribbon, I would like to use the flip options to rotate the video. These are my video input options when I'm creating the video streams with jetson-inference:
videoOptions options;
options.resource = URI("csi://0");
options.resource.protocol = "csi";
options.resource.location = "0";
options.deviceType = videoOptions::DeviceType::DEVICE_CSI;
options.ioType = videoOptions::IoType::INPUT;
options.width = 1280; // 1280 for imx219-83
options.height = 720; // 720 for imx219-83
options.frameRate = 30;
options.numBuffers = 4;
options.zeroCopy = true;
options.flipMethod = videoOptions::FlipMethod::FLIP_ROTATE_180;
input = videoSource::Create(options);
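As a quick sanity check (illustrative only, using nothing beyond the struct shown above), the flip method actually stored on the options can be printed right before Create():
// requires <cstdio>; prints the numeric value of the flip enum that was requested above
printf("requested flipMethod = %d\n", (int)options.flipMethod);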
And here are my video output options:
videoOptions options;
options.resource = base_path + file_name;
options.resource.protocol = "file";
options.resource.location = file_name;
options.deviceType = videoOptions::DeviceType::DEVICE_FILE;
options.ioType = videoOptions::IoType::OUTPUT;
options.codec = videoOptions::Codec::CODEC_H264;
options.codecType = videoOptions::CodecType::CODEC_OMX;
options.width = 1280;
options.height = 720;
options.frameRate = 30;
options.zeroCopy = true;
options.flipMethod = videoOptions::FlipMethod::FLIP_ROTATE_180;
output_vid_file = videoOutput::Create(options);
When I use the FLIP_NONE option everything works fine, but my video is upside down without bending the cable. When I use the FLIP_ROTATE_180 option, this is the output I see:
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera -- attempting to create device csi://0
[gstreamer] gstCamera pipeline string:
[gstreamer] nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=0 ! video/x-raw(memory:NVMM) ! appsink name=mysink
[gstreamer] gstCamera successfully created device csi://0
[video] created gstCamera from csi://0
------------------------------------------------
gstCamera video options:
------------------------------------------------
-- URI: csi://0
- protocol: csi
- location: 0
-- deviceType: csi
-- ioType: input
-- width: 1280
-- height: 720
-- frameRate: 30
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
------------------------------------------------
[gstreamer] gstEncoder -- pipeline launch string:
[gstreamer] appsrc name=mysource is-live=true do-timestamp=true format=3 ! omxh264enc name=encoder bitrate=4000000 ! video/x-h264 ! h264parse ! qtmux ! filesink location=../data/output_5.mp4
[video] created gstEncoder from file:///home/crose72/Documents/GitHub/OperationSquirrel/SquirrelDefender/data/../data/output_5.mp4
------------------------------------------------
gstEncoder video options:
------------------------------------------------
-- URI: file:///home/crose72/Documents/GitHub/OperationSquirrel/SquirrelDefender/data/../data/output_5.mp4
- protocol: file
- location: ../data/output_5.mp4
- extension: mp4
-- deviceType: file
-- ioType: output
-- codec: H264
-- codecType: omx
-- width: 1280
-- height: 720
-- frameRate: 30
-- bitRate: 4000000
-- numBuffers: 4
-- zeroCopy: true
------------------------------------------------
detectNet -- loading detection network model from:
-- model ../networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
-- input_blob 'Input'
-- output_blob 'NMS'
-- output_count 'NMS_1'
-- class_labels ../networks/SSD-Mobilenet-v2/ssd_coco_labels.txt
-- threshold 0.500000
-- batch_size 1
[TRT] TensorRT version 8.2.1
[TRT] loading NVIDIA plugins...
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::ScatterND version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_TFTRT_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] completed loading NVIDIA plugins.
[TRT] detected model format - UFF (extension '.uff')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] [MemUsageChange] Init CUDA: CPU +229, GPU +0, now: CPU 252, GPU 3188 (MiB)
[TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 252 MiB, GPU 3217 MiB
[TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 282 MiB, GPU 3247 MiB
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] found engine cache file ../networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.8201.GPU.FP16.engine
[TRT] found model checksum ../networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.sha256sum
[TRT] echo "$(cat ../networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.sha256sum) ../networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff" | sha256sum --check --status
[TRT] model matched checksum ../networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.sha256sum
[TRT] loading network plan from engine cache... ../networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.8201.GPU.FP16.engine
[TRT] device GPU, loaded ../networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 286, GPU 3351 (MiB)
[TRT] Loaded engine size: 33 MiB
[TRT] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
[TRT] Using cublas as a tactic source
[TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +158, GPU +144, now: CPU 463, GPU 3501 (MiB)
[TRT] Using cuDNN as a tactic source
[TRT] [MemUsageChange] Init cuDNN: CPU +241, GPU +154, now: CPU 704, GPU 3655 (MiB)
[TRT] Deserialization required 3508248 microseconds.
[TRT] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +33, now: CPU 0, GPU 33 (MiB)
[TRT] Using cublas as a tactic source
[TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 704, GPU 3655 (MiB)
[TRT] Using cuDNN as a tactic source
[TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 704, GPU 3655 (MiB)
[TRT] Total per-runner device persistent memory is 22143488
[TRT] Total per-runner host persistent memory is 135488
[TRT] Allocated activation device memory of size 13360640
[TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +34, now: CPU 0, GPU 67 (MiB)
[TRT]
[TRT] CUDA engine context initialized on device GPU:
[TRT] -- layers 124
[TRT] -- maxBatchSize 1
[TRT] -- deviceMemory 13360640
[TRT] -- bindings 3
[TRT] binding 0
-- index 0
-- name 'Input'
-- type FP32
-- in/out INPUT
-- # dims 3
-- dim #0 3
-- dim #1 300
-- dim #2 300
[TRT] binding 1
-- index 1
-- name 'NMS'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1
-- dim #1 100
-- dim #2 7
[TRT] binding 2
-- index 2
-- name 'NMS_1'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1
-- dim #1 1
-- dim #2 1
[TRT]
[TRT] binding to input 0 Input binding index: 0
[TRT] binding to input 0 Input dims (b=1 c=3 h=300 w=300) size=1080000
[TRT] binding to output 0 NMS binding index: 1
[TRT] binding to output 0 NMS dims (b=1 c=1 h=100 w=7) size=2800
[TRT] binding to output 1 NMS_1 binding index: 2
[TRT] binding to output 1 NMS_1 dims (b=1 c=1 h=1 w=1) size=4
[TRT]
[TRT] device GPU, ../networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff initialized.
[TRT] W = 7 H = 100 C = 1
[TRT] detectNet -- maximum bounding boxes: 100
[TRT] loaded 91 class labels
[TRT] detectNet -- number of object classes: 91
[TRT] loaded 0 class colors
[TRT] didn't load expected number of class colors (0 of 91)
[TRT] filling in remaining 91 class colors with default colors
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvarguscamerasrc0
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 5
Output Stream W = 1280 H = 720
seconds to Run = 0
Frame Rate = 120.000005
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstBufferManager recieve caps: video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)30/1
[gstreamer] gstBufferManager -- recieved first frame, codec=raw format=nv12 width=1280 height=720 size=1008
[gstreamer] gstBufferManager -- recieved NVMM memory
[cuda] allocated 4 ring buffers (8 bytes each, 32 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
[cuda] allocated 4 ring buffers (2764800 bytes each, 11059200 bytes total)
nvbuf_utils: dmabuf_fd 1221 mapped entry NOT found
nvbuf_utils: Can not get HW buffer from FD... Exiting...
NvBufferGetParams failed for dst_dmabuf_fd
nvbuffer_transform Failed
[tracker] added track -1 -> class=1
[cuda] allocated 2 ring buffers (1382400 bytes each, 2764800 bytes total)
[gstreamer] gstEncoder -- starting pipeline, transitioning to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> filesink0
[gstreamer] gstreamer changed state from NULL to READY ==> qtmux0
[gstreamer] gstreamer changed state from NULL to READY ==> h264parse0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter2
[gstreamer] gstreamer changed state from NULL to READY ==> encoder
[gstreamer] gstreamer changed state from NULL to READY ==> mysource
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline1
[gstreamer] gstreamer changed state from READY to PAUSED ==> qtmux0
[gstreamer] gstreamer changed state from READY to PAUSED ==> h264parse0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter2
[gstreamer] gstreamer changed state from READY to PAUSED ==> encoder
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysource
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline1
[gstreamer] gstreamer message new-clock ==> pipeline1
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> qtmux0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> h264parse0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter2
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> encoder
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysource
[gstreamer] gstEncoder -- new caps: video/x-raw, width=1280, height=720, format=(string)I420, framerate=30/1
Framerate set to : 30 at NvxVideoEncoderSetParameterNvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
H264: Profile = 66, Level = 40
NVMEDIA_ENC: bBlitMode is set to TRUE
[tracker] updated track -1 -> class=1 status=0 frames=1
[tracker] added track -1 -> class=1
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message stream-start ==> pipeline1
[gstreamer] gstreamer changed state from READY to PAUSED ==> filesink0
[gstreamer] gstreamer message async-done ==> pipeline1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> filesink0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline1
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[tracker] updated track -1 -> class=1 status=0 frames=2
[tracker] updated track 0 -> class=0 status=0 frames=0
[tracker] added track -1 -> class=1
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[tracker] updated track -1 -> class=1 status=0 frames=3
[tracker] updated track 0 -> class=0 status=0 frames=0
[tracker] updated track 0 -> class=0 status=0 frames=0
[tracker] added track -1 -> class=1
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
To point out the main issue in this log, it's these lines:
nvbuf_utils: dmabuf_fd 1221 mapped entry NOT found
nvbuf_utils: Can not get HW buffer from FD... Exiting...
NvBufferGetParams failed for dst_dmabuf_fd
nvbuffer_transform Failed
along with the timeouts between images. How do I resolve this? For the record, I've tried rotating both the input and output streams, as well as setting zeroCopy = false, but those have not worked either.
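If the built-in flip can't be made to work, the fallback I'm considering is capturing with FLIP_NONE and rotating each frame myself on the GPU between Capture() and Detect(). Here is a minimal sketch of that idea (my own illustrative code, not a jetson-utils API; it assumes uchar4 RGBA frames and a separate destination buffer, e.g. one allocated with cudaAllocMapped()):
// rotate180.cu -- minimal sketch of a manual 180-degree rotation on the GPU
// (illustrative only; adjust the pixel type if capturing uchar3 RGB instead)
#include <cuda_runtime.h>

__global__ void rotate180Kernel( const uchar4* src, uchar4* dst, int width, int height )
{
    const int x = blockIdx.x * blockDim.x + threadIdx.x;
    const int y = blockIdx.y * blockDim.y + threadIdx.y;

    if( x >= width || y >= height )
        return;

    // a 180-degree rotation maps (x, y) to (width-1-x, height-1-y)
    dst[(height - 1 - y) * width + (width - 1 - x)] = src[y * width + x];
}

cudaError_t rotate180( const uchar4* src, uchar4* dst, int width, int height )
{
    const dim3 block(16, 16);
    const dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);

    rotate180Kernel<<<grid, block>>>(src, dst, width, height);
    return cudaGetLastError();
}
I'd rather get the built-in flipMethod working than add this extra pass, but it's the workaround I have in mind if the nvbuffer_transform failure can't be resolved.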
Tagging @dusty_nv