Hi!
Please take a look at my questions and try to give me some hints. I know it's a lot of questions, but maybe you can help me with a few?
I think these questions and answers are something other people might find useful as well.
For the past 8 months I have been developing a solution to detect defects on white wooden details (blanks). I have learned a lot about machine vision, about Python, and about Dusty's jetson-inference, but some questions have stayed on my mind - maybe someone can give me some hints, please.
- I'd like to run Jetson Inference at 1920x1080 and 60 FPS with 4 cameras, but I'm getting this notice:
[gstreamer] gstBufferManager -- map buffer size was less than max size (1382400 vs 1382407)
[gstreamer] gstBufferManager recieve caps: video/x-raw, width=(int)1280, height=(int)720, framerate=(fraction)30/1, format=(string)NV12
[gstreamer] gstBufferManager -- recieved first frame, codec=raw format=nv12 width=1280 height=720 size=1382407
Any ideas how to solve this issue? With 1280x720 at 30 FPS and 4 cameras it works.
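For reference, the 1382400 in that warning is exactly the size of one NV12 frame at 1280x720 (1.5 bytes per pixel), so the received buffer just carries 7 extra bytes - presumably padding or metadata. A quick sanity check of the arithmetic (plain Python, nothing Jetson-specific):

```python
def nv12_frame_size(width, height):
    """NV12 stores a full-resolution Y plane plus a half-resolution
    interleaved UV plane, i.e. 1.5 bytes per pixel in total."""
    y_plane = width * height          # 8-bit luma, one byte per pixel
    uv_plane = (width * height) // 2  # 4:2:0 chroma, half the luma size
    return y_plane + uv_plane

expected = nv12_frame_size(1280, 720)
received = 1382407  # size reported by gstBufferManager
print(expected, received - expected)  # → 1382400 7
```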
-
If I open the Jetson Power GUI while running detectnet.py (from dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson, on github.com), I don't see engines like "dla0", "dla1", etc. being used - all of them appear offline. What does that mean? Are we using all the NX's resources or not? (Or do these stats not show the right information?)
-
In my script I only run detection - no saving, no processing, nothing else - yet GPU usage jumps to 80% from time to time. It's not a problem, but I still wonder why:
#let's configure the AI network
net = jetson_inference.detectNet(argv=['--model=/home/visioline/install/jetson-inference/python/training/detection/ssd/models/jw3/ssd-mobilenet.onnx',
                                       '--labels=/home/visioline/install/jetson-inference/python/training/detection/ssd/models/jw3/labels.txt',
                                       '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes',
                                       '--confidence=0.7', '--input-width=1980', '--input-height=1080', '--input-rate=60'])
#let's configure the cameras
camera1 = jetson_utils.videoSource("csi://0")  #select camera 1 - capture a frame and return a cudaImage
camera2 = jetson_utils.videoSource("csi://4")  #select camera 2 - capture a frame and return a cudaImage
camera3 = jetson_utils.videoSource("csi://2")  #select camera 3 - capture a frame and return a cudaImage
camera4 = jetson_utils.videoSource("csi://1")  #select camera 4 - capture a frame and return a cudaImage
while config.run == 1:
    dsid += 1  #detection-series ID - items sharing a DSID were found on the same image set (one frame per camera)
    start_time = time.time()  #time now
    now = datetime.now()
    current_time = now.strftime("%H:%M:%S")
    try:
        img1 = camera1.Capture('rgba32f')   #capture an image from camera 1
        bimg1 = camera1.Capture('rgba32f')  #capture a second image from camera 1 for saving to file later
    except:
        print("Camera 1 capture error")
    if config.detector == 1:
        dcounter = 0  #reset the detection counter
        detections1 = net.Detect(img1, overlay="box,labels,conf")  #overlay controls how the defect is annotated on the final image
        for detection1 in detections1:
            dcounter += 1  #add one to the counter
            mvcam = 1  #camera ID 1
            dheight = int(detection1.Top)
            dright = int(detection1.Right)
            dleft = int(detection1.Left)
            dbottom = int(detection1.Bottom)
            dclassid = int(detection1.ClassID)
            class_name = net.GetClassDesc(dclassid)
            dconfidence = round(detection1.Confidence, 0)  #integers are enough - no need to be too precise
            if dclassid != 99 or emulate_empty != 1:  #save to the array if it's not an empty sighting
                filename = f'{image_folder}/{mvid}-1-{did}-{dsid}-{dclassid}.jpg'
                filename_to_db = f'/data/defect/{today}/{mvid}-1-{did}-{dsid}-{dclassid}.jpg'
                detection_array.append([mvid, mvcam, dsid, did, dcounter, dclassid, dconfidence, dleft, dheight, dright, dbottom, 0, 0, filename_to_db, "", img1, filename])  #add the detection to the array
                array_members = len(detection_array)  #how many members are in the array
                if array_members > 500:  #clear if the array grows too big - not sure 500 is the right limit
                    detection_array.clear()
            if dclassid == 99 or emulate_empty == 1:
                empty += 1
                emptycounter += 1  #count how many empties we have found
            if saveimage == 1:  #if we found something, save the picture too
                try:
                    Mv_MakeFolders()
                    cudaDeviceSynchronize()
                    saveImageRGBA(filename, img1, 1280, 720)
                    saveimage = 0
                except:
                    print(current_time + ": error saving image 1")
                    continue
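To narrow down where those GPU spikes come from, it could help to time each stage of the loop separately. A minimal sketch of a timing helper (the stage names and the wrapped calls in the usage comment are just my examples - wrap your own Capture/Detect/save calls):

```python
import time
from collections import defaultdict

class StageTimer:
    """Accumulates wall-clock time per named stage of a processing loop."""
    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def measure(self, name, func, *args, **kwargs):
        # run one stage, record how long it took, pass its result through
        start = time.perf_counter()
        result = func(*args, **kwargs)
        self.totals[name] += time.perf_counter() - start
        self.counts[name] += 1
        return result

    def report(self):
        # average seconds per call for each stage
        return {name: self.totals[name] / self.counts[name]
                for name in self.totals}

# usage sketch (inside the while loop):
# timer = StageTimer()
# img1 = timer.measure("capture1", camera1.Capture, 'rgba32f')
# detections1 = timer.measure("detect1", net.Detect, img1)
# print(timer.report())
```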
-
If I build with "cmake -DENABLE_NVMM=off ../", what disadvantages does that give me? As far as I can tell, I don't do anything besides detecting.
-
I use an ONNX model. Are there any other scripts/solutions I could use for detection in Python? Basically I only want to detect frame by frame and get the coordinates, confidence % and class ID as output.
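If you ever want to drive the model yourself (e.g. with onnxruntime) instead of through detectNet, the raw outputs of this model are - per the log at the end of this post - `scores` of shape (1, 8190, 15) and `boxes` of shape (1, 8190, 4). A hedged NumPy sketch of turning those into (class_id, confidence, box) tuples; the thresholding logic is my own, and real use would also need NMS:

```python
import numpy as np

def postprocess(scores, boxes, conf_threshold=0.7):
    """scores: (1, N, num_classes) per-anchor class scores, class 0 = background.
    boxes: (1, N, 4) box coordinates. Keeps each anchor whose best
    non-background score clears the threshold (no NMS here)."""
    scores, boxes = scores[0], boxes[0]
    class_ids = scores[:, 1:].argmax(axis=1) + 1   # skip the background column
    confidences = scores[np.arange(len(scores)), class_ids]
    keep = confidences >= conf_threshold
    return [(int(c), float(p), b.tolist())
            for c, p, b in zip(class_ids[keep], confidences[keep], boxes[keep])]

# toy example with 3 anchors and 15 classes:
scores = np.zeros((1, 3, 15)); boxes = np.zeros((1, 3, 4))
scores[0, 0, 0] = 0.9   # anchor 0: background -> dropped
scores[0, 1, 5] = 0.8   # anchor 1: class 5 at 0.8 -> kept
scores[0, 2, 3] = 0.4   # anchor 2: below threshold -> dropped
print(postprocess(scores, boxes))  # → [(5, 0.8, [0.0, 0.0, 0.0, 0.0])]
```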
-
Is there a good solution for testing camera parameters on Jetson with CSI cameras - brightness, saturation, resolution, etc. - with a realtime preview? It could also be fun to play with an application where I can switch Jetson VPI functions like "erode" and "dilate" on and off, for example those from VPI - Vision Programming Interface: Algorithms (nvidia.com).
-
At the moment I initialize one detection network and, in the while loop, feed it one frame from each of the 4 cameras. That way I don't use too much of the hardware resources. The question is: if I detect defects from different viewing angles with the same network, does that give any disadvantages? In other words, does detectNet somehow use previous detections to detect better (does it learn in realtime)?
-
Let's imagine we have 4 or even 6 FullHD 60 FPS streams from the cameras - how do we detect objects in them without overloading the hardware? Any hints on how to speed up the process? I use 512x512 detection because some of the defects that need to be detected are quite small.
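One common way to keep small defects detectable on FullHD frames without a bigger network is to tile each frame into 512x512 crops and run detection per tile, at the cost of more inference calls. A sketch of just the tile-coordinate computation (the overlap value is my assumption, so defects on tile borders aren't cut in half):

```python
def tile_coords(width, height, tile=512, overlap=64):
    """Return (x, y, w, h) crop windows covering the frame, with the
    given overlap between neighbouring tiles."""
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    # make sure the right and bottom edges are covered
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y, tile, tile) for y in ys for x in xs]

tiles = tile_coords(1920, 1080)
print(len(tiles))  # → 15
```

Per-tile detections would then be shifted back by each tile's (x, y) offset and merged, e.g. with NMS across tiles.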
-
Can you suggest how to use NVENC0 and NVENC1 in Python to compose a realtime video stream? (Let's assume I have a frame: img1 = camera1.Capture('rgba32f').) Any examples?
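As far as I know, jetson-inference doesn't let you pick NVENC0 vs NVENC1 explicitly (the driver schedules the encoder units), but hardware encoding itself is reached through `jetson_utils.videoOutput`: rendering frames into a file or network sink goes through NVENC. A minimal sketch, assuming the same `camera1` as above - untested here, since it needs a Jetson:

```python
import jetson_utils

camera1 = jetson_utils.videoSource("csi://0")
# file sink; the frames are encoded by the hardware encoder
output = jetson_utils.videoOutput("defects.mp4", argv=['--bitrate=4000000'])

while output.IsStreaming():
    img1 = camera1.Capture('rgba32f')
    output.Render(img1)  # convert, encode, and write the frame to the sink
```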
-
Does SSD v1, or its training (I use the training that comes with jetson-inference), do some kind of augmentation? For example, does it use histogram adjustments, tilts, rotations, or any other augmentation during training and during detection?
-
Do you know of a carrier board for Jetson Orin or AGX that has 4 or 6 CSI connectors?
And here is the additional log:
detectNet -- loading detection network model from:
-- prototxt NULL
-- model /home/visioline/install/jetson-inference/python/training/detection/ssd/models/jw3/ssd-mobilenet.onnx
-- input_blob 'input_0'
-- output_cvg 'scores'
-- output_bbox 'boxes'
-- mean_pixel 0.000000
-- class_labels /home/visioline/install/jetson-inference/python/training/detection/ssd/models/jw3/labels.txt
-- class_colors NULL
-- threshold 0.700000
-- batch_size 1
[TRT] TensorRT version 8.4.1
[TRT] loading NVIDIA plugins...
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::ScatterND version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TRT] Registered plugin creator - ::BatchTilePlugin_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::CropAndResizeDynamic version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_Explicit_TF_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_Implicit_TF_TRT version 1
[TRT] Registered plugin creator - ::ProposalDynamic version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 2
[TRT] Registered plugin creator - ::CoordConvAC version 1
[TRT] Registered plugin creator - ::DecodeBbox3DPlugin version 1
[TRT] Registered plugin creator - ::GenerateDetection_TRT version 1
[TRT] Registered plugin creator - ::MultilevelCropAndResize_TRT version 1
[TRT] Registered plugin creator - ::MultilevelProposeROI_TRT version 1
[TRT] Registered plugin creator - ::NMSDynamic_TRT version 1
[TRT] Registered plugin creator - ::PillarScatterPlugin version 1
[TRT] Registered plugin creator - ::VoxelGeneratorPlugin version 1
[TRT] Registered plugin creator - ::MultiscaleDeformableAttnPlugin_TRT version 1
[TRT] detected model format - ONNX (extension '.onnx')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] [MemUsageChange] Init CUDA: CPU +181, GPU +0, now: CPU 320, GPU 4212 (MiB)
[TRT] [MemUsageChange] Init builder kernel library: CPU +131, GPU +124, now: CPU 470, GPU 4352 (MiB)
[TRT] native precisions detected for GPU: FP32, FP16, INT8
[TRT] selecting fastest native precision for GPU: FP16
[TRT] found engine cache file /home/visioline/install/jetson-inference/python/training/detection/ssd/models/jw3/ssd-mobilenet.onnx.1.1.8401.GPU.FP16.engine
[TRT] found model checksum /home/visioline/install/jetson-inference/python/training/detection/ssd/models/jw3/ssd-mobilenet.onnx.sha256sum
[TRT] echo "$(cat /home/visioline/install/jetson-inference/python/training/detection/ssd/models/jw3/ssd-mobilenet.onnx.sha256sum) /home/visioline/install/jetson-inference/python/training/detection/ssd/models/jw3/ssd-mobilenet.onnx" | sha256sum --check --status
[TRT] model matched checksum /home/visioline/install/jetson-inference/python/training/detection/ssd/models/jw3/ssd-mobilenet.onnx.sha256sum
[TRT] loading network plan from engine cache... /home/visioline/install/jetson-inference/python/training/detection/ssd/models/jw3/ssd-mobilenet.onnx.1.1.8401.GPU.FP16.engine
[TRT] device GPU, loaded /home/visioline/install/jetson-inference/python/training/detection/ssd/models/jw3/ssd-mobilenet.onnx
[TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 358, GPU 4368 (MiB)
[TRT] Loaded engine size: 17 MiB
[TRT] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
[TRT] Deserialization required 24516 microseconds.
[TRT] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +16, now: CPU 0, GPU 16 (MiB)
[TRT] Total per-runner device persistent memory is 0
[TRT] Total per-runner host persistent memory is 76480
[TRT] Allocated activation device memory of size 13266944
[TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +13, now: CPU 0, GPU 29 (MiB)
[TRT] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[TRT]
[TRT] CUDA engine context initialized on device GPU:
[TRT] -- layers 70
[TRT] -- maxBatchSize 1
[TRT] -- deviceMemory 13266944
[TRT] -- bindings 3
[TRT] binding 0
-- index 0
-- name 'input_0'
-- type FP32
-- in/out INPUT
-- # dims 4
-- dim #0 1
-- dim #1 3
-- dim #2 512
-- dim #3 512
[TRT] binding 1
-- index 1
-- name 'scores'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1
-- dim #1 8190
-- dim #2 15
[TRT] binding 2
-- index 2
-- name 'boxes'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1
-- dim #1 8190
-- dim #2 4
[TRT]
[TRT] binding to input 0 input_0 binding index: 0
[TRT] binding to input 0 input_0 dims (b=1 c=3 h=512 w=512) size=3145728
[TRT] binding to output 0 scores binding index: 1
[TRT] binding to output 0 scores dims (b=1 c=8190 h=15 w=1) size=491400
[TRT] binding to output 1 boxes binding index: 2
[TRT] binding to output 1 boxes dims (b=1 c=8190 h=4 w=1) size=131040
[TRT]
[TRT] device GPU, /home/visioline/install/jetson-inference/python/training/detection/ssd/models/jw3/ssd-mobilenet.onnx initialized.
[TRT] detectNet -- number of object classes: 15
[TRT] detectNet -- maximum bounding boxes: 8190
[TRT] loaded 15 class labels
[TRT] detectNet -- number of object classes: 15
[TRT] loaded 0 class colors
[TRT] didn't load expected number of class colors (0 of 15)
[TRT] filling in remaining 15 class colors with default colors
[gstreamer] initialized gstreamer, version 1.16.3.0
[gstreamer] gstCamera -- attempting to create device csi://0
[gstreamer] gstCamera pipeline string:
[gstreamer] nvarguscamerasrc sensor-id=0 saturation=2 ispdigitalgainrange='1 4' exposurecompensation=0 exposuretimerange='134000 158733000' ee-mode=2 ee-strength=1 gainrange='1 3' ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
GST_ARGUS: NvArgusCameraSrc: Setting ISP Digital Gain Range : '1 4'
GST_ARGUS: NvArgusCameraSrc: Setting Exposure Time Range : '134000 158733000'
GST_ARGUS: NvArgusCameraSrc: Setting Gain Range : '1 3'
[gstreamer] gstCamera successfully created device csi://0
[video] created gstCamera from csi://0
------------------------------------------------
gstCamera video options:
------------------------------------------------
-- URI: csi://0
- protocol: csi
- location: 0
-- deviceType: csi
-- ioType: input
-- codec: raw
-- width: 1280
-- height: 720
-- frameRate: 30.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: rotate-180
-- loop: 0
-- rtspLatency 2000
------------------------------------------------
[gstreamer] gstCamera -- attempting to create device csi://4
[gstreamer] gstCamera pipeline string:
[gstreamer] nvarguscamerasrc sensor-id=4 saturation=2 ispdigitalgainrange='1 4' exposurecompensation=0 exposuretimerange='134000 158733000' ee-mode=2 ee-strength=1 gainrange='1 3' ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
GST_ARGUS: NvArgusCameraSrc: Setting ISP Digital Gain Range : '1 4'
GST_ARGUS: NvArgusCameraSrc: Setting Exposure Time Range : '134000 158733000'
GST_ARGUS: NvArgusCameraSrc: Setting Gain Range : '1 3'
[gstreamer] gstCamera successfully created device csi://4
[video] created gstCamera from csi://4
------------------------------------------------
gstCamera video options:
------------------------------------------------
-- URI: csi://4
- protocol: csi
- location: 4
- port: 4
-- deviceType: csi
-- ioType: input
-- codec: raw
-- width: 1280
-- height: 720
-- frameRate: 30.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: rotate-180
-- loop: 0
-- rtspLatency 2000
------------------------------------------------
[gstreamer] gstCamera -- attempting to create device csi://2
[gstreamer] gstCamera pipeline string:
[gstreamer] nvarguscamerasrc sensor-id=2 saturation=2 ispdigitalgainrange='1 4' exposurecompensation=0 exposuretimerange='134000 158733000' ee-mode=2 ee-strength=1 gainrange='1 3' ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
GST_ARGUS: NvArgusCameraSrc: Setting ISP Digital Gain Range : '1 4'
GST_ARGUS: NvArgusCameraSrc: Setting Exposure Time Range : '134000 158733000'
GST_ARGUS: NvArgusCameraSrc: Setting Gain Range : '1 3'
[gstreamer] gstCamera successfully created device csi://2
[video] created gstCamera from csi://2
------------------------------------------------
gstCamera video options:
------------------------------------------------
-- URI: csi://2
- protocol: csi
- location: 2
- port: 2
-- deviceType: csi
-- ioType: input
-- codec: raw
-- width: 1280
-- height: 720
-- frameRate: 30.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: rotate-180
-- loop: 0
-- rtspLatency 2000
------------------------------------------------
[gstreamer] gstCamera -- attempting to create device csi://1
[gstreamer] gstCamera pipeline string:
[gstreamer] nvarguscamerasrc sensor-id=1 saturation=2 ispdigitalgainrange='1 4' exposurecompensation=0 exposuretimerange='134000 158733000' ee-mode=2 ee-strength=1 gainrange='1 3' ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
GST_ARGUS: NvArgusCameraSrc: Setting ISP Digital Gain Range : '1 4'
GST_ARGUS: NvArgusCameraSrc: Setting Exposure Time Range : '134000 158733000'
GST_ARGUS: NvArgusCameraSrc: Setting Gain Range : '1 3'
[gstreamer] gstCamera successfully created device csi://1
[video] created gstCamera from csi://1
------------------------------------------------
gstCamera video options:
------------------------------------------------
-- URI: csi://1
- protocol: csi
- location: 1
- port: 1
-- deviceType: csi
-- ioType: input
-- codec: raw
-- width: 1280
-- height: 720
-- frameRate: 30.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: rotate-180
-- loop: 0
-- rtspLatency 2000
------------------------------------------------
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvarguscamerasrc0
Tue Jan 24 13:20:15 2023 MV Web Server Starts - 10.199.1.178:8069
GST_ARGUS: Creating output stream
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message stream-start ==> pipeline0
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3840 x 2160 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 22.250000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1920 x 1080 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 22.250000; Exposure Range min 13000, max 683709000;
GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 1
Output Stream W = 1920 H = 1080
seconds to Run = 0
Frame Rate = 59.999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstBufferManager -- map buffer size was less than max size (1382400 vs 1382407)
[gstreamer] gstBufferManager recieve caps: video/x-raw, width=(int)1280, height=(int)720, framerate=(fraction)30/1, format=(string)NV12
[gstreamer] gstBufferManager -- recieved first frame, codec=raw format=nv12 width=1280 height=720 size=1382407
RingBuffer -- allocated 4 buffers (1382407 bytes each, 5529628 bytes total)
RingBuffer -- allocated 4 buffers (8 bytes each, 32 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer message warning ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
RingBuffer -- allocated 4 buffers (14745600 bytes each, 58982400 bytes total)
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter3
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv1
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter2
[gstreamer] gstreamer changed state from NULL to READY ==> nvarguscamerasrc1
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline1
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter3
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv1
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter2
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvarguscamerasrc1
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline1
[gstreamer] gstreamer message new-clock ==> pipeline1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter3
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter2
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvarguscamerasrc1
GST_ARGUS: Creating output stream
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message stream-start ==> pipeline1
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3840 x 2160 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 22.250000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1920 x 1080 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 22.250000; Exposure Range min 13000, max 683709000;
GST_ARGUS: Running with following settings:
Camera index = 4
Camera mode = 1
Output Stream W = 1920 H = 1080
seconds to Run = 0
Frame Rate = 59.999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstBufferManager -- map buffer size was less than max size (1382400 vs 1382407)
[gstreamer] gstBufferManager recieve caps: video/x-raw, width=(int)1280, height=(int)720, framerate=(fraction)30/1, format=(string)NV12
[gstreamer] gstBufferManager -- recieved first frame, codec=raw format=nv12 width=1280 height=720 size=1382407
RingBuffer -- allocated 4 buffers (1382407 bytes each, 5529628 bytes total)
RingBuffer -- allocated 4 buffers (8 bytes each, 32 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline1
[gstreamer] gstreamer message warning ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline1
RingBuffer -- allocated 4 buffers (14745600 bytes each, 58982400 bytes total)
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter5
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv2
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter4
[gstreamer] gstreamer changed state from NULL to READY ==> nvarguscamerasrc2
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline2
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter5
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv2
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter4
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvarguscamerasrc2
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline2
[gstreamer] gstreamer message new-clock ==> pipeline2
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter5
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv2
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter4
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvarguscamerasrc2
[gstreamer] gstreamer message stream-start ==> pipeline2
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3840 x 2160 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 22.250000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1920 x 1080 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 22.250000; Exposure Range min 13000, max 683709000;
GST_ARGUS: Running with following settings:
Camera index = 2
Camera mode = 1
Output Stream W = 1920 H = 1080
seconds to Run = 0
Frame Rate = 59.999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstBufferManager -- map buffer size was less than max size (1382400 vs 1382407)
[gstreamer] gstBufferManager recieve caps: video/x-raw, width=(int)1280, height=(int)720, framerate=(fraction)30/1, format=(string)NV12
[gstreamer] gstBufferManager -- recieved first frame, codec=raw format=nv12 width=1280 height=720 size=1382407
RingBuffer -- allocated 4 buffers (1382407 bytes each, 5529628 bytes total)
RingBuffer -- allocated 4 buffers (8 bytes each, 32 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline2
[gstreamer] gstreamer message warning ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline2
RingBuffer -- allocated 4 buffers (14745600 bytes each, 58982400 bytes total)
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter7
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv3
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter6
[gstreamer] gstreamer changed state from NULL to READY ==> nvarguscamerasrc3
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline3
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter7
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv3
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter6
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvarguscamerasrc3
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline3
[gstreamer] gstreamer message new-clock ==> pipeline3
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter7
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv3
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter6
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvarguscamerasrc3
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message stream-start ==> pipeline3
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3840 x 2160 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 22.250000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1920 x 1080 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 22.250000; Exposure Range min 13000, max 683709000;
GST_ARGUS: Running with following settings:
Camera index = 1
Camera mode = 1
Output Stream W = 1920 H = 1080
seconds to Run = 0
Frame Rate = 59.999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstBufferManager -- map buffer size was less than max size (1382400 vs 1382407)
[gstreamer] gstBufferManager recieve caps: video/x-raw, width=(int)1280, height=(int)720, framerate=(fraction)30/1, format=(string)NV12
[gstreamer] gstBufferManager -- recieved first frame, codec=raw format=nv12 width=1280 height=720 size=1382407
RingBuffer -- allocated 4 buffers (1382407 bytes each, 5529628 bytes total)
RingBuffer -- allocated 4 buffers (8 bytes each, 32 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline3
[gstreamer] gstreamer message warning ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline3
RingBuffer -- allocated 4 buffers (14745600 bytes each, 58982400 bytes total)