Hi, so I have been messing around with my code for a couple of days and here are my findings so far.
@DaneLLL Here are the tegrastats during the app running:
RAM 1661/7765MB (lfb 1165x4MB) SWAP 0/3883MB (cached 0MB) CPU [59%@1420,55%@1420,30%@1420,26%@1420,23%@1420,25%@1420] EMC_FREQ 12%@1600 GR3D_FREQ 33%@204 APE 150 MTS fg 0% bg 3% AO@42.5C GPU@42.5C PMIC@50C AUX@43C CPU@44.5C thermal@43.3C VDD_IN 5112/4423 VDD_CPU_GPU_CV 1554/1131 VDD_SOC 1265/1160
RAM 1662/7765MB (lfb 1165x4MB) SWAP 0/3883MB (cached 0MB) CPU [43%@1420,68%@1420,29%@1420,20%@1420,27%@1420,27%@1420] EMC_FREQ 12%@1600 GR3D_FREQ 33%@204 APE 150 MTS fg 0% bg 4% AO@42.5C GPU@42.5C PMIC@50C AUX@42.5C CPU@44.5C thermal@43.1C VDD_IN 5071/4438 VDD_CPU_GPU_CV 1554/1141 VDD_SOC 1265/1163
RAM 1661/7765MB (lfb 1165x4MB) SWAP 0/3883MB (cached 0MB) CPU [49%@1420,50%@1420,34%@1420,37%@1420,29%@1420,22%@1420] EMC_FREQ 12%@1600 GR3D_FREQ 33%@204 APE 150 MTS fg 0% bg 4% AO@42.5C GPU@42C PMIC@50C AUX@42.5C CPU@44C thermal@42.8C VDD_IN 5071/4452 VDD_CPU_GPU_CV 1554/1150 VDD_SOC 1265/1165
RAM 1661/7765MB (lfb 1165x4MB) SWAP 0/3883MB (cached 0MB) CPU [27%@1420,27%@1420,55%@1420,24%@1420,32%@1420,49%@1420] EMC_FREQ 12%@1600 GR3D_FREQ 24%@204 APE 150 MTS fg 0% bg 7% AO@42.5C GPU@42C PMIC@50C AUX@42.5C CPU@44C thermal@42.8C VDD_IN 5071/4465 VDD_CPU_GPU_CV 1513/1158 VDD_SOC 1265/1167
RAM 1662/7765MB (lfb 1165x4MB) SWAP 0/3883MB (cached 0MB) CPU [35%@1420,33%@1420,28%@1420,32%@1420,47%@1420,49%@1420] EMC_FREQ 12%@1600 GR3D_FREQ 31%@204 APE 150 MTS fg 0% bg 6% AO@42C GPU@42C PMIC@50C AUX@42.5C CPU@44C thermal@42.95C VDD_IN 5152/4480 VDD_CPU_GPU_CV 1594/1167 VDD_SOC 1265/1169
RAM 1661/7765MB (lfb 1165x4MB) SWAP 0/3883MB (cached 0MB) CPU [29%@1420,33%@1420,25%@1420,27%@1420,79%@1420,47%@1420] EMC_FREQ 13%@1600 GR3D_FREQ 40%@204 APE 150 MTS fg 0% bg 7% AO@42C GPU@42C PMIC@50C AUX@42.5C CPU@44C thermal@42.8C VDD_IN 5275/4496 VDD_CPU_GPU_CV 1676/1178 VDD_SOC 1306/1172
So it definitely looks like there is some multi-threading going on.
@Dalus Using a GStreamer pipeline actually helps (at least to some extent) to get rid of the occasional delay. This is the VideoCapture I use at this point:
cv::VideoCapture cap("v4l2src device=/dev/video0 ! video/x-raw, width=640, height=512 ! videoconvert ! video/x-raw,format=BGR ! appsink max-buffers=1 drop=true", cv::CAP_GSTREAMER);
where the important parts are max-buffers=1 and drop=true. I googled that out, but honestly I must say I don't understand why those are listed after the sink and not before it in the pipe… Can anyone explain?
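For completeness, the surrounding code is nothing special; the one thing that bit me was that I had to pass cv::CAP_GSTREAMER explicitly, otherwise OpenCV tried to guess the backend from the string. Roughly this:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // max-buffers=1 and drop=true are properties of the appsink element,
    // so the newest frame replaces any queued one instead of piling up.
    cv::VideoCapture cap(
        "v4l2src device=/dev/video0 ! video/x-raw, width=640, height=512 ! "
        "videoconvert ! video/x-raw,format=BGR ! "
        "appsink max-buffers=1 drop=true",
        cv::CAP_GSTREAMER);
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    while (cap.read(frame)) {
        // ... per-frame processing goes here ...
    }
    return 0;
}
```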
I have also begun studying the VPI interface and I have two questions so far:
- All of the examples seem to be doing a one-time conversion, i.e. load an image, convert, save, and that's it. In my case this is a continuous video stream, so I am not sure which commands I need to call just once and which need to be called repeatedly (i.e. each frame). From the terminology — I mean, you NVIDIA guys call it a stream — it looks like that should be set up just once. So, are there any examples using a continuous video stream?
- During that investigation I realized I could probably also use the GStreamer pipeline to upsample my video. That may as well be a good idea because I'd expect it to be HW accelerated. I think I should be using nvvidconv. But I was not able to get the pipeline right; it always comes up with some error. I was trying to figure it out in the console, and this is the line I felt most confident about:
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,width=640,height=512,format=I420 ! nvvidconv ! 'video/x-raw(memory:NVMM), width=(int)1280, height=(int)1024, format=(string)I420' ! glimagesink
But no, it doesn’t work. It says:
WARNING: erroneous pipeline: could not link nvvconv0 to glimagesinkbin0, glimagesinkbin0 can't handle caps video/x-raw(memory:NVMM), width=(int)1280, height=(int)1024, format=(string)I420
Any idea?
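On the first question, after more reading my current guess is that the stream and the images get created once, and only the submit/sync pair runs per frame. This is an untested sketch — the interop function names are from the VPI 1.x headers and may differ in other versions, and I'm not sure the VIC backend even accepts BGR8 — so please correct me if the one-time vs per-frame split is wrong:

```cpp
#include <opencv2/opencv.hpp>
#include <vpi/OpenCVInterop.hpp>
#include <vpi/Image.h>
#include <vpi/Stream.h>
#include <vpi/algo/Rescale.h>

int main() {
    cv::VideoCapture cap(
        "v4l2src device=/dev/video0 ! video/x-raw, width=640, height=512 ! "
        "videoconvert ! video/x-raw,format=BGR ! "
        "appsink max-buffers=1 drop=true",
        cv::CAP_GSTREAMER);

    // --- once, before the loop: the stream is the work queue ---
    VPIStream stream = nullptr;
    vpiStreamCreate(0, &stream);

    VPIImage in = nullptr, out = nullptr;
    vpiImageCreate(1280, 1024, VPI_IMAGE_FORMAT_BGR8, 0, &out);

    cv::Mat frame;
    while (cap.read(frame)) {
        // --- every frame: wrap the cv::Mat (zero-copy), then submit + sync ---
        if (!in)
            vpiImageCreateOpenCVMatWrapper(frame, 0, &in);
        else
            vpiImageSetWrappedOpenCVMat(in, frame);  // re-point the wrapper

        // VIC may reject BGR8; VPI_BACKEND_CUDA would be the fallback
        vpiSubmitRescale(stream, VPI_BACKEND_VIC, in, out,
                         VPI_INTERP_LINEAR, VPI_BORDER_CLAMP, 0);
        vpiStreamSync(stream);  // wait before touching 'out'
    }

    // --- once, after the loop ---
    vpiImageDestroy(in);
    vpiImageDestroy(out);
    vpiStreamDestroy(stream);
    return 0;
}
```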
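One workaround I want to try next, since the error complains about memory:NVMM, is to add a second nvvidconv to copy the frames back to system memory before the sink (untested, so no guarantees):

```shell
gst-launch-1.0 v4l2src device=/dev/video0 \
    ! video/x-raw,width=640,height=512,format=I420 \
    ! nvvidconv \
    ! 'video/x-raw(memory:NVMM),width=1280,height=1024,format=I420' \
    ! nvvidconv \
    ! 'video/x-raw,format=I420' \
    ! glimagesink
```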
Thanks guys!