I am measuring/reading the power numbers dumped by INA through reading the file at /sys/devices/3160000.i2c/i2c-0/0-0040/iio_device/in_power1_input. My understanding is that this should give the power under the VDD_SYS_SOC power domain.
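For logging, that sysfs node can be polled from a short script. A minimal sketch, assuming the node reports integer milliwatts (consistent with the 843 mW reading below); averaging several samples smooths out load transients:

```python
import os
import time

INA_NODE = ("/sys/devices/3160000.i2c/i2c-0/0-0040/"
            "iio_device/in_power1_input")


def read_power_mw(path):
    """Read one power sample (integer milliwatts) from the sysfs node."""
    with open(path) as f:
        return int(f.read().strip())


def average_power_mw(path, samples=10, interval_s=0.5):
    """Average several samples to smooth out transients in the load."""
    readings = []
    for _ in range(samples):
        readings.append(read_power_mw(path))
        time.sleep(interval_s)
    return sum(readings) / len(readings)


if os.path.exists(INA_NODE):  # only meaningful on the Jetson itself
    print("VDD_SYS_SOC: %.1f mW" % average_power_mw(INA_NODE))
```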
I am interested in the power consumption when the camera is capturing videos at different resolutions and frame rates. I use the following command:
I verified that this is indeed capturing video at 720p @ 30 fps, and I got a stable power consumption of 843 mW.
My question is: why does capturing at a lower frame rate lead to higher power consumption? Is the ISP doing some scaling, such that when I specify 720p @ 30 fps it actually captures at a higher resolution and/or frame rate and then downscales or downsamples?
Hi,
I think you should do a HW measurement (current and voltage). The result from /sys/devices/3160000.i2c/i2c-0/0-0040/iio_device/in_power1_input does not look right.
How did you know the numbers don't look right? Did you find the power comparison suspicious, or the absolute numbers?
How would we perform a hardware measurement? As far as I know, the INA pins are not exposed for easy instrumentation, and the only feasible way would be to read through I2C. Is this what you recommend? If so, my understanding is that reading from that sysfs file is essentially the same as reading over I2C with a microcontroller. Is that understanding correct?
Do you know why the power numbers from that sysfs file are incorrect? How should we interpret the numbers?
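For the microcontroller route: the sysfs value and a raw I2C read should agree, since both come from the same INA3221 registers. A hedged sketch of the raw-register conversion, using LSB weights from the INA3221 datasheet (shunt LSB 40 µV, bus LSB 8 mV, values left-justified by 3 bits); the shunt resistor value is a placeholder, so check your board's schematic:

```python
SHUNT_LSB_UV = 40      # shunt-voltage LSB, microvolts (INA3221 datasheet)
BUS_LSB_MV = 8         # bus-voltage LSB, millivolts
R_SHUNT_OHMS = 0.005   # ASSUMPTION: use the resistor fitted on your board


def raw_to_shunt_uv(raw):
    """Convert a raw 16-bit shunt-voltage register to microvolts."""
    if raw & 0x8000:          # sign-extend two's complement
        raw -= 1 << 16
    return (raw >> 3) * SHUNT_LSB_UV


def raw_to_bus_mv(raw):
    """Convert a raw 16-bit bus-voltage register to millivolts."""
    return (raw >> 3) * BUS_LSB_MV


def power_mw(shunt_raw, bus_raw, r_shunt=R_SHUNT_OHMS):
    """P = V_bus * I, with I = V_shunt / R_shunt, in milliwatts."""
    shunt_mv = raw_to_shunt_uv(shunt_raw) / 1000.0
    current_ma = shunt_mv / r_shunt      # I(mA) = V(mV) / R(ohm)
    bus_v = raw_to_bus_mv(bus_raw) / 1000.0
    return bus_v * current_ma
```

For example, a shunt register of 1000 (5 mV across a 5 mΩ shunt, i.e. 1 A) with a bus register of 5000 (5 V) gives 5000 mW.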
Hi,
From a SW perspective, we can check tegrastats:
[720p60]
RAM 1109/7851MB (lfb 1549x4MB) cpu [22%@345,off,off,40%@345,19%@347,17%@347] EMC 9%@665 APE 150 GR3D 0%@114
RAM 1110/7851MB (lfb 1549x4MB) cpu [18%@345,off,off,38%@345,18%@345,22%@345] EMC 9%@665 APE 150 GR3D 0%@114
RAM 1110/7851MB (lfb 1549x4MB) cpu [21%@345,off,off,21%@345,36%@345,15%@345] EMC 9%@665 APE 150 GR3D 0%@114
[720p30]
RAM 1232/7851MB (lfb 1519x4MB) cpu [11%@345,off,off,10%@345,18%@345,14%@345] EMC 7%@665 APE 150 GR3D 0%@114
RAM 1232/7851MB (lfb 1519x4MB) cpu [10%@345,off,off,14%@345,15%@345,13%@345] EMC 7%@665 APE 150 GR3D 0%@114
RAM 1232/7851MB (lfb 1519x4MB) cpu [20%@345,off,off,16%@345,9%@345,4%@345] EMC 7%@665 APE 150 GR3D 0%@114
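To compare the two settings quantitatively, the tegrastats lines above can be parsed for average CPU load (active cores only) and EMC (memory controller) utilization. A sketch for the exact line format shown here; other tegrastats versions print slightly different fields:

```python
import re


def parse_tegrastats(line):
    """Return (average CPU % over active cores, EMC %) from one line."""
    cores = re.search(r"cpu \[([^\]]+)\]", line).group(1).split(",")
    loads = [int(c.split("%")[0]) for c in cores if c != "off"]
    emc = int(re.search(r"EMC (\d+)%", line).group(1))
    return sum(loads) / len(loads), emc


line_720p60 = ("RAM 1109/7851MB (lfb 1549x4MB) cpu "
               "[22%@345,off,off,40%@345,19%@347,17%@347] "
               "EMC 9%@665 APE 150 GR3D 0%@114")
print(parse_tegrastats(line_720p60))  # -> (24.5, 9)
```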
My question is whether the ISP is doing some scaling/cropping for a resolution/fps that's not natively supported. Is that essentially what's happening here?
Hi,
What power number do you see when running
gst-launch-1.0 nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)31/1' ! fakesink silent=false
The power reading from VDD_SYS_SOC is 690 mW, slightly higher than idle (about 613 mW).
I am curious what happens here, since the framerate is beyond the fpsRange. Is the camera still capturing at 30 fps? If so, why is the power different from when we set framerate to 30?
Hi,
The power difference is from different sensor mode.
nvidia@tegra-ubuntu:~$ gst-launch-1.0 nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)30/1' ! nvoverlaysink
NvCameraSrc: Trying To Set Default Camera Resolution. Selected sensorModeIndex = 1 WxH = 2592x1458 FrameRate = 30.000000 …
In this case, it captures in 2592x1458 and downscales to 1280x720.
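One selection rule that is consistent with both choices seen in this thread (framerate=30 picking 2592x1458, framerate=31 picking the 1280x720 @ 120 fps mode): prefer a mode whose frame rate matches the request exactly, otherwise any mode fast enough, and among those take the smallest resolution that still covers the request. This is only an inference from the logs, not NVIDIA's actual selection code:

```python
# (width, height, max frame rate) from the "Available Sensor modes" list
MODES = [(2592, 1944, 30.0), (2592, 1458, 30.0), (1280, 720, 120.0)]


def select_mode(w, h, fps, modes=MODES):
    """Guess at nvcamerasrc mode selection; inferred, not official."""
    fits = [m for m in modes if m[0] >= w and m[1] >= h]
    exact = [m for m in fits if m[2] == fps]          # exact fps match first
    candidates = exact or [m for m in fits if m[2] >= fps]
    return min(candidates, key=lambda m: m[0] * m[1])  # smallest that covers
```

With this rule, a 1280x720 request at 30 fps selects 2592x1458, while the same request at 31 fps selects the native 720p mode, matching the two logs in this thread.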
Available Sensor modes :
2592 x 1944 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
2592 x 1458 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
1280 x 720 FR=120.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
NvCameraSrc: Trying To Set Default Camera Resolution. Selected 1280x720 FrameRate = 31.000000 …
I didn't see the "Selected sensorModeIndex = 2 WxH = 1280x720" message in your output.
Thanks a lot! Could you dump the sensor-mode info for 720p and 1080p at 30, 60, and 120 fps and post it here if possible? My setup just doesn't print this info. I'd like to know which natively supported mode each of those configurations is actually captured in.
Also, what's the difference between fpsRange and framerate? It seems that if framerate is not within fpsRange, the camera is automatically set to 120 fps?
I upgraded to R28.1 and got results similar to yours. However, I am confused by the following:
gst-launch-1.0 nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)30/1' ! nvoverlaysink
NvCameraSrc: Trying To Set Default Camera Resolution. Selected sensorModeIndex = 1 WxH = 2592x1458 FrameRate = 30.000000 ...
This is consistent with your pseudo-code. But I was wondering: in this case, would it be better to use the native 1280x720 @ 120 fps mode and downsample the frame rate, rather than capturing at a much higher resolution and downscaling?
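One first-order way to compare the two paths is raw sensor pixel throughput (this ignores ISP scaling cost and per-frame overheads, so it is only a rough comparison):

```python
# Selected mode: full-width capture downscaled to 720p
full_res = 2592 * 1458 * 30     # pixels per second
# Alternative: native 720p mode at its listed 120 fps
native_720 = 1280 * 720 * 120   # pixels per second

print(full_res, native_720)  # -> 113374080 110592000
```

The two modes move a comparable number of pixels per second (within about 3%), so any savings from the native 720p mode would come mainly from skipping the spatial downscale rather than from reduced sensor traffic.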
Also, could you tell us the frequency of the ISP and whether it has DVFS capability?