resource utilization

I use tegrastats to monitor TX1 resource utilization in a few different situations.

When I use the command "gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,format=I420,width=1280,height=1024,framerate=60/1' ! nvvidconv ! 'video/x-raw(memory:NVMM),width=1280,height=1024,format=I420' ! nvoverlaysink sync=false" to capture from the YUV422 camera and display on screen, tegrastats shows "cpu [92%,0%,0%,1%]@1734 EMC 6%@1600 AVP 16%@12 NVDEC 268 MSENC 268 GR3D 0%@76 EDP limit 1734".

When I use the application tegra_multimedia_api/samples/12_camera_v4l2_cuda, tegrastats shows "cpu [42%,11%,8%,6%]@518 EMC 21%@408 AVP 84%@24 NVDEC 268 MSENC 268 GR3D 36%@76 EDP limit 1734".

When the TX1 is not running any user application, tegrastats shows "cpu [0%,4%,0%,1%]@102 EMC 6%@408 AVP 66%@12 NVDEC 268 MSENC 268 GR3D 0%@76 EDP limit 1734".

1) Why does the running application influence the CPU frequency?
2) Why is the AVP utilization lower while the gst-launch pipeline is running than when the TX1 runs no user application?
3) How can I raise the CPU and EMC operating frequencies?
4) Why is the GPU utilization zero while the gst-launch pipeline is running?
5) Does converting YUV422 to RGB really take about 40% of the GPU? If so, would capturing from an RGB-format camera for display be better than capturing from a YUV camera?

Hi LeoNardo,

1)
In the gstreamer command there is a CPU buffer copy into NvBuffer between v4l2src -> nvvidconv.
12_camera_v4l2_cuda uses NvBuffer directly.
2)
AVP does not take a role after booting, so you may ignore the difference.
3)
By default, both run with DFS (dynamic frequency scaling). You can run at maximum performance via 'sudo ./jetson_clocks.sh'.
4)
The gstreamer command does no CUDA processing, but 12_camera_v4l2_cuda does.
5)
We only support I420 for camera input. You can convert it to RGBA via the video converter or CUDA.

Hi,

Thanks for your response.
What is the best way to display video output without using GPU resources? We cannot see the source code of gstreamer. And what is the best way to display video output using the least CPU but the most GPU? Could you provide sample code for both cases?

Please share details about your use case. Is your sensor USB, YUV, or Bayer? Video preview? Buffer post-processing via CUDA? Video encoding? TensorRT?

Hi DaneLLL,

Our sensor is YUV422 (16-bit bus) and the data channel is MIPI CSI. Currently, our application only needs to capture YUV video data and convert it to RGB for display. No post-processing via CUDA, no video encoding, and no TensorRT.

Hi LeoNardo, please refer to tegra_multimedia_api/samples/12_camera_v4l2_cuda.