Hi,
test a: ./video_convert /data/daibo/river_5760x3000.yuyv 5760 3000 YUYV /data/daibo/river_rgb 5760 3000 ARGB -p -fm 1 -s
07-11-2024 08:51:11 RAM 18405/30593MB (lfb 148x4MB) SWAP 609/15296MB (cached 3MB) CPU [1%@2201 ,1%@2201 ,7%@2201 ,1%@2201 ,1%@2201 ,1%@2201 ,1%@2201 ,0%@2201 ,3%@2201 ,0%@2201 ,3%@2201 ,6%@2201 ] EMC_FREQ 3%@3199 GR3D_FREQ 0%@[1298,1298] VIC_FREQ 99%@729 APE 174 CV0@-256C CPU@53.656C Tboard@41C SOC2@50.406C Tdiode@42C SOC0@50.437C CV1@-256C GPU@48.5C tj@53.656C SOC1@50.687C CV2@-256C VDD_GPU_SOC 5212mW/5039mW VDD_CPU_CV 1604mW/1971mW VIN_SYS_5V0 5626mW/5373mW NC 0mW/0mW VDDQ_VDD2_1V8AO 1404mW/1208mW NC 0mW/0mW
test b: ./video_convert /data/daibo/river_5760x3000.yuyv 5760 3000 YUYV /data/daibo/river_rgb 5760 3000 YUV420 -p -fm 1 -s
07-11-2024 08:54:09 RAM 18408/30593MB (lfb 148x4MB) SWAP 609/15296MB (cached 3MB) CPU [3%@2201 ,2%@2201 ,3%@2201 ,2%@2201 ,1%@2201 ,0%@2201 ,6%@2201 ,1%@2201 ,6%@2201 ,0%@2201 ,0%@2393 ,4%@2201 ] EMC_FREQ 4%@3199 GR3D_FREQ 0%@[1292,1292] VIC_FREQ 99%@729 APE 174 CV0@-256C CPU@53.625C Tboard@41C SOC2@50.218C Tdiode@41.75C SOC0@50.187C CV1@-256C GPU@48.281C tj@53.625C SOC1@50.468C CV2@-256C VDD_GPU_SOC 5214mW/4995mW VDD_CPU_CV 2006mW/1969mW VIN_SYS_5V0 6028mW/5562mW NC 0mW/0mW VDDQ_VDD2_1V8AO 1705mW/1340mW NC 0mW/0mW
I tested the two cases with the VIC shown above:
test a: input YUYV, output ARGB
test b: input YUYV, output YUV420
Why is the EMC usage for test a lower than for test b?
I thought the EMC usage should be higher in test a than in test b, since the ARGB format is larger than YUV420.
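For reference, here is the rough per-frame arithmetic behind that expectation (my own back-of-envelope, assuming a naive read-source-once/write-destination-once model):

// Naive traffic estimate per 5760x3000 frame (my assumption, not measured):
//   YUYV = 2 bytes/pixel, ARGB = 4 bytes/pixel, YUV420 = 1.5 bytes/pixel
#include <cstdio>
int main()
{
    const double px = 5760.0 * 3000.0, mib = 1024.0 * 1024.0;
    printf("test a: %.0f MiB/frame\n", px * (2.0 + 4.0) / mib);  // YUYV in + ARGB out
    printf("test b: %.0f MiB/frame\n", px * (2.0 + 1.5) / mib);  // YUYV in + YUV420 out
    return 0;
}

By this naive model, test a should move roughly 99 MiB per frame versus about 58 MiB for test b, which is why the tegrastats EMC numbers surprised me.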
Hi,
Please run the script and try again:
VPI - Vision Programming Interface: Performance Benchmark
This sets the VIC to its maximum clock and should give maximum throughput.
Hi,
I did set maximum clocks for the CPU, GPU, EMC and VIC.
I ran ./clocks.sh --max and retested, but still got the same strange result:
for test a:
07-12-2024 02:27:57 RAM 18773/30593MB (lfb 107x4MB) SWAP 609/15296MB (cached 3MB) CPU [3%@2201 ,1%@2201 ,1%@2201 ,7%@2201 ,1%@2201 ,2%@2201 ,0%@2201 ,0%@2201 ,0%@2201 ,0%@2201 ,5%@2201 ,1%@2201 ] EMC_FREQ 3%@3199 GR3D_FREQ 0%@[1292,1292] VIC_FREQ 99%@729 APE 174 CV0@-256C CPU@51.468C Tboard@40C SOC2@48.562C Tdiode@40C SOC0@48.437C CV1@-256C GPU@46.812C tj@51.468C SOC1@48.906C CV2@-256C VDD_GPU_SOC 5212mW/5212mW VDD_CPU_CV 2005mW/1932mW VIN_SYS_5V0 5635mW/5660mW NC 0mW/0mW VDDQ_VDD2_1V8AO 1406mW/1449mW NC 0mW/0mW
for test b:
07-12-2024 02:28:59 RAM 18773/30593MB (lfb 106x4MB) SWAP 609/15296MB (cached 3MB) CPU [2%@2201 ,1%@2201 ,1%@2201 ,7%@2067 ,0%@2201 ,0%@2201 ,1%@2201 ,6%@2201 ,1%@2201 ,1%@2201 ,1%@2201 ,2%@2201 ] EMC_FREQ 4%@3199 GR3D_FREQ 0%@[1292,1292] VIC_FREQ 99%@729 APE 174 CV0@-256C CPU@50.468C Tboard@39C SOC2@47.218C Tdiode@38.75C SOC0@47.343C CV1@-256C GPU@45.375C tj@50.156C SOC1@47.531C CV2@-256C VDD_GPU_SOC 5212mW/5092mW VDD_CPU_CV 2005mW/1884mW VIN_SYS_5V0 6038mW/5744mW NC 0mW/0mW VDDQ_VDD2_1V8AO 1708mW/1471mW NC 0mW/0mW
What should I do next?
Hi,
The converted frame data is in an NvBufSurface. It is mapped to the CPU and saved to a file, so the load may lie in file I/O rather than memory I/O.
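For reference, the map-and-save path I am referring to typically looks like the sketch below (a minimal sketch using the standard NvBufSurface APIs, not the exact sample code; error handling simplified, and the fwrite calls are where the file I/O cost comes from):

#include <cstdio>
#include "nvbufsurface.h"

// Sketch: map the converted NvBufSurface to the CPU and dump it to a file.
static int dump_surface(int dmabuf_fd, const char *path)
{
    NvBufSurface *surf = nullptr;
    if (NvBufSurfaceFromFd(dmabuf_fd, (void **)&surf) != 0)
        return -1;

    FILE *fp = fopen(path, "wb");
    if (!fp)
        return -1;

    NvBufSurfaceParams &p = surf->surfaceList[0];
    for (unsigned plane = 0; plane < p.planeParams.num_planes; ++plane)
    {
        NvBufSurfaceMap(surf, 0, plane, NVBUF_MAP_READ);   // CPU-visible mapping
        NvBufSurfaceSyncForCpu(surf, 0, plane);            // make device writes visible to the CPU
        char *data = (char *)p.mappedAddr.addr[plane];
        unsigned row_bytes = p.planeParams.width[plane] * p.planeParams.bytesPerPix[plane];
        for (unsigned row = 0; row < p.planeParams.height[plane]; ++row)
            fwrite(data + row * p.planeParams.pitch[plane], 1, row_bytes, fp);  // file I/O per row
        NvBufSurfaceUnMap(surf, 0, plane);
    }
    fclose(fp);
    return 0;
}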
Hi,
I checked the code; there is no file I/O during the 3000 PERF_LOOP iterations:
for (int i = 0; i < count; ++i)
{
ret = NvBufSurf::NvTransform(&tctx->transform_params, tctx->in_dmabuf_fd, tctx->out_dmabuf_fd);
if (ret)
{
cerr << "Error in transformation." << endl;
goto out;
}
}
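If it helps, the loop can be timed directly to get the per-frame cost. A sketch (my own addition, not in the current code), assuming the same tctx, count and ret as above:

#include <chrono>
#include <iostream>

// Wrap the PERF_LOOP with a wall-clock timer to get per-frame latency and throughput.
auto t0 = std::chrono::steady_clock::now();
for (int i = 0; i < count; ++i)
{
    ret = NvBufSurf::NvTransform(&tctx->transform_params, tctx->in_dmabuf_fd, tctx->out_dmabuf_fd);
    if (ret)
        break;
}
auto t1 = std::chrono::steady_clock::now();
double secs = std::chrono::duration<double>(t1 - t0).count();
std::cout << "frames/s: " << count / secs
          << ", ms/frame: " << 1000.0 * secs / count << std::endl;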
Hi,
The function works like this:
1. Schedule the task to the hardware converter (CPU)
2. Wait for the hardware to finish the task (CPU)
3. The hardware engine does the conversion and notifies that the task is done
4. Exit the NvBufSurf::NvTransform function call
Since the function is called in a loop and runs back to back, it may show a certain amount of CPU usage.
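If you want to see how much of that is CPU work versus waiting, you could compare CPU time with wall-clock time around a single call. A sketch (illustrative only, not from the sample):

#include <chrono>
#include <ctime>
#include <iostream>

// Most of the wall time should be spent waiting for the VIC, with a smaller
// CPU-time share for submitting the task and handling its completion.
std::clock_t c0 = std::clock();
auto w0 = std::chrono::steady_clock::now();

ret = NvBufSurf::NvTransform(&tctx->transform_params, tctx->in_dmabuf_fd, tctx->out_dmabuf_fd);

double cpu_ms  = 1000.0 * (std::clock() - c0) / CLOCKS_PER_SEC;
double wall_ms = std::chrono::duration<double, std::milli>(std::chrono::steady_clock::now() - w0).count();
std::cout << "cpu: " << cpu_ms << " ms, wall: " << wall_ms << " ms" << std::endl;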
Hi,
I suspect that the VIC does scaling and rotation only in the ARGB color space, so that in test a the pipeline is YUYV → ARGB → rotate & scale → ARGB, while in test b it is YUYV → ARGB → rotate & scale → convert to YUV420, rather than YUYV → YUV420 → rotate & scale → YUV420.
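A rough traffic estimate under that hypothesis would at least match the direction of the tegrastats numbers, assuming the intermediate ARGB image goes through DRAM (back-of-envelope only; I cannot confirm this):

#include <cstdio>

// Back-of-envelope DRAM traffic per 5760x3000 frame under the hypothesized pipelines
// (unverified assumption: the intermediate ARGB pass is written to and read back from memory).
int main()
{
    const double px = 5760.0 * 3000.0, mib = 1024.0 * 1024.0;
    double a = px * (2.0 + 4.0) / mib;              // test a: YUYV read + ARGB write
    double b = px * (2.0 + 4.0 + 4.0 + 1.5) / mib;  // test b: YUYV read + ARGB write + ARGB read + YUV420 write
    printf("test a ~%.0f MiB/frame, test b ~%.0f MiB/frame\n", a, b);
    return 0;
}

That would make test b roughly twice the memory traffic of test a, consistent with its higher EMC reading.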
Could you help to confirm this?
Hi,
The design of the hardware engines is proprietary. Your idea may be correct.