My program generates an image with the OpenGL API, reads it back with `glReadPixels()`, and then sends it to the NVIDIA CUDA encoder to produce an H.264 stream. Now I want to optimize the program using the `NVVE_DEVICE_MEMORY_INPUT` flag. My thinking is that if I can hand the image to the encoder while it is still in device memory, the host/device I/O bandwidth will be saved, so performance should improve. Is my reasoning correct, and is this optimization worth doing?
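To make the idea concrete, here is a minimal sketch of the device-side path I have in mind. It assumes an RGBA framebuffer, uses `glReadPixels()` into a pixel-pack PBO so the copy stays on the GPU, and maps that PBO through CUDA-GL interop to get a device pointer; the buffer names, sizes, and the final hand-off to the encoder (configured with `NVVE_DEVICE_MEMORY_INPUT`) are my assumptions and the encoder calls themselves are omitted:

```cpp
#include <GL/glew.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

GLuint pbo = 0;                          // pixel-pack buffer object
cudaGraphicsResource* pboRes = nullptr;  // CUDA handle for the PBO

void initInterop(int width, int height)
{
    // Create a pack PBO large enough for one RGBA frame.
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

    // Register the PBO once so CUDA can map it every frame.
    cudaGraphicsGLRegisterBuffer(&pboRes, pbo, cudaGraphicsRegisterFlagsReadOnly);
}

void encodeFrame(int width, int height)
{
    // With a pack buffer bound, glReadPixels copies GPU-to-GPU;
    // the frame never travels to host memory.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

    // Map the PBO and obtain a CUDA device pointer to the same memory.
    unsigned char* devPtr = nullptr;
    size_t size = 0;
    cudaGraphicsMapResources(1, &pboRes, 0);
    cudaGraphicsResourceGetMappedPointer((void**)&devPtr, &size, pboRes);

    // ... convert RGBA to the encoder's expected YUV layout with a CUDA
    // kernel, then pass the device pointer to the encoder configured with
    // NVVE_DEVICE_MEMORY_INPUT (encoder setup and calls not shown).

    cudaGraphicsUnmapResources(1, &pboRes, 0);
}
```

The point of the sketch is only to show that the readback and the PCIe transfer of the raw frame can both be avoided; whether the end-to-end gain is significant still depends on your frame size, frame rate, and how much of the frame time the `glReadPixels()` host readback currently costs.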