My friend's Linux computer:
GPU: GTX 970 4 GB
Result of running the same program: 50 FPS

My computer:
CPU: i7-9750H 2.59 GHz
GPU: RTX 2060 6 GB, driver 441.66
Result of running the same program: 1 FPS
For PTX: I did not use the sutil::getPTX method from the SDK. Instead, I compile the CUDA source to PTX with nvrtcCompileProgram(), passing "compute_75" and the other options. I have tested this path in the SDK sample programs without any problems.
Next, I used the performance profiler in VS2017 to compare the optixWhitted sample from the SDK against my flawed program.
For the optixWhitted program: GPU usage slowly climbs to about 90% and stabilizes, while CPU usage is relatively low, which is ideal.
For my own program: CPU usage is also relatively low, but GPU usage stays below about 10% and barely fluctuates.
I don't know what keeps my GPU almost idle, and I am sure I am not using OptiX Prime.
My program uses an OpenCV window to display each rendered frame. With proper GPU acceleration, the frame rate should be much higher than what I am seeing.
If my description of the problem is missing any details, please ask and I will describe it more fully.
I don't know what to make of this strange problem, and I am eager for your guidance. Thank you for taking time out of your busy schedule!