Hi, I’d like to record RAM usage and GPU memory usage on a Linux system while running inference on a plan file. Is there a solution for this?
I’ve used VmRSS in /proc/self/status and the cudaMemGetInfo API to calculate this.
My process is as follows:
- On entering main, record the RAM usage of the current process and the GPU memory currently used system-wide.
- Deserialize a plan file and run an inference.
- Do the same measurement as in step 1.
- Compute the GPU memory diff between steps 3 and 1; I assume this diff also includes the RAM usage.
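For reference, the RAM side of steps 1 and 3 can be sketched like this (a minimal example; the helper names are just illustrative, and note that /proc/self/status reports VmRSS in kB):

```python
def parse_vmrss_kib(status_text):
    """Extract the VmRSS value (in kB) from /proc/<pid>/status content."""
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            # line looks like: "VmRSS:    123456 kB"
            return int(line.split()[1])
    return None  # field absent (e.g. kernel thread or fully swapped-out process)

def read_vmrss_kib():
    """Resident set size of the current process in kB, read from procfs."""
    with open("/proc/self/status") as f:
        return parse_vmrss_kib(f.read())
```

The GPU side is done analogously: call `cudaMemGetInfo(&free, &total)` at the same two points and diff `total - free`.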
So the RAM usage of the current process comes from VmRSS,
and the GPU memory usage is the diff from step 4 minus the RAM usage. But the final result I get is a negative GPU memory usage.
Am I doing this right? If not, please help me figure out a correct solution.