Accelerator Kernel Timing info


I’m trying to analyze the performance of my code. I found the -ta=nvidia,time option, which prints accelerator kernel timing information.
Unfortunately, I’m working on a closed program; I only have the source code of a single procedure. Standard I/O is controlled by other procedures, so my procedure cannot print anything to standard output. I can only print to files. So how can I capture the timing information?


Hi Jacek,

Can you run your program using the ‘pgcollect’ utility? (i.e. ‘pgcollect my.exe -args’).

This will produce a ‘pgprof.out’ profile file that can be read by PGPROF and will include accelerator performance.
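For reference, the basic workflow might look something like the following. This is a sketch, not a verified recipe: ‘my.exe’ and ‘-args’ are placeholders for your program and its arguments, and I’m assuming pgprof accepts the profile file as a plain argument.

```shell
# Run the program under pgcollect; it writes a pgprof.out profile
# into the current directory (my.exe and -args are placeholders)
pgcollect my.exe -args

# Open the resulting profile in the PGPROF GUI
pgprof pgprof.out
```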

  • Mat


I’m reviving the topic. I’ve tried to run the program using pgcollect. It generated a few files, one of which is pgprof.out.
Then I open pgprof and start a new profiling session.

Profile: path to pgprof.out
Executable: path to executable program
Source: path to program sources

Then I get an error saying that pgprof.out and/or the executable are invalid. What could be wrong?
Additionally, when I run the program through pgcollect it is considerably slower.

But for me, it would be better to get access to the terminal and output the kernel timing info from the -ta=nvidia,time option. In the code I’m working with, there are a few variables holding the logical I/O unit numbers. They are defined as parameters in a separate file (which I have access to). The variable for writing to the terminal is called ITTY (equal to 6). Using
write(itty,*) ‘Something…’ or write(*,*) ‘Something…’ prints text on the terminal, but the timing info does not appear anywhere. Where is the accelerator timing info printed by default? (stdout? stderr?)
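As far as I know, the PGI runtime emits the -ta=nvidia,time report on stderr when the program exits, which would explain why it never shows up alongside writes to unit 6. If that’s the case, redirecting stderr to a file should capture it even when stdout is owned by the rest of the program. A minimal sketch of the idea, using a stand-in command in place of the real executable:

```shell
# Stand-in for the real program: a command that writes to stderr,
# the stream the accelerator timing report is assumed to use
sh -c 'echo "Accelerator Kernel Timing data" >&2' 2> timing.log

# The report now sits in the file instead of on the terminal
cat timing.log
```

With the real program you would just run ‘./my.exe 2> timing.log’ and read the report from the file afterwards.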


Funny thing - I’ve learned once again that it’s all about asking. When I ask about something, the answer often comes on its own, before anybody replies.

I’ve found that the -ta=nvidia,time option has to be added to the linking command as well, not only at the compilation stage. With that, I get a beautiful accelerator kernel timing report. :)
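For anyone landing here later, the fix amounts to passing the flag at both stages. A sketch with placeholder file names (main.f90 and my.exe are my assumptions, not names from the actual project):

```shell
# Compile with accelerator timing instrumentation enabled
pgfortran -c -ta=nvidia,time main.f90

# The same flag must also appear at link time; otherwise the
# timing support is not linked in and no report is printed
pgfortran -ta=nvidia,time main.o -o my.exe
```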

Thanks for reading!