When using nvprof to profile a deep neural network built with Keras, the profiling process eventually gets stuck in what looks like an infinite loop, repeatedly printing “Replaying kernel cgemm_sm35_ldg_tn_64x8x64x16x16” and never stopping (it has run for days, and would most probably run forever).
Some details on configuration:
CUDA 8.0/ cuDNN 5.1
Tesla K40 GPU
First of all, sorry for the trouble.
Which nvprof command are you using?
Basically, if you want to collect many metrics/events, nvprof will replay each kernel to get the results.
Could you please try collecting just one, or only a few, metrics/events?
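For example, a run that collects a single event or a single metric might look like this (a sketch only; "python my_app.py" is a placeholder for your actual launch command, and the event/metric names are just common examples):

```shell
# Collect a single event (replace "python my_app.py" with your real launch command)
nvprof --events warps_launched python my_app.py

# Collect a single, simple metric
nvprof --metrics achieved_occupancy python my_app.py
```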
I’m using nvprof from CUDA 8.
As for the metrics/events to be collected, my observation is that nvprof works well when profiling a TensorFlow deep neural network with events and some ‘simple’ metrics (e.g. l1_cache_global_hit_rate). However, if the metric involves GPU time (say, throughput-like metrics, or flop_sp_efficiency), then even if it is the only metric to collect, nvprof gets stuck in the “Replaying kernel cgemm_sm35_ldg_tn_64x8x64x16x16” loop.
I’ll find a Tesla and try with Tensorflow.
I’ll get back to you once I finished.
I have found a Tesla GPU and installed TensorFlow and Keras.
Here are some details that need your confirmation:
- Which neural network are you using?
- Is there anything else I need to download, such as a training dataset? If so, how and where do I get it?
- Can you tell me the exact command or steps that reproduce the issue?
I reproduced the issue using the TensorFlow Inception v3 model and have already filed a bug with the dev team.
I will post an update once I hear anything.
Thanks for raising this.
I appreciate your effort, and I'm looking forward to a quick fix from the dev team.
This slowdown is probably not caused by a deadlock. Deep learning applications launch a very large number of kernels in rapid succession, and each of these kernels is usually small and lightweight. These apps also rely heavily on concurrency, which means that multiple kernels are launched concurrently from several streams.
When you attempt to profile a metric or event with nvprof, all the concurrent kernels in the application are serialized - i.e. they are launched one after the other. This is what causes the tremendous slowdown.
Furthermore, metrics like flop_sp_efficiency cannot be profiled in a single pass, and the kernel needs to be replayed to measure them. This further increases profiling time.
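If replaying every kernel is too costly, one option is to restrict metric collection to the kernels you care about with nvprof's --kernels filter. This is a sketch; the exact filter syntax can vary between CUDA versions, so check "nvprof --help", and the launch command below is a placeholder:

```shell
# Collect flop_sp_efficiency only for kernels whose name matches "cgemm",
# so other kernels are not replayed.
# (replace "python my_tensorflow_app" with your actual launch command)
nvprof --kernels cgemm --metrics flop_sp_efficiency python my_tensorflow_app
```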
The good news is that deep learning apps launch the same kernels over and over again, and their performance won’t vary much across different runs. So you can get a meaningful picture of the performance profile using the following steps:
- Use the Visual Profiler to get a trace of the application, without doing any profiling. You can run the application with default settings from within the Visual Profiler. Alternatively, you can use the command "nvprof -o foo.nvprof python my_tensorflow_app" and load the resulting foo.nvprof file into the Visual Profiler.
Viewing the trace in the Visual Profiler will give you a good idea of how the application is launching kernels. Note that a pure tracing run like this, without profiling, will not serialize kernels and hence won’t cause the slowdown.
- Now run the application with your previous profiling command. Just as before, you will experience a slowdown. After a few minutes, kill the application early with Ctrl+C. nvprof will report the performance metrics of the kernels that finished up to that point. You should get metrics for nearly all kernels. This data should be meaningful and representative of the rest of your application, since the same kernels repeat again and again.
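If you prefer not to interrupt the run by hand, the same early-exit behavior can be scripted with the coreutils "timeout" command, which sends a signal after a fixed interval. The 10-minute cutoff and the launch command below are just placeholders:

```shell
# Send SIGINT (the same signal as Ctrl+C) after 10 minutes, so nvprof
# still prints the metrics it has collected so far.
timeout --signal=INT 600 nvprof --metrics flop_sp_efficiency python my_tensorflow_app
```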
I hope this helps.