Problem with Torch Computing

Hello!
I already have JetPack installed on the Jetson Nano, and all tests passed.
While preparing to run convolution computations with torch.nn.Conv2d, I noticed that the first computation is very slow, while subsequent ones run at normal speed. I would like to know how to solve this problem.
Here is my test code:

import torch
import torch.nn as nn
import time

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

epoch = 0
while epoch < 3:
    start_time = time.time()
    input_tensor = torch.randn([1, 3, 256, 256]).float().to(device)
    conv_layer = nn.Conv2d(3, 3, kernel_size=3, bias=True).to(device)
    
    output = conv_layer(input_tensor)
    # print(output)
    print(time.time() - start_time)  # the first epoch took about 5 minutes, later epochs took about 5 ms
    epoch += 1

Hi,

This is expected: the first run includes some one-time initialization and setup work.
Adding a few warmup iterations before you start timing should avoid this issue; see the sketch below.
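
For reference, a minimal warmup sketch could look like the following (it reuses the same Conv2d test as above; the iteration counts are arbitrary, and torch.cuda.synchronize() is added so the timings reflect the actual GPU work):

import time
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

input_tensor = torch.randn(1, 3, 256, 256, device=device)
conv_layer = nn.Conv2d(3, 3, kernel_size=3, bias=True).to(device)

# Warmup iterations: absorb the one-time CUDA/cuDNN initialization cost.
for _ in range(5):
    _ = conv_layer(input_tensor)
if device.type == 'cuda':
    torch.cuda.synchronize()

# Timed iterations: synchronize before reading the clock so the GPU work is included.
for _ in range(3):
    start_time = time.time()
    output = conv_layer(input_tensor)
    if device.type == 'cuda':
        torch.cuda.synchronize()
    print(time.time() - start_time)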

Thanks.

Hi, Thanks for your reply!
I see, but isn't the first-loop warmup taking too much time? Five minutes seems very long.

Hi,

Please first check whether any PTX JIT compilation is triggered when your script runs.
With the command below, you will get an error if JIT is required.

$ CUDA_DISABLE_PTX_JIT=1 python3 [app]

Thanks.
