Custom PyTorch inference on the Jetson Nano's GPU gets stuck

I'm trying to run code on a Jetson Nano 4GB Developer Kit, but it gets stuck when it reaches this line: feat = model(img, cam_label=camids, view_label=target_view)
I'm using pytorch==1.8.0 and torchvision==0.9.0 installed from the PyTorch for Jetson builds; the installation works and I tested it with CUDA.
The code runs fast on a GTX 950M, taking about 0.05 s. I have two model weight files: deit_base_distilled_patch16_224-df68dfff.pth (349.4 MB) and deit_veri.pth (413.2 MB).
This is the code:

camids = torch.tensor([0])
target_view = torch.tensor([0])
with torch.no_grad():
    img = img_tensor.to(device)
    camids = camids.to(device)
    target_view = target_view.to(device)
    feat = model(img, cam_label=camids, view_label=target_view)
The output shape is [1, 3840].
I don't understand why it gets stuck there. Can you help me, please?
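One thing worth ruling out when a script only *appears* to hang on the model call: CUDA kernels launch asynchronously, so the slow or blocking work can actually belong to an earlier line. A minimal sketch of how to time the forward pass honestly, using a tiny stand-in model (torch.nn.Linear here is only a placeholder, not the real DeiT model):

```python
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder model and input; substitute the real model and img_tensor.
model = torch.nn.Linear(16, 4).to(device).eval()
img = torch.randn(1, 16).to(device)

with torch.no_grad():
    if device == "cuda":
        torch.cuda.synchronize()  # flush any pending GPU work first
    start = time.time()
    feat = model(img)
    if device == "cuda":
        torch.cuda.synchronize()  # wait until the forward pass really finishes
    elapsed = time.time() - start

print(feat.shape, f"{elapsed:.4f} s")
```

Without the synchronize calls, time.time() can attribute queued GPU work to whichever line happens to block next.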


Hi,

Could you monitor the system with tegrastats to get the memory status?

$ sudo tegrastats
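If you want to track the RAM numbers over time rather than eyeball the console, a small sketch for pulling the memory figures out of a tegrastats line (the sample line below is illustrative only; the exact format varies by JetPack release):

```python
import re

# Illustrative tegrastats output line, not captured from a real device.
line = "RAM 3250/3964MB (lfb 4x2MB) SWAP 120/1982MB GR3D_FREQ 0%"

def parse_ram(line):
    """Return (used_mb, total_mb) parsed from a tegrastats line, or None."""
    m = re.search(r"RAM (\d+)/(\d+)MB", line)
    return (int(m.group(1)), int(m.group(2))) if m else None

used, total = parse_ram(line)
print(f"RAM: {used}/{total} MB ({100 * used / total:.0f}% used)")
```

Feeding it each line from `sudo tegrastats` (e.g. via subprocess) would give a simple memory log while the model runs.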

Thanks.

Thank you for your response. It worked; yesterday it was a RAM problem. But the GPU is barely working, and I don't understand why it takes 1 min 25 s to run. That's too slow. Any solution, please?

Hi,

Based on the log, the memory usage is quite high but the GPU is mostly idle.
Could you try a more lightweight model to see if that helps?

Thanks.

It works now. The first run takes 30 seconds, but after that it takes 0.05 seconds, which is great. I just set up the environment to keep it running and disabled the desktop GUI to free up more RAM.
Thank you for your help.
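The slow-first-run pattern described above is typical: the initial call absorbs one-time costs such as CUDA context creation and kernel loading, which are especially heavy on a Jetson. A common workaround is a warm-up pass before the timed inference; a minimal sketch with a placeholder model (the real model and input would replace the Linear layer and random tensor):

```python
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder model and input for illustration.
model = torch.nn.Linear(16, 4).to(device).eval()
img = torch.randn(1, 16).to(device)

with torch.no_grad():
    # Warm-up pass: pays the one-time startup cost so later calls are fast.
    model(img)
    if device == "cuda":
        torch.cuda.synchronize()

    start = time.time()
    feat = model(img)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.time() - start

print(f"steady-state latency: {elapsed:.4f} s")
```

Keeping the process alive after warm-up, as described above, avoids paying that startup cost on every request.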