No CUDA available upon checking, and also Multi-Process Services (MPS)

Hello, I have a concern: I can't run my program because of the following MPS error.

elif torch.backends.mps.is_available():
AttributeError: module 'torch.backends' has no attribute 'mps'

So I tried checking MPS and CUDA availability on my device with this code I found on the internet:

import torch

cuda_available = torch.cuda.is_available()
mps_available = cuda_available and int(torch.version.cuda.split('.')[0]) >= 9

if cuda_available:
    print("CUDA is available.")
else:
    print("CUDA is not available.")

if mps_available:
    print("MPS is available.")
else:
    print("MPS is not available.")

Upon checking, neither MPS nor CUDA is reported as available on my Jetson Nano, but nvcc --version shows that CUDA 10.2 is installed. Can you help me fix this problem? Thank you in advance for your assistance.
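For reference, a small diagnostic snippet (using only standard torch attributes) can show what the installed PyTorch wheel itself reports; a CPU-only build will print None for the built-in CUDA version even when nvcc is installed system-wide:

import torch

# What the installed wheel was built with; None here means a CPU-only build
print("torch version:", torch.__version__)
print("built-in CUDA version:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))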

@Ken9112 I'm unfamiliar with MPS, but it looks like it is for macOS Metal GPU support and not applicable to NVIDIA Jetson:

https://pytorch.org/docs/stable/notes/mps.html

So the program should be modified to not use MPS. Or, since PyTorch is open source, you can recompile it however you want (although that may take prohibitively long on the Nano).
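As a rough sketch (not the exact code from your program), the device-selection logic can be written so it only touches torch.backends.mps when that attribute actually exists, and otherwise falls back to CUDA or CPU:

import torch

def pick_device():
    # Prefer CUDA (the GPU on Jetson); only probe MPS if this torch build exposes it
    if torch.cuda.is_available():
        return torch.device("cuda")
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
print("using device:", device)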

Hi sir dusty,

This is the error I get when trying to run easyocr in my program using the GPU.

[screenshot attachment: for forum2]

If I run easyocr on the CPU it works with still images, but when I integrate it with my program's real-time video feed, the program crashes and displays the error "Illegal Instruction (core dumped)".

@Ken9112 the "Illegal Instruction (core dumped)" sounds like a known OpenBLAS issue on ARM (try running export OPENBLAS_CORETYPE=ARMV8 in your terminal first)
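If setting that in every terminal is awkward, the same workaround can be applied at the very top of the script, before numpy/torch (and therefore OpenBLAS) are imported; this is just a sketch of the environment-variable approach:

import os

# Must run before any module that loads OpenBLAS (numpy, torch, easyocr),
# equivalent to `export OPENBLAS_CORETYPE=ARMV8` in the shell
os.environ.setdefault("OPENBLAS_CORETYPE", "ARMV8")

import numpy
import torch
import easyocr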

Regarding torch.mps, I think you may need to edit or patch easyocr to remove the reference to torch.mps - either the torch you’re able to run on Nano is too old to have it (torch 1.10 for Python 3.6 and CUDA 10.2), or it’s not present because those APIs aren’t available outside of Apple silicon. It is pretty common that I have to patch packages like this in my dockerfiles (less so since I started using USE_DISTRIBUTED=on in all my PyTorch wheels in the containers, so at least there is torch.distributed)
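As an alternative to editing the easyocr source directly, one option (a sketch, assuming the only failure is the AttributeError when torch.backends.mps is probed) is to install a small stub before easyocr is imported:

import types
import torch

if not hasattr(torch.backends, "mps"):
    # Older torch builds (e.g. 1.10 on Nano) have no MPS backend at all;
    # provide a stub that simply reports MPS as unavailable
    torch.backends.mps = types.SimpleNamespace(
        is_available=lambda: False,
        is_built=lambda: False,
    )

import easyocr  # import after the stub so its device checks see it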
