Error encountered when using YOLO for image classification.
- Inference worked fine for the first few runs, but after a while the error below kept occurring (a minimal sketch of the inference loop is shown after this list).
- I checked the GPU status at the time of the error and it appeared normal.
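For reference, a minimal sketch of the kind of classification loop that triggers the error. The model weights, image source, and loop count here are placeholders, not the exact script that produced the traceback:

```python
from ultralytics import YOLO

# Placeholder model and image source for illustration only.
model = YOLO("yolov8n-cls.pt")  # classification model

for i in range(1000):
    results = model("image.jpg", device=0)   # run inference on the GPU
    top1 = results[0].probs.top1             # this lookup is where the CUDA error eventually surfaces
    print(i, top1)
```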
```
  File "ultralytics\engine\results.py", line 1458, in top1
    return int(self.data.argmax())
RuntimeError: CUDA error: unknown error
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

  File "ultralytics\engine\model.py", line 180, in __call__
    return self.predict(source, stream, **kwargs)
  File "ultralytics\engine\model.py", line 558, in predict
    return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
  File "ultralytics\engine\predictor.py", line 173, in __call__
    return list(self.stream_inference(source, model, *args, **kwargs))  # merge list of Result into one
  File "torch\utils\_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "ultralytics\engine\predictor.py", line 254, in stream_inference
    with profilers[0]:
  File "ultralytics\utils\ops.py", line 46, in __enter__
    self.start = self.time()
  File "ultralytics\utils\ops.py", line 61, in time
    torch.cuda.synchronize(self.device)
  File "torch\cuda\__init__.py", line 792, in synchronize
    return torch._C._cuda_synchronize()
RuntimeError: CUDA error: unknown error
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
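The error message suggests rerunning with `CUDA_LAUNCH_BLOCKING=1` so kernel launches are synchronous and the stacktrace points at the real failing operation. A minimal sketch of one way to do that (the environment variable must be set before CUDA is initialized; the model and image paths are placeholders):

```python
import os

# Make CUDA kernel launches synchronous so the reported stacktrace
# corresponds to the operation that actually failed.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

from ultralytics import YOLO

model = YOLO("yolov8n-cls.pt")        # placeholder weights
results = model("image.jpg", device=0)
print(results[0].probs.top1)
```

Equivalently, the variable can be set in the shell before launching the script, e.g. `CUDA_LAUNCH_BLOCKING=1 python predict.py`.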
GPU state after the error: