This is the error I'm getting when I try to run my code:
2022-05-08 10:39:07.301695: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_UNKNOWN: unknown error
2022-05-08 10:39:07.303992: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: DESKTOP-PO39R5L
2022-05-08 10:39:07.304123: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: DESKTOP-PO39R5L
2022-05-08 10:39:07.304387: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
File "C:\Users\user\Desktop\face_mask_detection\src\face_mask_detection.py", line 4, in <module>
model=load_model("./model2-009.model")
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\saved_model\load.py", line 977, in load_internal
raise FileNotFoundError(
FileNotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ./model2-009.model\variables\variables
You may be trying to load on a different device from the computational device. Consider setting the experimental_io_device
option in tf.saved_model.LoadOptions
to the io_device such as '/job:localhost'.
Any ideas as to why?