I have trained a model to recognize when I am performing sign language, and I have managed to load it along with its state dictionary. What I am trying to achieve is to load the model into a notebook, start a stream from my webcam, and apply the model to the webcam stream. I then just want it to print out, or display on the frame, the sign that it is seeing.
I did it with the following code:
```python
model = torchvision.models.resnet34()
model.fc = torch.nn.Linear(512, 6)
output = model(image)
model = model.eval()
```
I am getting this error on the `model.eval()` call:

```
AttributeError: '_IncompatibleKeys' object has no attribute 'eval'
```
I am also having webcam issues.
Using the code below:

```python
!ls -ltrh /dev/video*

from jetcam.usb_camera import USBCamera
camera = USBCamera(width=224, height=224, capture_device=0)
camera.running = True
camera.read()
```

gives me an error saying:

```
RuntimeError: Cannot read directly while camera is running
```
Which is odd, since this issue does not happen in the hello_camera notebook.
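From reading the jetcam source, my understanding is that `read()` is only allowed while `running` is `False`; once `running = True`, a background thread keeps refreshing `camera.value` and direct reads are deliberately blocked. A sketch of both modes as I understand them (untested on my side, and it needs an actual camera attached):

```python
from jetcam.usb_camera import USBCamera

camera = USBCamera(width=224, height=224, capture_device=0)

# Mode 1: one-shot capture -- leave `running` at False and call read()
frame = camera.read()

# Mode 2: background capture -- with running = True the capture thread
# refreshes camera.value, and read() raises the RuntimeError above
camera.running = True
frame = camera.value

# or attach a traitlets callback that fires on every new frame
def on_new_frame(change):
    frame = change["new"]
    # ... run the model on `frame` here ...

camera.observe(on_new_frame, names="value")
```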
Alternatively, I have tried to run this model on my laptop using OpenCV, but I am having trouble getting the model to predict on the frames.
```python
ret, frame = cam.read()
if cv2.waitKey(1) & 0xFF == ord('q'):
```

This gives the error:

```
RuntimeError: Could not infer dtype of ToTensor
```
Interestingly enough, I am not having issues with loading the model in Jupyter on my laptop. I got the "All keys matched successfully" message, but I'm having trouble with applying the model to webcam frames to classify what it sees.
Please do let me know if you need me to clarify the issue; I really appreciate any help I can get.