I am working on a project to process images from a camera in real time using a neural network. I have already trained the model, and it works quite well with the Keras `predict` function, producing processed output images from my input images. Now the goal is to do the same thing live with the same model using TensorFlow and Keras. For example, I have a video that does not look good (blurry, dark, distorted), and I want to use my model to process the video so that it looks better, the way it did on my single images. It would be nice if someone could help me.
You can read camera data into a NumPy buffer with OpenCV.
Below is an example from one of our users:
You can then feed the image buffer into TensorFlow to get the expected output.
Thanks for the link. My issue is not about reading frames from the camera; I have already done that with OpenCV. The issue is how to use TensorFlow to run prediction on each frame in real time with my trained model. My trained model weighs 90 MB, and the idea is to predict a processed frame from each frame captured by the camera, then put all the processed frames together to reconstruct an output video. I don't really know how to implement it.
You can get the TensorFlow package for Jetson from the page below:
The usage on Jetson is similar to using TensorFlow in a desktop environment.
For example, you can check the page below for running inference with a Keras model: