Convert Images From the Camera Format to the Neural Network Input Format Using a CSI Camera

This is the code for the USB camera:

import traitlets
from IPython.display import display
import ipywidgets.widgets as widgets
from jetbot import Camera, bgr8_to_jpeg

camera = Camera.instance(width=224, height=224)
image = widgets.Image(format='jpeg', width=224, height=224)
blocked_slider = widgets.FloatSlider(description='blocked', min=0.0, max=1.0, orientation='vertical')
speed_slider = widgets.FloatSlider(description='speed', min=0.0, max=1.0, orientation='horizontal')

camera_link = traitlets.dlink((camera, 'value'), (image, 'value'), transform=bgr8_to_jpeg)

display(widgets.HBox([image, blocked_slider]), speed_slider)

However, I am using the CSI camera, and I'm struggling to turn the camera image into a widget. I'm not sure how to do this with the CSI camera. Any help would be appreciated. Thanks in advance.
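For reference, on Jetson boards a CSI camera is usually read through a GStreamer pipeline built around nvarguscamerasrc. Here is a minimal sketch of that approach, assuming an OpenCV build with GStreamer support (the stock JetPack build has it); the capture resolution and frame rate below are typical IMX219 values and may need adjusting for your sensor:

```python
# Sketch: reading a CSI camera on Jetson via GStreamer + OpenCV.
# Resolution/framerate values are assumptions for an IMX219 sensor.

def gst_pipeline(capture_width=3280, capture_height=2464,
                 display_width=224, display_height=224, fps=21):
    """Build an nvarguscamerasrc pipeline string for a CSI camera."""
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={capture_width}, height={capture_height}, "
        f"format=NV12, framerate={fps}/1 ! "
        "nvvidconv ! "
        f"video/x-raw, width={display_width}, height={display_height}, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )

def read_frame():
    """Grab one BGR frame from the CSI camera (requires Jetson hardware)."""
    import cv2  # imported here so the pipeline builder works without OpenCV installed
    cap = cv2.VideoCapture(gst_pipeline(), cv2.CAP_GSTREAMER)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not read from CSI camera")
    return frame  # display_height x display_width x 3, uint8, BGR
```

A frame obtained this way can then be pushed into the widget the same way as in the USB example, e.g. `image.value = bgr8_to_jpeg(read_frame())`.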

hello akashsivapalanspam,

A CSI camera and a USB camera are very different; you cannot use the same code to process them.
What is the required input format of your neural network? Could you please refer to the MMAPI sample, 09_camera_jpeg_capture.

I'm using the ResNet-18 neural network and want to test whether the model works. I have a "best_model_resnet18.pth" file from training on the dataset, which consists of chair images labeled either 'blocked' or 'free'.
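For a ResNet-18 classifier like this, the camera frame typically has to be converted from a 224x224 BGR uint8 image to a normalized CHW float tensor before inference. A minimal NumPy sketch of that conversion, assuming the usual ImageNet mean/std normalization (use whatever your training pipeline actually used):

```python
import numpy as np

# ImageNet normalization constants — an assumption; match your training code.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(bgr_frame):
    """Convert a 224x224x3 uint8 BGR frame to a 1x3x224x224 float32 input."""
    rgb = bgr_frame[:, :, ::-1].astype(np.float32) / 255.0  # BGR -> RGB, scale to [0, 1]
    rgb = (rgb - MEAN) / STD                                # channel-wise normalization
    chw = np.transpose(rgb, (2, 0, 1))                      # HWC -> CHW
    return chw[np.newaxis, ...]                             # add batch dimension
```

The result can then be wrapped with `torch.from_numpy(...)` and passed to the loaded model, with a softmax over the two outputs giving the 'blocked' probability for the slider.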

I'm closing this topic since there has been no update from you for a while, assuming the issue was resolved.
If you still need support, please open a new topic. Thanks

hello akashsivapalanspam,

what’s the input format of the model? JPEG images?