Passing an image directly to imagenet without saving it to a file

I'm making my own inference program in which I capture an image from the camera and run imagenet on it. Looking at the guide on how to create my own Python image-recognition program,

I noticed that it uses a saved file for the image:

# load an image (into shared CPU/GPU memory)
img = jetson.utils.loadImage(args.filename)

This would mean that I have to:
capture the image → save it to disk → load it back up again.

That feels inefficient. Is there a way to skip the save-to-disk step and load the image directly into shared GPU/CPU memory?

Currently I capture my image using cv2 (OpenCV 4.5.1 with CUDA):

gstreamerCamera = cv2.VideoCapture(gstreamer_pipeline, cv2.CAP_GSTREAMER)
result, capturedImage = gstreamerCamera.read()

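If you want to keep the OpenCV capture path, one way to skip the disk round trip is to hand the captured frame to jetson_utils.cudaFromNumpy(), which copies a numpy array into shared CPU/GPU memory. A minimal sketch, assuming an imagenet-style network will consume the image (the GStreamer pipeline string below is only a placeholder — substitute your own):

```python
import numpy as np

def bgr_to_rgb(frame):
    # OpenCV frames are BGR; jetson-inference models expect RGB,
    # and cudaFromNumpy() wants a contiguous array
    return np.ascontiguousarray(frame[..., ::-1])

if __name__ == "__main__":
    # these imports need a Jetson with OpenCV and jetson-utils installed
    import cv2
    import jetson_utils

    # placeholder pipeline -- replace with your gstreamer_pipeline string
    gstreamer_pipeline = (
        "nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1280,height=720 ! "
        "nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! "
        "video/x-raw,format=BGR ! appsink"
    )
    camera = cv2.VideoCapture(gstreamer_pipeline, cv2.CAP_GSTREAMER)
    result, capturedImage = camera.read()
    if result:
        # copy the frame into shared CPU/GPU memory -- no file on disk
        img = jetson_utils.cudaFromNumpy(bgr_to_rgb(capturedImage))
```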

jetson_utils also supports reading data from the camera directly.
Please find the sample below:

$ ./ csi://0                 # MIPI CSI camera
$ ./ /dev/video0             # V4L2 camera


Okay, I got the image data from jetson_utils:

input = jetson_utils.videoSource("csi://0", argv=["--framerate=14","--width=4608","--height=2592"])

while True:
    # capture the next image
    img = input.Capture()

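The cudaImage that Capture() returns can be passed straight to the network's Classify() call, still without anything touching disk. A sketch of that loop — the "googlenet" model name is just an example, any jetson-inference classification model works:

```python
def describe(class_desc, confidence):
    # format a classification result, e.g. "tabby cat (87.3%)"
    return f"{class_desc} ({confidence * 100:.1f}%)"

if __name__ == "__main__":
    # requires a Jetson with jetson-inference/jetson-utils installed
    import jetson_inference
    import jetson_utils

    net = jetson_inference.imageNet("googlenet")   # example network
    camera = jetson_utils.videoSource(
        "csi://0",
        argv=["--framerate=14", "--width=4608", "--height=2592"])

    while True:
        img = camera.Capture()                     # cudaImage in shared memory
        class_id, confidence = net.Classify(img)   # inference directly on it
        print(describe(net.GetClassDesc(class_id), confidence))
```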
My next problem is how to post-process this, e.g. cropping and saving a copy. The object it returns is not the typical numpy.ndarray image (like what cv2 returns); it is <class 'jetson.utils.cudaImage'>. Is it possible to convert this into a numpy array, do my post-processing there, and convert it back to a cudaImage?

Otherwise, what post-processing does jetson.utils.cudaImage offer? Can you please link me to the documentation for that?

I actually found it just now :)


There are also some examples in the jetson-utils repo.
For example:

CUDA → Numpy

Numpy → CUDA
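A sketch of that round trip: cudaToNumpy() exposes the image as a numpy array you can crop or save with OpenCV, and cudaFromNumpy() copies the result back into a cudaImage. The crop rectangle below is an arbitrary example:

```python
def crop_roi(array, left, top, right, bottom):
    # numpy slicing is rows (y) first, then columns (x)
    return array[top:bottom, left:right]

if __name__ == "__main__":
    # requires a Jetson with OpenCV and jetson-utils installed
    import cv2
    import jetson_utils

    camera = jetson_utils.videoSource("csi://0")
    img = camera.Capture()

    array = jetson_utils.cudaToNumpy(img)          # numpy view of the image
    cropped = crop_roi(array, 100, 100, 500, 400)  # example rectangle
    # note: the array is RGB; convert with cv2.cvtColor(...,
    # cv2.COLOR_RGB2BGR) first if the saved colors matter
    cv2.imwrite("crop.jpg", cropped)               # save a copy if wanted

    img2 = jetson_utils.cudaFromNumpy(cropped)     # back to a cudaImage
```

jetson_utils also has CUDA-side helpers (e.g. cudaCrop) if you'd rather stay on the GPU — see the cuda examples in the repo linked above.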

