I have created a TensorFlow model, converted it to ONNX format, and used trtexec to generate a .trt file. (I followed along with this notebook: TensorFlow 2 through ONNX.)
I'm able to run the .trt file on images loaded in NumPy format, as shown in the notebook.
I need help with using CUDA images (<class 'jetson.utils.cudaImage'>) and the .trt file together for inference.
Edit:
I'm providing more details:
I have trained a ResNet-50 model with input shape (1, 224, 224, 3) and output shape (1, 1), so it predicts the probability of a single class.
I was able to run inference with the .trt file on images loaded as NumPy arrays.
Now I want to use this .trt model to classify the webcam video at /dev/video0.
I wanted to use the jetson.utils module to read images from the webcam and classify them, but I'm getting a CUDA image and I don't know how to use a CUDA image with the .trt engine.
Any help regarding this is appreciated. Thank you!
In the sample above, host_inputs[0] is the CPU (host) buffer and cuda_inputs[0] is the corresponding GPU (device) buffer.
So to run TensorRT with jetson-utils, you can just copy the CUDA image into cuda_inputs[0].
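A minimal sketch of that flow, assuming a TensorRT 8-style bindings API and pycuda-allocated host/device buffers as in the notebook; the staging through `cudaToNumpy()`, the [0, 1] scaling, and the camera already delivering 224x224 frames are assumptions about your setup:

```python
import numpy as np

def preprocess(frame_hwc):
    """Scale a (224, 224, 3) uint8 RGB frame to float32 in [0, 1] and add a
    batch dimension, matching the assumed (1, 224, 224, 3) engine input."""
    x = np.asarray(frame_hwc, dtype=np.float32) / 255.0
    return np.expand_dims(x, axis=0)

def classify_webcam(engine_path="model.trt", device="/dev/video0"):
    # These imports only work on a Jetson with TensorRT, pycuda and
    # jetson-utils installed, so they are kept inside the function.
    import tensorrt as trt
    import pycuda.driver as cuda
    import pycuda.autoinit  # noqa: F401 -- creates the CUDA context
    import jetson.utils

    logger = trt.Logger(trt.Logger.WARNING)
    with open(engine_path, "rb") as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()
    stream = cuda.Stream()

    # Allocate host/device buffer pairs, one per binding, as in the notebook.
    host_inputs, cuda_inputs, host_outputs, cuda_outputs, bindings = [], [], [], [], []
    for binding in engine:
        size = trt.volume(engine.get_binding_shape(binding))
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host_mem = cuda.pagelocked_empty(size, dtype)
        device_mem = cuda.mem_alloc(host_mem.nbytes)
        bindings.append(int(device_mem))
        if engine.binding_is_input(binding):
            host_inputs.append(host_mem)
            cuda_inputs.append(device_mem)
        else:
            host_outputs.append(host_mem)
            cuda_outputs.append(device_mem)

    camera = jetson.utils.videoSource(device)
    while True:
        img = camera.Capture()                 # jetson.utils.cudaImage
        frame = jetson.utils.cudaToNumpy(img)  # maps the image into NumPy
        # If the camera does not deliver 224x224 frames, resize first
        # (e.g. with jetson-utils' CUDA resize routines).
        np.copyto(host_inputs[0], preprocess(frame).ravel())

        cuda.memcpy_htod_async(cuda_inputs[0], host_inputs[0], stream)
        context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
        cuda.memcpy_dtoh_async(host_outputs[0], cuda_outputs[0], stream)
        stream.synchronize()

        probability = float(host_outputs[0][0])  # (1, 1) output -> scalar
        print(f"class probability: {probability:.3f}")
```

This stages the frame through the host buffer for simplicity. A direct device-to-device copy (pycuda's `memcpy_dtod` from the image's pointer into cuda_inputs[0]) avoids the host round-trip, but only works if the image format and dtype already match the engine's input exactly.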