Coral USB accelerator with Jetson Nano


I am planning to run two inference pipelines backed by two different underlying models. I am evaluating the option of attaching a Google Coral USB Accelerator to the Jetson Nano over USB. With this setup, I would run one inference pipeline on the Coral USB Accelerator and the other on the CPU. I would use the GPU only to record the video and to generate the resulting frames. Do you see any issue with this approach? I also need to upload the inference results to the AWS cloud, which would happen through the Jetson CPU.
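As a minimal sketch of the split described above, the two inference paths can run in separate threads feeding a shared queue that a CPU-side uploader drains. The function names (`infer_on_coral`, `infer_on_cpu`, `upload_results`) are hypothetical placeholders: in a real deployment the Coral path would invoke a `tflite_runtime`/pycoral interpreter with the Edge TPU delegate, and the uploader would call AWS (e.g. via boto3) instead of appending to a list.

```python
import queue
import threading

# Hypothetical stand-ins for the two inference paths. In practice these
# would call an Edge TPU interpreter (pycoral/tflite_runtime) and a
# CPU-backed model respectively.
def infer_on_coral(frame):
    return {"device": "edgetpu", "frame": frame}

def infer_on_cpu(frame):
    return {"device": "cpu", "frame": frame}

def worker(infer_fn, frames, results):
    # Run one inference path over the frame stream.
    for frame in frames:
        results.put(infer_fn(frame))
    results.put(None)  # sentinel: this stream is done

def upload_results(results, n_streams, uploaded):
    # Stand-in for the AWS upload running on the Jetson CPU
    # (a real deployment might use boto3 here).
    done = 0
    while done < n_streams:
        item = results.get()
        if item is None:
            done += 1
        else:
            uploaded.append(item)

results = queue.Queue()
uploaded = []
frames = list(range(4))  # placeholder for captured video frames

workers = [
    threading.Thread(target=worker, args=(infer_on_coral, frames, results)),
    threading.Thread(target=worker, args=(infer_on_cpu, frames, results)),
]
uploader = threading.Thread(target=upload_results, args=(results, 2, uploaded))

for t in workers + [uploader]:
    t.start()
for t in workers + [uploader]:
    t.join()

print(len(uploaded))  # 4 frames x 2 inference paths = 8 results
```

This keeps the Edge TPU and CPU paths independent, so a stall in one model does not block the other; only the upload queue is shared.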


TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Hi, this looks like a Jetson issue. We recommend raising it on the respective platform via the link below.
