I am planning to run two inference pipelines backed by two different underlying models. I am evaluating the option of attaching a Google Coral USB Accelerator to the Jetson Nano over USB. With this setup, one model would run on the Coral USB accelerator and the other on the CPU, while the GPU would be used only to record video and to generate the resulting frames. Do you see any issues with this approach? I also need to upload the inference results to the AWS cloud, which would happen through the Jetson's CPU.
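For concreteness, the setup described above could be structured as two worker threads (one per model) fed from a shared frame source, with a results queue that the CPU-side upload logic drains. This is only a hypothetical sketch using the standard library; `infer_coral` and `infer_cpu` are placeholder names standing in for the real Edge TPU (e.g. pycoral/TFLite) and CPU inference calls, and the AWS upload step is omitted.

```python
import queue
import threading

def infer_coral(frame):
    # Placeholder: the real version would invoke a TFLite interpreter
    # created with the Edge TPU delegate (e.g. via pycoral).
    return ("coral", frame)

def infer_cpu(frame):
    # Placeholder: the real version would run the second model on the CPU.
    return ("cpu", frame)

def worker(infer_fn, frames, results):
    # Pull frames until a None sentinel arrives, pushing results downstream.
    while True:
        frame = frames.get()
        if frame is None:
            break
        results.put(infer_fn(frame))

def run_pipeline(frames_in):
    coral_q, cpu_q = queue.Queue(), queue.Queue()
    results = queue.Queue()  # the CPU would drain this for the AWS upload
    threads = [
        threading.Thread(target=worker, args=(infer_coral, coral_q, results)),
        threading.Thread(target=worker, args=(infer_cpu, cpu_q, results)),
    ]
    for t in threads:
        t.start()
    for frame in frames_in:  # feed every frame to both models
        coral_q.put(frame)
        cpu_q.put(frame)
    coral_q.put(None)        # sentinels shut the workers down
    cpu_q.put(None)
    for t in threads:
        t.join()
    return [results.get() for _ in range(results.qsize())]

out = run_pipeline(range(3))
```

One practical caveat this sketch surfaces: both pipelines share the Nano's USB bandwidth and CPU for pre/post-processing, so the CPU-side model, the frame feeding, and the cloud upload all compete for the same cores.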