Hi @gavin.goodier, I don’t think the multiprocessing module would work for this, because it spawns separate processes, and CUDA memory can’t be shared across processes. Also, the videoSource implementations are already threaded underneath (in the C++ code that implements them). So I would just keep one Python thread for your application.
Also, if you are able to use the same detection model on both webcams, then you only need one detectNet instance (and you can process multiple images with it) — see the sketch below. Running multiple detectNet models across multiple threads may have diminishing returns on Nano, because GPU utilization will already be at 100%.
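Here’s a minimal sketch of the one-thread / one-model approach, assuming two V4L2 webcams at /dev/video0 and /dev/video1 and the ssd-mobilenet-v2 model — substitute whichever devices and network you’re actually using:

```python
from jetson_inference import detectNet
from jetson_utils import videoSource

net = detectNet("ssd-mobilenet-v2", threshold=0.5)   # one model shared by both cameras

# assumed device paths -- change these to match your webcams
cameras = [videoSource("/dev/video0"), videoSource("/dev/video1")]

while all(cam.IsStreaming() for cam in cameras):
    for idx, camera in enumerate(cameras):
        img = camera.Capture()          # capture is already threaded in the C++ layer
        if img is None:                 # capture timeout, skip this camera for now
            continue
        detections = net.Detect(img)    # the same detectNet instance processes every frame
        print(f"camera {idx}: {len(detections)} objects detected")
```

You could also alternate which camera you capture from each loop iteration, or render the frames with videoOutput instead of printing — the main point is that one detectNet instance and one Python thread are enough.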