Camera.capture getting stuck when using multiprocessing

I am running into an issue with camera.capture while trying to run multiple detections at the same time. I am using two USB webcams that are triggered separately based on inputs to the board, and the board's outputs are then driven by the detections.

I’m using the multiprocessing library to do this, and the processes are starting up just fine. The problem occurs when I try to capture an image from inside one of those processes.

Here is the script I’m running:
ThreadNailDetection.py (5.7 KB)

Here is where it gets stuck on the command line: (Sorry for not having this in txt format)

I’ve also tried capturing an image with cv2 and converting it to a cudaImage to then process, but it gets stuck in jetson.utils.cudaFromNumpy.

Anybody able to help me solve this issue?

Just realized a small mistake in the script I uploaded. Here is the most up-to-date script I was running.

ThreadNailDetection.py (5.7 KB)

Hi @gavin.goodier, I don’t think the multiprocessing module would work for this, because those spawn separate processes, and the CUDA memory can’t be shared across processes. Also, the videoSource implementations are already threaded underneath (in the C++ code that implements them). So I would just keep one Python thread for your application.

Also, if you are able to use the same detection model on both webcams, then you only need one instance of the detectNet object (and you can process multiple images with it). Running multiple detectNet models across multiple threads may have diminishing returns on Nano because the GPU utilization would already be at 100%.
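Just to illustrate the single-thread, single-model pattern: the sketch below uses stub classes in place of jetson.utils.videoSource and jetson.inference.detectNet (those stubs are assumptions for illustration, not the real API), so it only shows the loop structure of one shared detector servicing both cameras.

```python
# Minimal sketch: one shared detector instance servicing two capture
# sources from a single thread. StubCamera / StubDetector stand in for
# jetson.utils.videoSource and jetson.inference.detectNet.
class StubCamera:
    def __init__(self, name):
        self.name = name
        self.frame = 0

    def Capture(self):
        self.frame += 1
        return (self.name, self.frame)  # placeholder for a cudaImage

class StubDetector:
    def __init__(self):
        self.calls = 0

    def Detect(self, img):
        self.calls += 1
        return []  # detections would go here

cameras = [StubCamera("cam0"), StubCamera("cam1")]
net = StubDetector()  # one model instance shared by both cameras

for _ in range(3):  # a few iterations of the main loop
    for cam in cameras:
        img = cam.Capture()
        detections = net.Detect(img)

print(net.calls)  # one Detect() call per camera per iteration
```

The point is just that nothing about the capture/detect loop requires a process (or even a thread) per camera.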

Hey @dusty_nv thanks for the speedy response.

The problem with running a single thread is that the cameras are triggered off of separate real-life events. My goal is to run both cameras simultaneously but also independently of each other.

My main problem with running a single thread is holding the output on for 0.4 seconds after a camera is triggered. It’s possible for one camera to be triggered and then sleep for 0.4 seconds, and by the time that sleep is done, the other camera’s trigger could have come on and gone back off again.

Any suggestions on how to get around that?

Thanks

I would just have a frame counter for each camera, and when a trigger occurs, set that counter to 15 frames (or whatever your framerate × 0.4 works out to). Then each frame, decrement the counter. Kind of like this pseudocode:

cameras = [camera_0, camera_1]
frame_counters = [0, 0]
triggers = [False, False]

while True:
    # first poll your triggers (this would be your code)
    triggers = update_triggers()

    # a trigger resets that camera's counter to the full hold window
    for idx, trigger in enumerate(triggers):
        if trigger:
            frame_counters[idx] = 15

    # process each camera whose counter is still running
    for idx, frame_counter in enumerate(frame_counters):
        if frame_counter <= 0:
            continue

        img = cameras[idx].Capture()
        detections = net.Detect(img)

        # do something with the detections

        # decrement this camera's frame counter
        frame_counters[idx] -= 1
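For what it’s worth, the counter logic can be simulated without any camera hardware. In the runnable sketch below, the camera is a stub (an assumption for illustration; on the Jetson it would be jetson.utils.videoSource, and net.Detect() from jetson.inference would run where the comment indicates), and the trigger schedule is hard-coded so that camera 1 fires while camera 0 is still counting down, which is exactly the overlap case described above.

```python
# Runnable simulation of the frame-counter pattern. StubCamera stands
# in for jetson.utils.videoSource; detections are omitted.
class StubCamera:
    def __init__(self):
        self.captures = 0

    def Capture(self):
        self.captures += 1
        return object()  # placeholder image

cameras = [StubCamera(), StubCamera()]
frame_counters = [0, 0]
HOLD_FRAMES = 15  # ~0.4 s worth of frames; use int(framerate * 0.4)

# Simulated trigger schedule: camera 0 fires on loop 0, camera 1 fires
# on loop 5, i.e. while camera 0's counter is still running.
trigger_schedule = {0: [True, False], 5: [False, True]}

for loop in range(40):
    triggers = trigger_schedule.get(loop, [False, False])

    for idx, trigger in enumerate(triggers):
        if trigger:
            frame_counters[idx] = HOLD_FRAMES

    for idx, frame_counter in enumerate(frame_counters):
        if frame_counter <= 0:
            continue
        img = cameras[idx].Capture()
        # detections = net.Detect(img) would run here
        frame_counters[idx] -= 1

print(cameras[0].captures, cameras[1].captures)  # 15 15
```

Each camera gets captured for exactly 15 frames after its own trigger, independently of the other, and the main loop never blocks in a sleep.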