I’m currently working on a Python-based application that performs basic object detection using TensorFlow. My application is modeled on https://github.com/datitran/object_detector_app: the main process takes frames from the camera (which runs in a separate process) and passes them into a queue, which is then read by n worker processes that perform the inference. This works fine if I only have 1 worker performing inference. With more than 1 worker, each worker loads the graph into memory, processes exactly 1 frame of data, then fails without raising an exception. The only way I can tell it has failed is that the application begins spawning another worker process.
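For reference, a minimal sketch of the pipeline shape described above, with the TensorFlow graph loading and `sess.run` inference replaced by placeholders so only the queue plumbing is shown (function and variable names here are illustrative, not the actual code from the linked repo):

```python
import multiprocessing as mp

def worker(input_q, output_q):
    # In the real app, each worker would load the frozen TF graph here,
    # once per process, before entering its loop.
    while True:
        frame = input_q.get()
        if frame is None:  # sentinel value: shut this worker down cleanly
            break
        # Placeholder for the actual sess.run(...) inference call.
        output_q.put(("detections", frame))

def run_pipeline(num_workers=2, num_frames=6):
    input_q = mp.Queue()
    output_q = mp.Queue()
    workers = [mp.Process(target=worker, args=(input_q, output_q))
               for _ in range(num_workers)]
    for w in workers:
        w.start()
    # Stands in for the camera process feeding frames into the queue.
    for i in range(num_frames):
        input_q.put(i)
    results = [output_q.get() for _ in range(num_frames)]
    # One sentinel per worker so each loop exits.
    for _ in workers:
        input_q.put(None)
    for w in workers:
        w.join()
    return results
```

With a single worker this pattern behaves the same on CPU and GPU; the failure mode described above only appears once multiple workers each try to hold the GPU.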
I’ve run the exact same code on a PC running the CPU version of TF with no problems. If anyone has any ideas about why this is happening, I would love to hear them. Failing that, does anyone know of a way to increase the framerate without spawning additional worker processes?