Using multiple input sources breaks TensorRT inference


I took the code here:

We trained it on our office colleagues' faces and were seeing decent performance.

But as soon as I create two parallel camera streams (simply two instances for two different cameras) and put the TensorRT inference inside a for loop, it can no longer recognize more than one person in the same camera stream.
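For reference, the loop structure I'm describing looks roughly like this (a minimal sketch with hypothetical names; the actual TensorRT engine call is stubbed out here, since the point is only the per-camera for loop around inference):

```python
def infer_faces(frame):
    """Stub for the TensorRT recognition step.

    In the real pipeline this runs the engine's execution context on
    the detected face crops and returns one label per detected face.
    """
    return [face["id"] for face in frame["faces"]]

def process_streams(streams):
    """One iteration over all camera streams, running inference
    for every stream inside a single for loop."""
    labels = {}
    for cam_id, frame in streams.items():
        labels[cam_id] = infer_faces(frame)
    return labels

# Two cameras; cam0 sees two faces -- this is the failing case,
# where every label after the first comes back as -1.
streams = {
    "cam0": {"faces": [{"id": 3}, {"id": 7}]},
    "cam1": {"faces": [{"id": 5}]},
}
print(process_streams(streams))
```

With the stub, all faces are of course labeled; in the real setup the second face in cam0 is the one that returns -1.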

What works:
Single camera with any number of people
Multiple cameras, with each camera seeing only one face

What doesn't work:
Multiple cameras where any one camera detects more than one face.
The `lab` variable shows -1 for all the recognition labels except the first one.

To be sure, I tried it on a saved video file: for the same input, the single-source setup is able to classify all the detections.