Selectively run inference on a batched video input

I tried that but ran into errors. I made a more specific forum post about it: Deepstream parallel inference failing to produce video output with 'nvmultistreamtiler'