Single source with multiple inference models in parallel

To be specific: I run video inference and audio inference in parallel on the same source.

  1. Does an issue in one of them block the processing of the other?
  2. Since video and audio require different streammux configurations, the audio path needs `export USE_NEW_NVSTREAMMUX=yes`. Can both of them exist in one pipeline? If yes, where should I set the USE_NEW_NVSTREAMMUX switch?
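To clarify what I mean by "setting the switch": a sketch of a launch script, where the environment variable is exported before the process starts (the app binary and config path are placeholders, not my actual setup):

```shell
#!/bin/sh
# Enable the new nvstreammux for this process only (required for audio).
# The variable must be set before the DeepStream app is launched, since
# it is read at element creation time, not at runtime.
export USE_NEW_NVSTREAMMUX=yes

# Placeholder launch command -- substitute the real binary and config:
# ./my-deepstream-app -c my_pipeline_config.txt

echo "USE_NEW_NVSTREAMMUX=$USE_NEW_NVSTREAMMUX"
```

My question is whether this process-wide variable can work when the same pipeline also contains a video-side streammux.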