Is it possible to support multi-device inference using both Xavier A and Xavier B on the DRIVE AGX? We are running into speed issues with an application on Xavier A alone, and were wondering whether we can also incorporate Xavier B into our inference pipeline.
In the meantime we are also looking at running at lower precision (FP16 or INT8) with TensorRT, but we would love to know whether this parallel, multi-device workstream is supported.