Multi-Process Service (MPS) client/server functionality on Jetson TX2

I want to utilize the CUDA MPS functionality on the Jetson TX2 for concurrent task execution. But I realized that MPS may not be possible on the TX2. I can’t find any documentation proving this claim, however. If anyone has details regarding this, please shed some light/point to a resource.

Thanks,
Sai

MPS is strictly x86…Jetson is aarch64.
https://en.wikipedia.org/wiki/MultiProcessor_Specification

This doesn’t seem to be related to CUDA MPS. If I’m not wrong, that link is about Intel’s multi-processor architecture specification, not the CUDA MPS client/server.

I don’t know about MPS related to CUDA, so it seems I answered in the wrong context.

Hi,

TX2 doesn’t support MPS.
MPS depends on nvidia-smi and NVML (the NVIDIA Management Library), which are not available on the Jetson platform.

Here is a document for your reference:

Thanks! So, there is no support at all for concurrent execution of tasks on the Jetson, if they are coming in from different processes?

Let’s track this issue on topic 1024457:
https://devtalk.nvidia.com/default/topic/1024457/jetson-tx2/concurrent-task-execution-from-multiple-processes-on-jetson-tx2/
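For what it’s worth, concurrency from a single process is still possible on the TX2 via CUDA streams, which is a common workaround when MPS is unavailable. A minimal sketch (the kernel and sizes here are purely illustrative, not from the thread):

```cuda
// Sketch: overlapping independent work with CUDA streams in one process.
// Kernels launched on different non-default streams may execute
// concurrently on the GPU, subject to resource availability.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void busyKernel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = data[i];
        // Arbitrary arithmetic to keep the SMs busy for a while.
        for (int k = 0; k < 1000; ++k) v = v * 1.000001f + 0.5f;
        data[i] = v;
    }
}

int main() {
    const int n = 1 << 20;
    const int numStreams = 4;
    cudaStream_t streams[numStreams];
    float *buf[numStreams];

    for (int s = 0; s < numStreams; ++s) {
        cudaStreamCreate(&streams[s]);
        cudaMalloc(&buf[s], n * sizeof(float));
    }

    // Launch one kernel per stream; these may overlap on the device.
    for (int s = 0; s < numStreams; ++s)
        busyKernel<<<(n + 255) / 256, 256, 0, streams[s]>>>(buf[s], n);

    cudaDeviceSynchronize();

    for (int s = 0; s < numStreams; ++s) {
        cudaStreamDestroy(streams[s]);
        cudaFree(buf[s]);
    }
    printf("done\n");
    return 0;
}
```

This only helps if the tasks can be moved into (or coordinated from) a single process; without MPS, kernels submitted by separate processes are time-sliced rather than run concurrently.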

Will MPS be supported on Jetson or Drive PX in the future?

Is there any reason that nvidia-smi and NVML are not supported on the Jetson platform?

Hi,

We don’t have a concrete schedule for NVML on ARM currently.
Sorry for the inconvenience.


nvidia-smi works only with GPUs attached over the PCI interface. The GPU on a Jetson is integrated directly into the memory controller, so software that queries the GPU over PCI cannot work there.

I am not familiar with NVML.

Hi,

Is there any schedule for NVML on ARM now? MPS is an important feature for accelerating deep learning inference.
Without it, users may fail to migrate their applications from x86 machines to ARM machines, e.g. Xavier and Pegasus.

Thanks