Virtualization for PCIe?

The GPU Operator virtualizes the PCIe-attached GPU and exposes it through a DaemonSet to the pods running in Kubernetes. Is there any comparable solution for the Jetson family?

I am looking for a container solution to move seamlessly from Isaac Sim to an edge device and vice versa.

This would make it possible to train, manage, and maintain fleets of edge devices at scale.


Sorry, would you mind sharing more information about your use case?
Do you want to connect a discrete GPU to Orin via PCIe?


I want to go from Isaac Sim to an edge robot at scale.
NVIDIA DeepOps uses PCIe virtualization to deploy Kubernetes.
If the same were possible on edge devices, I could easily deploy whatever application I developed in the Sim to a fleet of robots.
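For illustration, here is a rough sketch of the kind of fleet deployment being described: a Kubernetes DaemonSet that runs one GPU-enabled application pod on every edge node. This assumes a device plugin advertises the `nvidia.com/gpu` resource on Jetson nodes and that the NVIDIA container runtime is configured there; neither is confirmed in this thread, and the image name is hypothetical.

```yaml
# Hypothetical DaemonSet: one GPU-enabled pod per Jetson node.
# Assumes the NVIDIA device plugin and container runtime are installed
# on each node -- an assumption, not something this thread confirms.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: robot-app
spec:
  selector:
    matchLabels:
      app: robot-app
  template:
    metadata:
      labels:
        app: robot-app
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64        # target Jetson (Arm) nodes
      containers:
      - name: robot-app
        image: registry.example.com/robot-app:latest   # hypothetical image
        resources:
          limits:
            nvidia.com/gpu: 1            # GPU resource exposed by the device plugin
```

On dGPU clusters the GPU Operator installs the device plugin that makes this `nvidia.com/gpu` request work; the question is whether an equivalent exists for the iGPU on Jetson.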


DeepOps targets DGX servers, which are equipped with dGPUs connected through PCIe.
Jetson, however, uses an on-chip iGPU, so the mechanism is quite different.


I think I should have worded my post better.
I am aware of the hardware differences between dGPU and iGPU.
I am curious whether there are plans to bring both worlds closer by hiding some of these details behind abstractions.

My concern is that even though there is documentation for porting code from a discrete GPU (dGPU) attached to an x86 system to the Tegra® integrated GPU (iGPU), the process is not yet friendly at scale.

Most of my reasoning comes from seeing all this incredible, fast progress (I have even lost track, as names change so quickly): great MLOps solutions for DGX systems and cloud computing, such as AI Enterprise. It felt natural that there might be a solution for fleet deployment of models developed on dGPU that would overcome the challenges of iGPU.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.