What operating system are you using?
We use Ubuntu-based virtual machines to create a confidential distributed cloud. Multiple VMs are grouped into clusters interconnected by a virtual network. On these servers, clients can deploy their applications and databases, train machine learning models, and store data with confidence that their information is protected at all stages. At the same time, there is no direct interaction between the client and the server owner – everything is orchestrated automatically through blockchain.
We chose Ubuntu for its excellent and continuously updated support for various TEE technologies. Many years ago, when we first started working with the initial version of Intel SGX, we found Ubuntu to be the most convenient distribution. Ubuntu was among the first to include built-in Intel SGX support in the kernel by default, eliminating the need to install drivers and other components manually.
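As an illustration of what "built-in support" means in practice, here is a minimal sketch of how one might verify in-kernel SGX support on an Ubuntu host. This assumes a kernel of 5.11 or newer (where the SGX driver was mainlined) and SGX-capable, SGX-enabled hardware; it is a generic Linux check, not a Super Protocol-specific procedure.

```shell
# Check that the CPU advertises SGX (flag appears in /proc/cpuinfo
# only if SGX is supported and enabled in firmware):
grep -m1 -o 'sgx' /proc/cpuinfo && echo "CPU reports SGX"

# On kernels 5.11+ the in-kernel driver exposes these device nodes,
# so no manual out-of-tree driver installation is required:
ls /dev/sgx_enclave /dev/sgx_provision
```

If both device nodes are present, userspace enclave software can use the kernel driver directly.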
Moreover, Ubuntu has proven to be highly reliable under high-load conditions, making it a solid choice for scalable infrastructure. Its well-maintained LTS versions provide long-term stability, while its extensive package ecosystem and active community support enable rapid adaptation to evolving workloads. The ability to easily integrate with cloud-native tools, containerization solutions, and automation frameworks further enhances its scalability, allowing us to efficiently manage distributed confidential computing environments.
@Nukri.Super Do you support the Blackwell B200? We have a high volume of AI workloads processing media content and the H100 is proving too expensive. What’s the pricing for B200 confidential computing mode?
The Nvidia B200 is compatible with Super Protocol’s confidential computing.
“Advanced confidential computing capabilities protect AI models and customer data without compromising performance, with support for new native interface encryption protocols, which are critical for privacy-sensitive industries like healthcare and financial services.” — Source.
A comparison between the H100 and B200 published just four weeks ago highlighted Blackwell’s performance: the B200 delivers 25x more revenue at 20x lower cost per token than the NVIDIA H100.
The cost of confidential computing in Super is not significantly different from the standard (non-confidential) mode. We only charge a small additional percentage per transaction.
In terms of actual dollar costs, Super is connected to various providers and data centers, so prices may vary, and we do not influence their pricing.
You can try out select models for free, directly on our website, on servers connected to Super Protocol.
Moreover, you can integrate your own server into Super, making the use of confidential computing almost seamless for you. If needed, you can also share your unused capacity with other projects, turning it into an additional resource.
Do you have any performance charts for CC overhead when NVLink encryption is used in multi-GPU scenarios that require very high bandwidth, such as DeepSeek V3 671B?
Currently (in public mode), NVLink encryption is not yet supported in confidential computing environments. In high-bandwidth multi-GPU scenarios, alternative execution models, such as vLLM with layer-wise (pipeline) parallelization, can serve as an effective workaround. We have rigorously benchmarked both approaches in confidential settings; in standard configurations, NVLink delivers substantial performance advantages over the vLLM approach.
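For readers who want to try the vLLM workaround, the sketch below shows how pipeline (layer-wise) parallelism is typically enabled when serving a model with vLLM. The model name and GPU counts are illustrative assumptions, not a Super Protocol configuration; this requires a multi-GPU host with vLLM installed and is shown only to clarify the execution model.

```shell
# Illustrative: split the model layer-wise across 8 GPUs via pipeline
# parallelism, rather than tensor parallelism, so cross-GPU traffic
# occurs at layer boundaries instead of inside every matmul.
vllm serve deepseek-ai/DeepSeek-V3 \
  --tensor-parallel-size 1 \
  --pipeline-parallel-size 8 \
  --dtype bfloat16
```

Pipeline parallelism exchanges only activations between pipeline stages, which is why it places far less pressure on inter-GPU bandwidth than tensor parallelism does.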
We’re working on a medical AI project that securely records and transcribes doctor-patient conversations. What’s the best way to integrate with Super Protocol?