How to implement Real-Time Vision AI from Digital Twins to Cloud-Native Services?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only): 6.0
• TensorRT Version: 8.6.2.3-1+cuda12.2
• NVIDIA GPU Driver Version (valid for GPU only): NVIDIA-SMI 540.2.0
• Issue Type (questions, new requirements, bugs): Question
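
For reference, these values can be read directly on the device. Below is a minimal sketch (Python 3) for collecting them; it assumes the usual JetPack/DeepStream install locations such as /etc/nv_tegra_release and /opt/nvidia/deepstream/deepstream/version, plus the optional TensorRT Python bindings, all of which may differ on your image:

```python
import subprocess
from pathlib import Path

def run(cmd):
    """Run a command and return its stdout, or an empty string on failure."""
    try:
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        return ""

# JetPack / L4T release string (Jetson only).
print("L4T:", run(["cat", "/etc/nv_tegra_release"]))

# DeepStream version file shipped with the SDK (path assumed from a default install).
ds_version = Path("/opt/nvidia/deepstream/deepstream/version")
print("DeepStream:", ds_version.read_text().strip() if ds_version.exists() else "not found")

# TensorRT version via the Python bindings, if installed.
try:
    import tensorrt
    print("TensorRT:", tensorrt.__version__)
except ImportError:
    print("TensorRT: Python bindings not installed")

# Driver / GPU info (dGPU only).
print(run(["nvidia-smi"]))
```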

How do I implement this?

This is MTMC/RTLS, which runs on x86 dGPU. You can apply for access here: Metropolis Microservices 1.0 Early Access | NVIDIA Developer
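
Since MTMC/RTLS targets x86 dGPU while the board reported above is a Jetson, a quick host pre-check can save a failed deployment attempt. Here is a minimal sketch (Python 3, assuming the standard nvidia-smi CLI is on PATH; this check is not part of the Metropolis release itself):

```python
import platform
import shutil
import subprocess

def check_mtmc_host() -> bool:
    """Rough pre-check: MTMC/RTLS in Metropolis Microservices 1.0 targets x86 dGPU."""
    arch = platform.machine()
    if arch != "x86_64":
        # Jetson boards report aarch64, so this catches the Nano case above.
        print(f"Unsupported architecture for MTMC/RTLS: {arch} (need x86_64)")
        return False

    if shutil.which("nvidia-smi") is None:
        print("nvidia-smi not found; is the NVIDIA driver installed?")
        return False

    # Query GPU name and driver version through nvidia-smi.
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,driver_version", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print("nvidia-smi failed; no usable dGPU detected.")
        return False

    print("Detected GPU(s):")
    print(result.stdout.strip())
    return True

if __name__ == "__main__":
    raise SystemExit(0 if check_mtmc_host() else 1)
```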

Approved

Application status:

Your application has been approved.

After that, how do I access it?

Duplicate topic: How to run? Real-Time Vision AI From Digital Twins to Cloud-Native Deployment - Intelligent Video Analytics / Metropolis Microservices Developer Preview - NVIDIA Developer Forums

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

The issue is still unsolved. Thank you.

Let’s keep only one topic for discussion: https://forums.developer.nvidia.com/t/how-to-run-real-time-vision-ai-from-digital-twins-to-cloud-native-deployment/306634/16
Please correct me if this is not a duplicate.

Yes, thank you.

However, the issue has remained unsolved since last month.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.