Create a test platform to do body segmentation in the cloud using the Maxine SDK on a server with A100

Are you able to provide directions on how to create a test platform to do body segmentation in the cloud using the Maxine SDK on a server with RTX?

Currently the virtual background capability is trained for upper-body segmentation (for streaming and video-conferencing type use cases), so your results may vary when using the SDK for full-body segmentation. We are working on containers to provide an easier test setup.

Thank you for your response.
Could you expand on how exactly to start trying upper-body segmentation?
Which page of which documentation should I refer to for implementing it?

Whichever is more straightforward to implement: would the steps for implementing it on a local desktop with a GPU be easier than setting up a network server so that GPU-less clients could connect to it?

@jkrinitt could you point out the steps to implement a server-side Maxine setup, so that clients' streams get processed at the server and sent back to the clients? Is there a walkthrough for such an implementation? Exact steps?
I tried running on a Google Cloud Windows Server 2019 instance with an A100,
but it wouldn't hook up with the camera of a Windows education laptop through RDP.
What are the cloud choices?

Where can I find a full list of dependencies to run the SDK on Windows Server 2019? I ran into a whole bunch of unlisted-dependency issues, starting with a missing OpenCV installation, but now it fails like this:

 AigsEffectApp.exe --model_dir=..\..\bin\models --in_file=..\input\input_003054.jpg --show
Processing ..\input\input_003054.jpg mode 0 models ..\..\bin\models
Error creating effect "GreenScreen"
Error: Cannot find nvCVImage DLL or its dependencies
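For what it's worth, "Cannot find nvCVImage DLL or its dependencies" usually means the runtime DLLs placed by the SDK installer are not on the process's DLL search path when the sample is launched from the developer package. A minimal cmd sketch, under the assumption that the runtime went to the default install location (the path below is a guess; adjust it to wherever the installer actually put the DLLs on your machine):

```shell
:: Windows cmd sketch -- NV_VFX below is an assumed install path, not a
:: documented one; point it at the directory that contains nvCVImage and
:: the other runtime DLLs from the SDK installer.
set "NV_VFX=C:\Program Files\NVIDIA Corporation\NVIDIA Video Effects"

:: Prepend the runtime directory to the DLL search path for this session...
set "PATH=%NV_VFX%;%PATH%"

:: ...then run the sample from the developer package as before.
AigsEffectApp.exe --model_dir=..\..\bin\models --in_file=..\input\input_003054.jpg --show
```

Copying the DLLs next to AigsEffectApp.exe works too, but adjusting PATH avoids scattering copies around.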

The URLs at GitHub point to an absent page (404).

Note: To download the models and runtime dependencies required by the features, you need to run the SDK Installer.

Where do I get a redistributable example with an installer for an A100 GPU datacenter server in the cloud? How do I connect a remote webcam to it?

Hi @Andrey1984

To try AIGS, you can use the sample applications that come with the SDK (details are in the Programming Guide). You can also watch this session for a walkthrough of our API: [E32432]/1_695n1utf

Integration details are highly dependent on your existing architecture and the use case you have in mind.

To use datacenter cards, you would need the Linux SDK, which can be downloaded here: Maxine: AI Platform | NVIDIA

As for the remote webcam, you can forward your webcam feed over ssh using your preferred network utility.
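One way to do that forwarding, sketched with ffmpeg: capture the webcam on the laptop and push an encoded stream to the server. The device name, host, and port below are placeholders, and this assumes ffmpeg is installed on the laptop; anything on the server that reads through OpenCV's FFMPEG backend can then open the stream as an input URL.

```shell
# On the Windows laptop: list available capture devices first
# ("Integrated Camera" below is a placeholder name).
ffmpeg -list_devices true -f dshow -i dummy

# Capture the webcam and push a low-latency H.264 stream to the server
# as MPEG-TS over UDP (SERVER_IP and port 5000 are placeholders).
ffmpeg -f dshow -i video="Integrated Camera" \
       -c:v libx264 -preset ultrafast -tune zerolatency \
       -f mpegts udp://SERVER_IP:5000
```

Note that plain ssh port forwarding carries TCP only; to go strictly over ssh, forward a TCP port (e.g. `ssh -R 5000:localhost:5000 user@server`) and swap `udp://` for `tcp://` on both ends.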

Thank you for your response!
I have a Google Cloud A100 server running Windows Server 2019 Datacenter.
My goals are just to test whether it works with video-file input,
and whether it works with remotely mounted webcam input.
Which installation package should I download exactly?
Once I figure it out on the Windows A100 2019 server, I would try the Linux implementation.

How do I run AIGS on Windows so that it uses its own OpenCV DLLs, etc., instead of asking the user to locate missing files?
It won't work:
even after installing the Ampere package to Program Files, it doesn't contain the examples,
but when running from the developer package with the examples, nothing is shown.
Neither package works for a simple basic example: one of them has the libraries but not the example, and the other has the example but is missing libraries, and even when I try to manage the paths manually, nothing is shown.

C:\Users\admin\Downloads\samples\AigsEffectApp>AigsEffectApp.exe --model_dir=..\..\bin\models --in_file=..\input\input_003054.jpg --show
Processing ..\input\input_003054.jpg mode 0 models ..\..\bin\models


If I omit copying the DLL files manually, it fails to find opencv_world346.dll,
but even when I copy them manually, nothing is shown.
How do I get all of this addressed, just to get one GreenScreen sample working, first on an image?
I installed this,
but I also tried to execute the sample from GitHub - NVIDIA/MAXINE-VFX-SDK: NVIDIA Video Effects SDK - API headers and sample applications.
The latter should have hooked into and used the libraries from the former, right? But it did not pick them up automatically.

The Linux implementation of the SDK kind of works.
Which source file exactly should I edit to add RTSP support?
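(One thing worth trying before editing any source: the sample apps read their input through OpenCV's cv::VideoCapture, which already accepts rtsp:// URLs when OpenCV is built with the FFMPEG backend. Whether --in_file is handed straight to VideoCapture is an assumption about the sample's code, and the camera address below is a placeholder.)

```shell
# Sketch: pass an RTSP URL as the input "file" -- with an FFMPEG-enabled
# OpenCV build, cv::VideoCapture("rtsp://...") opens network streams.
# CAMERA_HOST and the stream path are placeholders.
./AigsEffectApp --model_dir=/usr/local/VideoFX/lib/models \
                --in_file=rtsp://CAMERA_HOST:8554/stream \
                --show
```

If the app rejects the URL, the place to change would be wherever the sample constructs its cv::VideoCapture from the --in_file argument, in the AigsEffectApp source under the samples directory.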

Processing /home/sa_105173209221241874565/mysamples/samples/input/file_example_AVI_640_800kB.avi mode 0 models /usr/local/VideoFX/lib/models
OpenCV: FFMPEG: tag 0x34363248/'H264' is not supported with codec id 28 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x31637661/'avc1'
Processing time averaged over 901 runs is 1.57553 ms.