I’m currently working on a hobby project where I’ve trained a StyleGAN on abstract art, with an output size of 1024x1024.
I’m now in the process of putting up a screen to display the generated images. My idea is to connect an edge GPU to the screen that will both run a GUI to display the images and run the inference. I’m then thinking of connecting a simple button to the GPIOs that triggers a new image.
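The button part can be kept very small. Here's a minimal sketch, assuming the Jetson.GPIO library that ships with JetPack; the pin number, wiring, and the `generate_new_image` hook are my assumptions, not anything from your setup:

```python
import time

DEBOUNCE_S = 0.3  # ignore presses closer together than this


def debounced(last_press, now, min_interval=DEBOUNCE_S):
    """Return True if this press is far enough from the last one to count."""
    return (now - last_press) >= min_interval


def generate_new_image():
    # Stub: in practice this would call into your inference code.
    print("triggering inference for a new image")


def main():
    # Jetson.GPIO comes preinstalled with JetPack; pin 18 is an arbitrary choice.
    # Wire the button between the pin and GND, with an external pull-up resistor
    # (Jetson.GPIO cannot configure internal pulls in software).
    import Jetson.GPIO as GPIO

    BUTTON_PIN = 18
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(BUTTON_PIN, GPIO.IN)
    last = 0.0
    try:
        while True:
            # Block until the button pulls the pin low.
            GPIO.wait_for_edge(BUTTON_PIN, GPIO.FALLING)
            now = time.monotonic()
            if debounced(last, now):
                last = now
                generate_new_image()
    finally:
        GPIO.cleanup()
```

The debounce guard matters here because a single physical press often produces several electrical edges, and you don't want one tap to queue up multiple inference runs.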
This is the implementation I’ve used:
GitHub - taki0112/StyleGAN-Tensorflow: Simple & Intuitive Tensorflow implementation of StyleGAN (CVPR 2019 Oral)
Here are the sizes of the checkpoint files:
Checkpoint: 2.94 GB
Index file: 75 kB
Meta file: 70.5 MB
Do you guys think an NVIDIA Jetson Nano 4GB is enough to load the model and run inference, and at the same time run a GUI and display the image on the screen?
Or would you rather go for a Jetson Xavier NX?
Or would you rather set up the Jetson Nano 4GB or Xavier NX as a “server” that only generates the images, then run a RPi or something else behind the screen that fetches the images from the Jetson?
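For the third option, the "server" split can be done with plain HTTP: the Jetson writes out the latest PNG and serves it, and the Pi behind the screen just fetches it. A minimal sketch using only the Python standard library; the port, the `/latest.png` path, and the placeholder image bytes are all assumptions:

```python
# Sketch of the "Jetson as image server" idea: the Jetson serves the most
# recently generated image over HTTP; the display machine polls for it.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder; in practice, read the bytes of the last file StyleGAN wrote.
LATEST_IMAGE = b"placeholder-image-bytes"


class ImageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/latest.png":
            self.send_response(200)
            self.send_header("Content-Type", "image/png")
            self.end_headers()
            self.wfile.write(LATEST_IMAGE)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Silence per-request logging on the Jetson.
        pass


def serve(port=8000):
    """Start the image server in a background thread and return it."""
    server = HTTPServer(("127.0.0.1", port), ImageHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


# On the Raspberry Pi side, fetching is one call:
#   from urllib.request import urlopen
#   data = urlopen("http://<jetson-ip>:8000/latest.png").read()
```

You could also have the Pi hit a second endpoint to trigger generation when the button lives on the Pi instead of the Jetson; the nice property of this split is that the GUI machine never needs to hold the model in memory at all.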
We don’t have a benchmark result for StyleGAN.
But if you want to reach a better framerate, we'd recommend the Xavier NX for the extra headroom.
Here are some benchmark scores for your reference:
For example, ResNet-50 can reach 47 fps on the Nano and 1,100 fps on the Xavier NX.
Hi and thanks for the answer!
Framerate is not that important in this project, as the artwork won’t change that often. What I’m wondering is whether you think this quite large model will fit in 4 GB of memory?
The model is ~3 GB and the total memory on the Jetson Nano is 4 GB, and I guess there’s some overhead from the OS etc.
If the model itself is already ~3 GB, it’s quite possible that the required memory will exceed 4 GB, since inference also needs extra memory for workspace and intermediate data, and, as you mentioned, some for the OS.
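A back-of-envelope budget makes the concern concrete. This is a rough sketch, not a measurement: the OS/desktop figure is an assumption, and the checkpoint's size on disk is only a lower bound on its footprint in memory (note the Nano's 4 GB is shared between CPU and GPU):

```python
# Rough memory budget for a Jetson Nano 4GB; all numbers are assumptions.
GiB = 1024 ** 3

total          = 4.0  * GiB   # Nano RAM, shared between CPU and GPU
os_and_desktop = 1.2  * GiB   # Ubuntu + GUI typically take ~1-1.5 GB
checkpoint     = 2.94 * GiB   # the checkpoint size quoted above (on-disk lower bound)

headroom = total - os_and_desktop - checkpoint
print(f"headroom before activations/workspace: {headroom / GiB:.2f} GiB")
```

The headroom comes out negative before accounting for any inference workspace, which is why the Xavier NX (8 GB) or the server/client split looks like the safer bet.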