I followed the instructions here: GitHub - chitoku/stable-diffusion, but it mentions that only the GPU is used for inference. Is there a way to run Stable Diffusion on the more power-efficient DLA?
Hi,
We just checked the repo.
It uses PyTorch for inference rather than TensorRT.
Currently, you will need to use TensorRT to place a task on the DLA.
Thanks.
I usually convert the PyTorch model to ONNX first, then use trtexec to convert it to a TensorRT engine and run inference… Are you suggesting doing the same thing here for Stable Diffusion?
Hi,
Yes. PyTorch doesn’t support deploying a task on the DLA.
Please use the TensorRT API instead.
Thanks.
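For reference, a minimal sketch of that workflow targeting the DLA with trtexec. The model/engine file names here are placeholders, and whether a given Stable Diffusion component actually fits on the DLA depends on its layers (unsupported layers fall back to the GPU when fallback is allowed):

```shell
# Build a TensorRT engine from an ONNX model on DLA core 0.
# DLA requires a reduced-precision mode such as FP16;
# --allowGPUFallback lets layers the DLA cannot run execute on the GPU.
trtexec --onnx=model.onnx \
        --saveEngine=model_dla.engine \
        --useDLACore=0 \
        --fp16 \
        --allowGPUFallback

# Benchmark the resulting engine.
trtexec --loadEngine=model_dla.engine --useDLACore=0 --fp16
```

trtexec prints which layers were placed on the DLA versus the GPU during the build, which is a quick way to see how much of the network the DLA can actually accelerate.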