Maxine denoiser model in Triton server

Hello,

I am currently using the Maxine SDK because we see great denoising performance from it.

My goal is to serve the denoiser model directly in Triton Inference Server, rather than having to use the effect_demo executable, which loads and runs the model on every inference.
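
To make concrete what I have in mind, below is a rough sketch of how I would expect to drop the model into a standard Triton model repository, assuming a TensorRT engine (.plan) could be exported. The model name, tensor names, and dimensions here are placeholders on my side, not anything taken from the SDK:

```
models/
└── maxine_denoiser/            # placeholder model name
    ├── config.pbtxt
    └── 1/
        └── model.plan          # the TensorRT engine I am asking about

# config.pbtxt (placeholder values)
name: "maxine_denoiser"
platform: "tensorrt_plan"
max_batch_size: 1
input [
  {
    name: "INPUT__0"            # placeholder tensor name
    data_type: TYPE_FP32
    dims: [ 480 ]               # placeholder: one frame of audio samples
  }
]
output [
  {
    name: "OUTPUT__0"           # placeholder tensor name
    data_type: TYPE_FP32
    dims: [ 480 ]
  }
]
```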

Do you know how I can achieve this? Could I get access to the TensorRT engine (.plan) or something similar?
Any other ideas?
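
For reference, this is the kind of client-side call I am hoping to end up with once the model is served, using the tritonclient Python package. Again, the model name, tensor names, and input shape are only placeholders:

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a locally running Triton HTTP endpoint (default port 8000).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Placeholder: one batch of mono float32 audio samples.
audio = np.zeros((1, 480), dtype=np.float32)

# Tensor names ("INPUT__0" / "OUTPUT__0") are placeholders; the real names
# would come from the model's config.pbtxt.
inp = httpclient.InferInput("INPUT__0", list(audio.shape), "FP32")
inp.set_data_from_numpy(audio)
out = httpclient.InferRequestedOutput("OUTPUT__0")

result = client.infer(model_name="maxine_denoiser", inputs=[inp], outputs=[out])
denoised = result.as_numpy("OUTPUT__0")
print(denoised.shape)
```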

Thanks for the help

Hi there jsoto1! Engineering has taken note; I will post an update if there is progress on Triton server support.