Triton and X Server on same GPU

Description

I have a question about compatibility between Triton Inference Server and X Server on Linux.
I want to share a single GPU between both: running inference with tritonserver, and driving the display monitor for the UI that will invoke the DL model for inference.

I have a working setup, but I am looking out for corner cases. Will there be any issues with X Server and tritonserver running on the same GPU?
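In case it helps anyone verifying a similar setup: below is a minimal sketch (not from the original post) using the nvidia-ml-py (pynvml) bindings to confirm that both the X server (a graphics context) and tritonserver (a compute context) are active on the same GPU, and to check the free-memory headroom left for inference. The device index 0 and the interpretation of process names are assumptions; adjust for your machine.

```python
# Minimal sketch; assumes nvidia-ml-py is installed (`pip install nvidia-ml-py`)
# and that GPU 0 is the device shared between the display and Triton.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumption: GPU 0 drives the display

# Overall memory headroom: the X server and desktop compositor consume some
# VRAM, so less is available to Triton than on a headless GPU.
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"GPU memory: {mem.used / 2**20:.0f} MiB used / {mem.total / 2**20:.0f} MiB total")

# The X server appears as a graphics context; tritonserver as a compute context.
for label, procs in (
    ("graphics", pynvml.nvmlDeviceGetGraphicsRunningProcesses(handle)),
    ("compute", pynvml.nvmlDeviceGetComputeRunningProcesses(handle)),
):
    for p in procs:
        # usedGpuMemory can be None when per-process accounting is unavailable.
        used = p.usedGpuMemory / 2**20 if p.usedGpuMemory else 0
        print(f"{label}: pid={p.pid} mem={used:.0f} MiB")

pynvml.nvmlShutdown()
```

If the headroom gets tight, one lever is Triton's `--cuda-memory-pool-byte-size` option, which bounds the CUDA memory pool Triton itself allocates per GPU (note that framework backends allocate their own memory on top of that pool).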

Environment

TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version: 11.4
CUDNN Version:
Operating System + Version: Linux
Python Version (if applicable):
TensorFlow Version (if applicable): TensorFlow 2
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Hi @darshanp20 ,
We would suggest reaching out to the Triton team at Issues · triton-inference-server/server · GitHub

Thanks