Can I deploy TTS models with the Triton Inference Server C++ API and use the Python API for the Triton client?

I built a Triton server for TTS models (Tacotron2, WaveGlow) using the C++ APIs (DeepLearningExamples/PyTorch/SpeechSynthesis/Tacotron2/trtis_cpp at master · NVIDIA/DeepLearningExamples · GitHub). On the client side, I tried the Triton client C++ APIs, but I ran into some problems that I haven't been able to solve. So I want to ask: can the Python APIs be used for the Triton client when my Triton server is built with the C++ APIs?
client/src/python at main · triton-inference-server/client · GitHub
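For reference, here is a minimal sketch of the kind of Python client I have in mind, using the `tritonclient` HTTP API. The model name (`tacotron2`), tensor names (`INPUT`, `OUTPUT`), and the character-code encoding are placeholders, not the actual Tacotron2 model configuration — they would need to match my server's model repository:

```python
# Sketch of a Python Triton client talking to a Triton server built with the
# C++ APIs. The client language should be independent of the server build,
# since both sides speak the same HTTP/gRPC wire protocol.
# NOTE: "tacotron2", "INPUT", and "OUTPUT" are placeholder names, not the
# real model config.
import numpy as np


def text_to_tensor(text: str) -> np.ndarray:
    """Encode input text as a batch of int32 character codes, shape [1, len]."""
    return np.array([[ord(c) for c in text]], dtype=np.int32)


def synthesize(text: str, url: str = "localhost:8000") -> np.ndarray:
    # Imported here so text_to_tensor stays usable without tritonclient installed.
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url=url)
    data = text_to_tensor(text)

    # Declare the input tensor (name/shape/dtype must match the model config),
    # then attach the numpy data.
    infer_input = httpclient.InferInput("INPUT", list(data.shape), "INT32")
    infer_input.set_data_from_numpy(data)

    # Run inference and pull the output tensor back as a numpy array.
    result = client.infer(model_name="tacotron2", inputs=[infer_input])
    return result.as_numpy("OUTPUT")
```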