Deploying my own TensorFlow model with Clara Deploy

I have an existing TensorFlow 3D convolutional neural network classifier that takes DICOM images as input.

Will I be able to deploy the model using Clara? If so, how would I go about doing this?
Where would be a good place to learn and start?

Welcome to the forums, and thanks for your interest in Clara Deploy!

We have a series on Medium that walks through the end-to-end process of developing a model with Clara Train and deploying it with Clara Deploy, including integration with a PACS.

This series focuses on going from Train to Deploy, but a similar process can be followed with an existing model. One note: the series uses an older base inference app that has since been deprecated in favor of a v2 base inference app. You can find the v2 app here:
https://ngc.nvidia.com/containers/ea-nvidia-clara:clara:app_base_inference_v2

The Overview section of the base inference app outlines the overall structure and describes the integration with the Triton Inference Server, so it is also a good resource for getting started with a custom inference app on Deploy.
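Since the base inference app delegates inference to Triton, much of the model-side work for an existing TensorFlow model is packaging it as a Triton model repository: a directory per model containing a `config.pbtxt` and one or more numbered version subdirectories holding the exported SavedModel. A rough sketch is below; note that the model name, tensor names, and shapes here are placeholders for illustration, and yours will come from your own SavedModel's signature:

```
models/
└── my_3d_classifier/             # hypothetical model name
    ├── config.pbtxt
    └── 1/                        # version directory
        └── model.savedmodel/     # your exported TensorFlow SavedModel
```

A matching `config.pbtxt` might look like:

```
name: "my_3d_classifier"
platform: "tensorflow_savedmodel"
max_batch_size: 1
input [
  {
    name: "input_volume"          # must match the SavedModel's input tensor name
    data_type: TYPE_FP32
    dims: [ 64, 64, 64, 1 ]       # 3D volume + channel; adjust to your model
  }
]
output [
  {
    name: "probabilities"         # must match the SavedModel's output tensor name
    data_type: TYPE_FP32
    dims: [ 2 ]                   # e.g. a two-class classifier
  }
]
```

You can inspect your SavedModel's actual input/output tensor names and shapes with TensorFlow's `saved_model_cli show --dir <path> --all` before filling in the config.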

Thanks,
Kris