Tesla V100 and test out inference - please give me input

I want to attach a camera to a Tesla V100,
export a trained model from TensorFlow (Keras) to the TensorRT inference engine,
and run real-time prediction.
From the given documentation I am not sure how to approach this case.
I have downloaded the DeepStream SDK NGC container, and I know there are samples like
deepstream-test1.
I ran something like the following:
"
./deepstream-test1-app /root/deepstream_sdk_v4.0.1_x86_64/samples/streams/sample_1080p_h264.mp4
Running…"

I don't understand the flow. Where can I find more information on this?
Please guide me further; I need more information to run this.
Can we run the Jetson samples on a Tesla V100?
I need a sample that shows me what to do end to end. We are not looking into writing code at this time;
we want to know how this will work on a Tesla V100 or T4, and we are mostly interested in performance.

You can get the documentation from the NVIDIA Metropolis Documentation page;
the "DeepStream Plugin Manual" there has more details.

DeepStream 4.0 has been unified across the Jetson and Tesla platforms.
Please download the Tesla package to your V100 platform: [url]https://developer.nvidia.com/deepstream-download[/url]
You can run deepstream-test1-app on your V100.
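Roughly, something like this from inside the container (the paths assume the 4.0.1 x86_64 layout shown in your command; adjust to your install, and note that deepstream-test1 expects an H.264 elementary stream such as sample_720p.h264 rather than an .mp4 container):

  cd /root/deepstream_sdk_v4.0.1_x86_64/sources/apps/sample_apps/deepstream-test1
  make
  ./deepstream-test1-app ../../../../samples/streams/sample_720p.h264

If make complains, you may need to pass the CUDA version explicitly (for example make CUDA_VER=10.1); check the sample's README.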

DeepStream 4.0 requires a C/C++ background.
We will provide Python bindings in a later release.
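If you only want to see the flow without writing code yet, you can sketch the same pipeline that deepstream-test1 builds with a gst-launch-1.0 command. This is only an illustration (run from the SDK root so the relative paths resolve; the property values and config path below come from the test1 sample layout and may need adjusting):

  gst-launch-1.0 filesrc location=samples/streams/sample_720p.h264 ! \
      h264parse ! nvv4l2decoder ! mux.sink_0 \
      nvstreammux name=mux batch-size=1 width=1280 height=720 ! \
      nvinfer config-file-path=sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt ! \
      nvvideoconvert ! nvdsosd ! nveglglessink

The flow is: decode the stream, batch it with nvstreammux, run TensorRT inference through nvinfer, draw the detections with nvdsosd, and render. The "DeepStream Plugin Manual" describes each of these plugins.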

Thank you Chris, let me check these details further. Does this mean that a model trained using Python will not work here?

Only TensorRT-compatible models will work.
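For a Keras/TensorFlow model, that generally means exporting the trained network to a format TensorRT can build an engine from (Caffe, UFF or ONNX in DeepStream 4.0) and pointing the nvinfer config file at it. A rough sketch only; every file name, blob name and dimension below is a placeholder, and the exact converter flags and config keys should be checked against the TensorRT and Plugin Manual docs:

  # freeze the Keras graph first, then convert it to UFF
  convert-to-uff frozen_model.pb -o model.uff

  # relevant part of an nvinfer config (dstest1_pgie_config.txt style)
  [property]
  uff-file=model.uff
  uff-input-blob-name=input_1
  uff-input-dims=3;224;224;0
  output-blob-names=predictions/Softmax
  # 0=FP32, 1=INT8, 2=FP16
  network-mode=2
  batch-size=1

Custom detector outputs usually also need a bounding-box parser function, which is where the C/C++ part comes in.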

Thank you