How to host a service with DeepStream

Hi, DeepStream is a pretty good inference framework for stream processing on edge devices.
But what about deploying it as a cloud service?
For example, I have a GPU server that can process 10 streams with DeepStream. How can I deploy it as a service that accepts task URIs from clients and writes results to Redis, MySQL, or Elasticsearch?
To clarify my questions:

  1. How to wrap deepstream-app into a (web) service?
  2. How to schedule and manage concurrent requests with limited capacity?

I know this may not be a question about the DeepStream SDK itself, but it is a use case that needs instruction and a sample.

• Hardware Platform (Jetson / GPU) T4
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version w/ official DeepStream docker
• NVIDIA GPU Driver Version (valid for GPU only) 450.80.02
• Issue Type( questions, new requirements, bugs) question

DeepStream supports RTSP/RTP; can the clients send data to DeepStream running on the server via RTSP/RTP?

DeepStream supports receiving multiple RTSP/RTP streams.

Thanks for your reply. I know that DeepStream can process multiple streams, but the sample is for a fixed number of streams.
What if the number of incoming streams fluctuates? Say the capacity of the server is 10 streams, but the streams to process come from a pool that may range from 0 to 20. Streams being processed may terminate on their own or be replaced by upcoming ones.

To be specific, my use case is clients sending URIs to the server, and the server should keep adding new URIs until it reaches its maximum capacity.
So the DeepStream server should manage the streams it is processing and negotiate status with the clients, rejecting upcoming URIs once it is full.
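The bookkeeping part of this (accept until full, free slots on termination or replacement) can be kept separate from the pipeline itself. A minimal sketch, assuming a fixed capacity of 10; the class name and methods are my own invention, and the real `try_add`/`remove` would additionally attach or detach the source in the DeepStream pipeline:

```python
import threading

class StreamPool:
    """Tracks active stream URIs against a fixed capacity.
    Sketch only: the actual add/remove would also drive the pipeline."""

    def __init__(self, capacity=10):
        self.capacity = capacity
        self._active = set()
        self._lock = threading.Lock()

    def try_add(self, uri):
        """Accept the URI if a slot is free; False signals the client
        that the server is full (or the URI is already running)."""
        with self._lock:
            if uri in self._active or len(self._active) >= self.capacity:
                return False
            self._active.add(uri)
            return True

    def remove(self, uri):
        """Free the slot when a stream terminates (EOS) or is replaced."""
        with self._lock:
            self._active.discard(uri)

    def free_slots(self):
        with self._lock:
            return self.capacity - len(self._active)
```

With a pool of [0, 20] candidate URIs and capacity 10, the service would call `try_add` for each incoming URI and `remove` from the EOS/error callback, so the active set naturally fluctuates without restarting the process.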

DS supports adding and removing sources at runtime - deepstream_reference_apps/runtime_source_add_delete at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub

Thanks mchi, I hope this sample can help save time when switching URIs (currently I simply kill and relaunch the process).