Is it possible to use DeepStream with my own AI model (specifically, a non-YOLO, non-object-detection model), with OTA updates from my own cloud?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson NX

• DeepStream Version
5.1

• JetPack Version (valid for Jetson only)
4.5-b129

• TensorRT Version
7.1.3

• Issue Type (questions, new requirements, bugs)
As the title says: is it possible to use DeepStream with my own AI model, especially a non-YOLO, non-object-detection model, with OTA updates from my cloud?
For example, I have my own AI model; it is not an object-detection-style model, and it is stored on my own cloud.

  1. Can I use my own model?
  2. Is it possible to update my model from my own cloud?
  3. How can I update a model from NVIDIA's cloud? Where are NVIDIA's pre-trained models stored?

Hi a0975003518,

Sorry for the late response. Is this still an issue you need support with?

Thanks

Hi,

Please check the following suggestions:

  1. DeepStream can support custom models.
    But please first make sure all the layers in your model are supported by TensorRT:
    Support Matrix :: NVIDIA Deep Learning TensorRT Documentation

  2. Yes, this should be possible. Please check our OTA feature below:
    DeepStream Reference Application - deepstream-test5 app — DeepStream 6.1.1 Release documentation

  3. Some DeepStream models can be found in the package directly:
    /opt/nvidia/deepstream/deepstream-5.1/samples/models/
    You can find more on our NGC cloud: AI Models - Computer Vision, Conversational AI, and More | NVIDIA NGC

Thanks.

Hi @kayccc, yes. I still haven't received any response on this issue.

Hi @AastaLLL ,

Thanks for your response.
What I want to do is use my own cloud (not NGC) to complete the OTA process. Is that possible, and how can I do it? (Not updating from a local directory on the same machine, such as the NX, but from a remote cloud.)
The deepstream-test5-app documentation says very little about this. Could I have more information on this issue?

Hi,

Here are the details for the OTA update:

Steps to test the OTA functionality
   1) Run deepstream-test5-app with -o <ota_override_file> option
   2) While DS application is running, update the <ota_override_file> with new model details
      and save it
   3) The file content change gets detected by deepstream-test5-app, which then starts
      the model-update process
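As an illustration of step 2, the override file reuses the group/key syntax of the nvinfer configuration files. An update might look like the following; the path here is a placeholder, and the exact set of keys honored by OTA is listed in the deepstream-test5 README:

```
[primary-gie]
model-engine-file=/home/user/models/new_model_b1_fp16.engine
```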

You can find this information in the README of deepstream-test5.
Thanks.

Hi @AastaLLL,

Thanks for the information.

But what I want to do is an OTA update from a remote cloud (not NGC), not from a local directory on the same machine (the NX).

Are there any documents about OTA updates from a remote cloud (not NGC)?

Edward

Hi,

You could implement a simple app that updates the local model automatically once the copy in your cloud is refreshed.
We don't provide such a tool, since the API differs depending on which cloud is used.
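As a sketch of such an app (the file paths, version scheme, and override-file keys below are assumptions for illustration, not a DeepStream API): compare a version marker published by your cloud with the one recorded locally, download the new model when they differ, then rewrite the OTA override file so a running deepstream-test5-app picks up the change.

```python
# Sketch: sync a model from a remote cloud and trigger the
# deepstream-test5 OTA mechanism by rewriting the override file.
# All paths and the version scheme are hypothetical examples.
import os


def read_version(path):
    """Return the locally recorded model version, or None if absent."""
    try:
        with open(path) as f:
            return f.read().strip()
    except FileNotFoundError:
        return None


def write_override_file(path, engine_path):
    """Rewrite the override file watched by deepstream-test5-app."""
    with open(path, "w") as f:
        f.write("[primary-gie]\n")
        f.write(f"model-engine-file={engine_path}\n")


def sync_model(remote_version, download, model_dir, version_file, override_file):
    """Download the model if remote_version differs from the local record.

    `download` is a callable(dest_path) supplied by the caller, e.g. a
    wrapper around your cloud's HTTP download API.
    Returns True if an update was performed, False otherwise.
    """
    if remote_version == read_version(version_file):
        return False
    dest = os.path.join(model_dir, f"model_{remote_version}.engine")
    download(dest)
    write_override_file(override_file, dest)
    with open(version_file, "w") as f:
        f.write(remote_version)
    return True
```

In a real deployment you would call `sync_model` in a loop or on a timer, with `remote_version` fetched from your cloud, for example an HTTP ETag or a version string in a small JSON manifest.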

Thanks.

Hi @AastaLLL ,

Would it be possible to use Triton Server, which loads models from a model repository, and point that model repository at my remote cloud?

Hi,

Could you share more information about your use case with us?
Do you want to use the NX for inference (with either TensorRT or Triton Server)?

If yes, you will need to download the model from the cloud to the NX for deployment.
So you can add a simple script to make sure the two copies of the model (on the NX and in the cloud) stay synchronized.
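Note that if you go the Triton route, Triton serves models from a local model repository with a fixed directory layout, so a cloud sync script would need to reproduce this layout on the NX (the model name below is a placeholder):

```
model_repository/
└── my_model/
    ├── config.pbtxt
    └── 1/
        └── model.plan    # or model.onnx, model.graphdef, ...
```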

Thanks.