Tokkio ds-visionai customization

Hi,

We are working on developing a prototype based on the Tokkio workflow. We have already managed to customize a number of the deployed services, but we are hitting a wall when trying to change the behavior of the ds-visionai component.

We extracted the graph, manifest, and libraries from the ds-visionai container in order to understand how it is built, and tried to recreate the graph in Graph Composer using the deepstream-test5, deepstream-runtime-src-add-del, and some of the TAO sample graphs.
As a side note, we experienced a lot of issues/instabilities with DeepStream 6.3, but managed to get something working using DeepStream 7.0 instead (nvcr.io/nvidia/deepstream:7.0-gc-triton-devel) and the tao5.3_ds7.0ga branch of deepstream_tao_apps.

We managed to get a container built from the composer which can receive a stream and inject additional analytics into the output messages, but we are struggling to figure out how to handle the message payload when adding RTSP streams to the graph. Right now, when we deploy the graph by modifying the container used by the ds-visionai service in the Tokkio Helm chart, it reaches the running state properly but crashes as soon as someone accesses the UI: the RTSP stream URL is not properly interpreted by the graph, a stream with an empty URL is added to the pipeline, and the service goes into a crash loop.

In the Tokkio workflow prebuilt services, the event available in the Redis time series is shaped like this, which matches the latest update to the Tokkio workflow documentation:

{
  "alert_type": "camera_status_change",
  "created_at": "2024-12-09T13:45:44Z",
  "event": {
    "camera_id": ...,
    "camera_name": ...,
    "camera_url": ...,
    "change": "camera_streaming",
    "metadata": { "codec": "h264", "framerate": 30, "resolution": "1280x720" }
  },
  "source": "vst"
}
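As a side note, to guard against the empty-URL crash described above, we validate events before they reach the graph. This is a minimal sketch assuming only the field names visible in the payload above; the helper name and error handling are our own, not taken from the ds-visionai sources:

```python
import json

# Field names taken from the documented camera_status_change payload.
REQUIRED_EVENT_KEYS = {"camera_id", "camera_name", "camera_url", "change"}

def validate_camera_event(payload: dict) -> dict:
    """Return the inner event dict, raising if a required field is missing or empty.

    Hypothetical helper: the schema check mirrors the payload shown above,
    nothing more.
    """
    if payload.get("alert_type") != "camera_status_change":
        raise ValueError(f"unexpected alert_type: {payload.get('alert_type')!r}")
    event = payload.get("event") or {}
    missing = [k for k in REQUIRED_EVENT_KEYS if not event.get(k)]
    if missing:
        raise ValueError(f"event is missing fields: {missing}")
    return event

example = {
    "alert_type": "camera_status_change",
    "created_at": "2024-12-09T13:45:44Z",
    "event": {
        "camera_id": "0",
        "camera_name": "webcam_123",
        "camera_url": "rtsp://example/stream",
        "change": "camera_streaming",
    },
    "source": "vst",
}
print(json.dumps(validate_camera_event(example)))
```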

When building new graphs from the samples shipped with DeepStream, the payload used is a bit different and contains less information:

{
  "sensor": {
    "id": ...,
    "uri": ...
  }
}
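For our experiments, we bridge the two shapes with a small translation helper. This is a sketch assuming only the field names shown in the two payloads above (camera_id → id, camera_url → uri); it is our own guesswork, not code from the ds-visionai container:

```python
def vst_event_to_sensor(payload: dict) -> dict:
    """Map a VST camera_status_change payload onto the DeepStream-sample
    'sensor' shape. Only the field names visible in the payloads above are
    assumed; this is not taken from the ds-visionai sources."""
    event = payload["event"]
    return {
        "sensor": {
            "id": event["camera_id"],
            "uri": event["camera_url"],
        }
    }

vst = {
    "alert_type": "camera_status_change",
    "event": {
        "camera_id": "0",
        "camera_name": "webcam_123",
        "camera_url": "rtsp://example/stream",
        "change": "camera_streaming",
    },
    "source": "vst",
}
print(vst_event_to_sensor(vst))
# → {'sensor': {'id': '0', 'uri': 'rtsp://example/stream'}}
```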

By grepping the containers' contents, we found that the only reference to a camera_url string is in one of the .so files used by the graph, which led us to believe that a custom version of one of the components may be used in the ds-visionai graphs rather than the base blocks provided in the DeepStream samples.

I do not have extensive experience with Graph Composer, but from my understanding the NvDsMultiSrcConnection component receives an action from the HttpServer component and extracts the stream URL to add it to the pipeline.

Here are my questions:

  • Is there a way to access the source graph and container_builder files used for building the ds-visionai container? We are using Tokkio 24.08 right now, which contains one graph for the bulk of the vision processing and one graph used as a controller (we dug into 24.10, but the structure and models used vastly differ).
  • If the sources to ds-visionai are not available, where can we find guidance on how to set up the payload schema between the redis-timeseries and the vision graph?

Thank you.

Hi Thomas,

Thank you for your interest in ds-visionAI.
Could you please elaborate a little more on what you want to achieve?
In 24.10, ds-visionAI supports a bodyPose model, and eMDX does head pose estimation using the metadata provided by VisionAI. You'll find more details here and here.
Would this help cover what you want to achieve out of the box?

You can find the source of the GXF app in the ds-visionai container under /workspace/ds-as-a-service-facedetect

The schema of the event sent from visionAI to Redis is available here

Thank you,
Guilhem

Hi,

Thank you for your answer.
We are trying to include more models in the graph, starting with the emotion detection model provided with deepstream_tao_apps. The 24.08 version is fine for us, since we are mainly focusing on the face and not the body yet.
When running the original Docker container locally, we can add RTSP stream sources by sending a message like this one:

{
  "alert_type": "camera_status_change",
  "created_at": "2024-12-03T03:43:31Z",
  "event": {
    "camera_id": "0",
    "camera_name": "webcam_123",
    "camera_url": "file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4",
    "change": "camera_streaming"
  },
  "source": "vst"
}

But as soon as we take the graphs out of the container and run them with Graph Composer without any modifications, the same message raises an error saying that no URI was provided for the stream. However, sending this message works fine:

{
  "sensor": {
    "id": "3",
    "uri": "file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4"
  }
}
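Since the two messages only differ in where the stream ID and URL live, our current workaround is a parser that accepts both shapes and fails loudly instead of letting an empty URI through. Everything below is our own sketch, not the actual component logic:

```python
def extract_stream(payload: dict) -> tuple[str, str]:
    """Return (stream_id, uri) from either message shape.

    Raises ValueError instead of silently passing an empty URI along,
    which is what sends the pipeline into a crash loop.
    """
    if "sensor" in payload:  # DeepStream-sample shape
        sensor = payload["sensor"]
        stream_id, uri = sensor.get("id"), sensor.get("uri")
    elif "event" in payload:  # VST camera_status_change shape
        event = payload["event"]
        stream_id, uri = event.get("camera_id"), event.get("camera_url")
    else:
        raise ValueError("unrecognized payload shape")
    if not uri:
        raise ValueError(f"no uri provided for stream {stream_id!r}")
    return stream_id, uri

print(extract_stream({"sensor": {"id": "3", "uri": "rtsp://example/stream"}}))
# → ('3', 'rtsp://example/stream')
```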

Another issue we are facing: when we try to build the container using the container_builder tool (or through the Graph Composer UI) with the 24.08 version, a registry error is raised saying that NvDsSourceExt 1.3.5 can't be found.

This leads us to think that:

  • The .so files included in /workspace/ds-as-a-service-facedetect/{deepstream,gxf} and referenced in the manifest are not taken into account when running the graph from the composer or when building a new container from the same graph, even though they are present in the container.
  • Some configuration file describing this message might exist somewhere and be used at build time.
  • The deepstream-server sample present in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream_server is used as one of the components in the containers, but the default corresponding component provided in Graph Composer through the ngc-public repo is something else entirely (but what?).
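To confirm which extension versions a graph actually references (e.g., to understand why the registry rejects NvDsSourceExt 1.3.5), we scan the extracted manifest. This is a crude stdlib-only sketch, not a real YAML parse, and the sample text below is just an illustration of the entry layout we observed:

```python
import re

def list_extensions(manifest_text: str) -> dict[str, str]:
    """Collect extension-name → version pairs from a manifest-like text.

    Crude line scan: pairs each '- extension: <name>' entry with the next
    'version:' line. Not a real YAML parser.
    """
    versions = {}
    current = None
    for line in manifest_text.splitlines():
        m = re.match(r"\s*-?\s*extension:\s*(\S+)", line)
        if m:
            current = m.group(1)
            continue
        m = re.match(r"\s*version:\s*(\S+)", line)
        if m and current:
            versions[current] = m.group(1)
            current = None
    return versions

sample = """\
- extension: NvDsSourceExt
  version: 1.3.5
- extension: NvDsBaseExt
  version: 1.3.0
"""
print(list_extensions(sample))
# → {'NvDsSourceExt': '1.3.5', 'NvDsBaseExt': '1.3.0'}
```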

Would it be possible to get some guidance on how we can achieve our goal, either by recreating an entirely new graph or by updating the ones extracted from either of those two containers?
Also, if we want to update and modify the eMDX algorithms (which will probably be a question coming soon), is there a GitHub repo somewhere with sources/build/modification instructions?

Thank you,

Thomas.

Hi,
Any news on that front?

Thank you,

Thomas.