Best way to implement custom models that are not Detection or Segmentation

Description

So I am trying to implement ultra fast lane detection v2 in my DeepStream application. The 'nvinfer' plugin only supports detection and segmentation models, so I cannot use it for this purpose. The steps I can think of would involve:

  • A probe to get the relevant frame and its associated metadata
  • Running inference on the frame using a TensorRT-optimized engine of ultra fast lane detection v2
  • Attaching the inference results to NvDsBatchMeta using NvDsUserMeta
  • Using another probe to set NvDsDisplayMeta to visualize the lanes
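The inference step above produces raw UFLD-style output that still needs decoding before it can be attached as metadata. Below is a minimal, hedged sketch of the row-anchor decoding commonly used for this family of models: per row anchor, the model emits classification scores over a grid of column cells, and the expected value over those cells gives a sub-cell x coordinate. The output layout (an extra trailing "no lane point" bin, the number of cells) is an assumption here; verify it against your exported engine.

```python
import math

def decode_lane(row_logits, num_cols, img_w):
    """Decode one lane from UFLD-style row-anchor logits.

    row_logits: one list per row anchor, each with num_cols + 1 scores;
    the extra last bin means "no lane point on this row" (a common
    UFLD convention -- an assumption, check your exported model).
    Returns one x pixel coordinate (or None) per row anchor.
    """
    xs = []
    for logits in row_logits:
        # Numerically stable softmax over all bins, including the "absent" bin.
        m = max(logits)
        exps = [math.exp(v - m) for v in logits]
        total = sum(exps)
        probs = [e / total for e in exps]
        if probs[-1] > 0.5:
            # The "no point on this row" bin wins.
            xs.append(None)
            continue
        # Expectation over the location bins gives sub-cell precision.
        loc_probs = probs[:-1]
        norm = sum(loc_probs)
        expected_col = sum(i * p for i, p in enumerate(loc_probs)) / norm
        xs.append(expected_col / (num_cols - 1) * img_w)
    return xs
```

The resulting per-row (x, y) points are what you would then attach to the batch metadata and later draw.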

Now, the issue is that this sounds rather convoluted, and I am hoping there is a better way. Am I missing something? Any help would be appreciated.

Environment

TensorRT Version: 8.4.15
GPU Type: RTX 3080
CUDA Version: 11.7
CUDNN Version: 8.7.0
Operating System + Version: Ubuntu 20.04
DeepStream Version: 6.1.1

Hi,

This looks more related to DeepStream; we are moving this post to the DeepStream forum so you can get better help.

Thank you.


You are on the right track. This model needs specific post-processing and its output format is non-standard, so you do need the customization you described.


Thank you for the prompt response, Fiona. I will open a new question if something comes up, but I am good for now.
