Description
I am trying to integrate Ultra Fast Lane Detection v2 into my DeepStream application. The `nvinfer` plugin has built-in output parsing only for detection, classification, and segmentation models, so it cannot handle a lane-detection model's output directly. The steps I can think of would involve:
- A probe to get the relevant frame and its associated metadata
- Running inference on the frame with a TensorRT-optimized engine of Ultra Fast Lane Detection v2
- Attaching the inference results to `NvDsBatchMeta` via `NvDsUserMeta`
- Using another probe to populate `NvDsDisplayMeta` to visualize the lanes
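Between the inference step and the display-meta step, the raw engine output has to be decoded into lane points before it can be attached as user meta or drawn. Below is a minimal sketch of row-anchor decoding for a UFLDv2-style output tensor; the `loc_row` name, the `(num_grid, num_rows, num_lanes)` layout, and the normalized row anchors are assumptions about the exported engine, not confirmed details of any particular export.

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def decode_lanes(loc_row, img_w, img_h, row_anchors, conf_thresh=0.5):
    """Decode a row-anchor classification tensor into per-lane point lists.

    loc_row: assumed shape (num_grid, num_rows, num_lanes) -- logits over
             horizontal grid cells, per row anchor, per lane.
    row_anchors: normalized y positions (0..1) of each row anchor.
    Returns a list of lanes, each a list of (x, y) pixel coordinates.
    """
    num_grid, num_rows, num_lanes = loc_row.shape
    prob = softmax(loc_row, axis=0)               # distribution over grid cells
    grid = np.arange(num_grid).reshape(-1, 1, 1)
    expected = (prob * grid).sum(axis=0)          # sub-cell expected x index
    max_prob = prob.max(axis=0)                   # used as a per-point confidence

    lanes = []
    for lane in range(num_lanes):
        pts = []
        for r in range(num_rows):
            if max_prob[r, lane] > conf_thresh:
                x = expected[r, lane] / (num_grid - 1) * img_w
                y = row_anchors[r] * img_h
                pts.append((float(x), float(y)))
        lanes.append(pts)
    return lanes
```

The resulting point lists are what would be copied into `NvDsUserMeta` in the inference probe and later turned into `NvDsDisplayMeta` line segments in the drawing probe.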
The issue is that this sounds overly complicated, and I am hoping there is a better way. Am I missing something? Any help would be appreciated.
Environment
TensorRT Version: 8.4.15
GPU Type: RTX 3080
CUDA Version: 11.7
CUDNN Version: 8.7.0
Operating System + Version: Ubuntu 20.04
DeepStream Version: 6.1.1