I want to use deepstream-app to process a network RTSP stream, obtain the position information of tracked and detected vehicles, and save a snapshot image of each car.
What should I do? Can you help me? Thank you.
All inference results are saved in the metadata -> https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream_Plugin_Manual%2Fdeepstream_plugin_metadata.03.1.html
You can refer to the "osd_sink_pad_buffer_probe()" function in deepstream-test1.
Thank you for your quick reply.
I am using the YOLO interface with the nvinfer plugin. I want to extract vehicle images based on each vehicle's position, width, and height, and then use a second YOLO model to detect the car's lights and determine whether they are on. Does the nvinfer plugin provide this capability, and if not, what should I do?
RTSP stream -> YOLO vehicle detection -> YOLO car-light detection -> determine whether the lights are on
We have a back-to-back detector sample; can you refer to it? It should work for YOLO detection.
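For reference, in the back-to-back arrangement the second detector runs as a secondary nvinfer instance that operates only on the objects found by the first one. The key lines in the second detector's config file look roughly like this (the gie-unique-id values are illustrative; model paths and other required keys are omitted):

```ini
[property]
# process-mode=2 runs this nvinfer in secondary mode, on cropped objects
process-mode=2
# network-type=0 means this model is a detector, not a classifier
network-type=0
# Unique id for this engine instance
gie-unique-id=2
# Only process objects emitted by the primary detector (gie-unique-id=1)
operate-on-gie-id=1
```

With this setup the cropping of each vehicle is done by nvinfer itself before the second detection pass, so you do not need to extract the vehicle images manually.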
Thank you for your reply. I'm glad the back-to-back sample can be applied, but I ran into problems while using it; can you check it for me? Thank you. https://devtalk.nvidia.com/default/topic/1068909/deepstream-sdk/detector1-gt-cropped-images-gt-detector-2-application-cascading-in-the-latest-back-to-back/post/5414594/?offset=4#5414712