Decode KLV in DeepStream pipeline

This is a follow-up post to this topic: Decode KLV (Key Length Value) in Deepstream pipeline?

As @superelectron mentioned in that post, having access to the KLV data in a transport stream would be fantastic.

The motivation for KLV data is that it contains information about the camera that captured the video, such as its location and even orientation, which lets you do things like estimate the geo-location of objects detected in a given frame.

Typically, in the streams I’ve worked on that have this (generally a transport stream), there are two elementary streams: one for the video and one for the KLV data. Using ffmpeg, it’s pretty straightforward to parse the packets (which can be video frames or KLV packets), decode them, and then match them up by presentation timestamp. I’m just trying to figure out how to do the same with DeepStream / GStreamer (I’m kind of new to both).
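For illustration, here is a minimal sketch of that ffmpeg-side approach in Python using PyAV. It is only a sketch under some assumptions: the KLV rides in a separate data stream of the transport stream, and "input.ts" is just a placeholder path.

```python
import av  # PyAV: Python bindings for the ffmpeg libraries

# Open the transport stream; "input.ts" is a placeholder path.
container = av.open("input.ts")

# Find the video stream and the KLV (data) stream.
video_stream = next(s for s in container.streams if s.type == "video")
data_stream = next(s for s in container.streams if s.type == "data")

klv_packets = []   # (pts in seconds, raw KLV bytes)
video_frames = []  # (pts in seconds, decoded frame)

for packet in container.demux(video_stream, data_stream):
    if packet.pts is None:
        continue
    pts_seconds = float(packet.pts * packet.time_base)
    if packet.stream == data_stream:
        # Raw SMPTE 336 KLV bytes; hand them to your existing KLV decoder.
        klv_packets.append((pts_seconds, bytes(packet)))
    else:
        for frame in packet.decode():
            video_frames.append((pts_seconds, frame))

# Match each frame to the KLV packet with the closest presentation timestamp.
for frame_pts, frame in video_frames:
    pts, klv = min(klv_packets, key=lambda p: abs(p[0] - frame_pts),
                   default=(None, None))
    # ... decode `klv` and associate the metadata with `frame` ...
```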

There is no KLV support in the GStreamer community at the moment. Maybe you can refer to a 3rd-party solution, e.g. GStreamer In-Band Metadata | In-band metadata | RidgeRun

Hi @Fiona.Chen, I have a little experience with GStreamer. We have a GStreamer pipeline now that can demux a video stream (either a recorded .ts file or a live UDP stream). Off the top of my head, without looking, I believe it uses a tsdemux element and a combination of caps filters and queues to split the stream into a video stream and a data stream. I can easily parse / decode the KLV bytes once I get them. At the moment I’m just getting started with DeepStream, and since it’s based on GStreamer, I naively assume there’s somewhere I can insert that same tsdemux element to get at the data stream and send it to an appsink. I’m just not sure where that is at the moment.
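For reference, here is a rough sketch in Python of the kind of split I mean. It assumes an H.264 video elementary stream and that tsdemux exposes the data stream with meta/x-klv caps; the input path is a placeholder, and our real pipeline may differ in the details.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# "input.ts" is a placeholder; for a live stream a udpsrc front end would replace filesrc.
pipeline = Gst.parse_launch(
    "filesrc location=input.ts ! tsdemux name=demux "
    # Video branch (H.264 assumed); fakesink stands in for whatever consumes the video.
    "demux. ! queue ! h264parse ! avdec_h264 ! fakesink "
    # Data branch: raw KLV bytes out through an appsink.
    "demux. ! queue ! appsink name=klvsink caps=meta/x-klv emit-signals=true sync=false"
)

def on_new_klv_sample(sink):
    sample = sink.emit("pull-sample")
    buf = sample.get_buffer()
    data = buf.extract_dup(0, buf.get_size())
    # Hand the raw KLV bytes plus the PTS to the existing KLV parser.
    print("KLV packet: %d bytes, pts=%s" % (len(data), buf.pts))
    return Gst.FlowReturn.OK

klvsink = pipeline.get_by_name("klvsink")
klvsink.connect("new-sample", on_new_klv_sample)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)
```

The buffer PTS from the appsink is what I’d use to match the KLV packets back to frames. The open question is where the DeepStream elements slot into a pipeline like this.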

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

DeepStream is just an SDK with some NVIDIA HW-accelerated GStreamer plugins and sample pipelines.

I think what you need is just to use your “tsdemux element and a combination of caps filters and queues to split the stream into a video and data stream” as the source; for the other parts, you can refer to any of the DeepStream samples.
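For example, a rough sketch of how the two might be combined is below. It is only a sketch: "input.ts" and "my_pgie_config.txt" are placeholders, an H.264 video elementary stream is assumed, and the video branch just follows the usual layout of the DeepStream sample pipelines.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Sketch only: the video branch follows the common DeepStream sample layout
# (nvv4l2decoder -> nvstreammux -> nvinfer -> nvdsosd), while the KLV branch
# feeds an appsink exactly as in the earlier snippet.
pipeline = Gst.parse_launch(
    "filesrc location=input.ts ! tsdemux name=demux "
    "demux. ! queue ! h264parse ! nvv4l2decoder ! mux.sink_0 "
    "nvstreammux name=mux batch-size=1 width=1920 height=1080 ! "
    "nvinfer config-file-path=my_pgie_config.txt ! "
    "nvvideoconvert ! nvdsosd ! nveglglessink "
    "demux. ! queue ! appsink name=klvsink caps=meta/x-klv emit-signals=true sync=false"
)
```

You would attach the same “new-sample” callback to the appsink as in the earlier snippet, and your existing KLV parser can then match the metadata to frames by PTS.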
