• Hardware Platform (Jetson / GPU): 2080 Ti
• DeepStream Version: 4.0 (Docker)
I tried to save the output metadata using “infer-raw-output-dir=./metadata/”. As I understand it, this data contains information about the final layer of the neural network. In the “metadata” directory, files are saved in .bin format, which is not human readable. Is there another format I can save the metadata in, so that I can understand the final-layer output?
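For reference, the .bin files appear to be raw copies of each output layer’s buffer, so I can decode them offline with something like the standalone sketch below (this assumes an FP32 output layer; other data types would need a different element size, and the file name is just an example). But I would prefer a structured format:

```cpp
// Minimal sketch: print one raw .bin tensor dump as index,value CSV.
// Assumes the dumped layer is FP32; check your model's output type first.
// Usage (file name illustrative): ./read_bin metadata/some_layer.bin
#include <cstdio>
#include <vector>

int main (int argc, char **argv)
{
  if (argc < 2) {
    fprintf (stderr, "usage: %s <raw-output.bin>\n", argv[0]);
    return 1;
  }
  FILE *f = fopen (argv[1], "rb");
  if (!f) { perror ("fopen"); return 1; }

  fseek (f, 0, SEEK_END);
  long bytes = ftell (f);
  fseek (f, 0, SEEK_SET);

  // Interpret the flat buffer as float32 values.
  std::vector<float> vals (bytes / sizeof (float));
  if (fread (vals.data (), sizeof (float), vals.size (), f) != vals.size ())
    fprintf (stderr, "short read\n");
  fclose (f);

  for (size_t i = 0; i < vals.size (); i++)
    printf ("%zu,%f\n", i, vals[i]);
  return 0;
}
```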
In the documentation there are various message conversion and brokering elements that may suit your purpose, or you can write your own plugin:
Gst-nvmsgconv works with NVDS_EVENT_MSG_META. But I want NvDsInferTensorMeta in a JSON or CSV file. Can we do that?
Hi,
you can refer to the code in sources/apps/sample_apps/deepstream-infer-tensor-meta-test/deepstream_infer_tensor_meta_test.cpp::pgie_pad_buffer_probe to see how to extract NvDsInferTensorMeta attached to each frame’s metadata; then you can make adjustments based on your needs.
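For reference, here is a condensed sketch of that pattern (adapted from the sample, with error handling omitted): a probe on the nvinfer src pad walks each frame’s user metadata, finds NVDSINFER_TENSOR_OUTPUT_META, and emits every output-layer value as a CSV line. It assumes output-tensor-meta is enabled in the nvinfer configuration and that the output layers are FP32:

```cpp
// Sketch of tensor-meta extraction in a pad probe, condensed from the
// pattern in deepstream_infer_tensor_meta_test.cpp. Assumes the nvinfer
// config has output-tensor-meta enabled and the outputs are FP32.
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h"

static GstPadProbeReturn
pgie_src_pad_probe (GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    for (NvDsMetaList *l_user = frame_meta->frame_user_meta_list; l_user;
        l_user = l_user->next) {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
      if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
        continue;

      NvDsInferTensorMeta *tmeta =
          (NvDsInferTensorMeta *) user_meta->user_meta_data;
      for (guint i = 0; i < tmeta->num_output_layers; i++) {
        NvDsInferLayerInfo *layer = &tmeta->output_layers_info[i];
        const float *vals = (const float *) tmeta->out_buf_ptrs_host[i];

        // Print frame number, layer name, element index and value;
        // swap the g_print for your own CSV/JSON serializer as needed.
        for (guint e = 0; e < layer->dims.numElements; e++)
          g_print ("%d,%s,%u,%f\n", frame_meta->frame_num,
              layer->layerName, e, vals[e]);
      }
    }
  }
  return GST_PAD_PROBE_OK;
}
```

Writing JSON instead of CSV is just a different serializer at that point, but note the caveat below about doing work inside the probe.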
Implementing compute functionality inside a probe is not advisable, as it is a blocking call.
Another method, and the one I would prefer: modify the nvinfer plugin itself, under sources/gst-plugins/gst-nvinfer/gstnvinfer_meta_utils.cpp.
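If you take that route, the change can stay small: for example, a helper like the one below, called from wherever the plugin already holds the host output buffers. This helper is hypothetical (it is not in the stock plugin) and it does blocking file I/O, so treat it as a debugging aid only:

```cpp
/* Hypothetical debugging helper for gstnvinfer_meta_utils.cpp; not part
 * of the stock plugin. Appends one FP32 output layer to a CSV file.
 * Blocking file I/O -- suitable for offline inspection only. */
#include <cstdio>
#include "nvdsinfer.h"

static void
dump_layer_csv (const char *path, unsigned int frame_num,
    const NvDsInferLayerInfo *layer)
{
  FILE *f = fopen (path, "a");
  if (!f)
    return;
  const float *vals = (const float *) layer->buffer;
  for (unsigned int e = 0; e < layer->dims.numElements; e++)
    fprintf (f, "%u,%s,%u,%f\n", frame_num, layer->layerName, e, vals[e]);
  fclose (f);
}
```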
You need a separate thread to handle I/O if you want to write to a file or the network; otherwise, as @Amycao implies, you can block your whole pipeline. Rather than try to write that yourself, it might be easier to use or modify NVIDIA’s message broker. It’s open source and available under sources/gst-plugins. Here is the 4.0 version:
The “start” vmethod creates a worker thread which brokers metadata without blocking; “stop” triggers its shutdown. DeepStream 5.0’s version may have been updated, but I haven’t checked it out. You should try Amy’s suggestion first. In any case, you should be able to modify that to send any metadata anywhere.
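To show the shape of that pattern without pulling in the whole broker, here is a minimal worker-thread writer using GLib (my own illustration; every name in it, writer_start and so on, is hypothetical, not DeepStream or broker API). The probe only enqueues a pre-serialized line, and a dedicated thread does the blocking writes, mirroring the broker’s start/stop lifecycle:

```cpp
// Sketch of the worker-thread pattern: the probe only enqueues a
// serialized string; a dedicated thread does the blocking file I/O.
// All names here (writer_*) are illustrative, not DeepStream API.
#include <gst/gst.h>
#include <cstdio>

static GAsyncQueue *writer_queue;   /* holds g_strdup'd CSV/JSON lines */
static GThread *writer_thread;
static gchar writer_sentinel;       /* its address marks end-of-stream */

/* Drains the queue to a file; runs on its own thread. */
static gpointer
writer_loop (gpointer path)
{
  FILE *f = fopen ((const char *) path, "a");
  for (;;) {
    gchar *line = (gchar *) g_async_queue_pop (writer_queue);
    if (line == &writer_sentinel)
      break;                        /* writer_stop() was called */
    if (f)
      fputs (line, f);
    g_free (line);
  }
  if (f)
    fclose (f);
  return NULL;
}

/* Analogue of the broker's "start" vmethod. */
static void
writer_start (const char *path)
{
  writer_queue = g_async_queue_new ();
  writer_thread = g_thread_new ("tensor-writer", writer_loop, (gpointer) path);
}

/* Analogue of the broker's "stop" vmethod. */
static void
writer_stop (void)
{
  g_async_queue_push (writer_queue, &writer_sentinel);
  g_thread_join (writer_thread);
  g_async_queue_unref (writer_queue);
}

/* In the pad probe, replace any direct fprintf with a cheap enqueue:
 *   g_async_queue_push (writer_queue,
 *       g_strdup_printf ("%d,%s,%u,%f\n", frame_num, name, e, val)); */
```

With this split the probe never touches the filesystem; a slow disk only grows the queue instead of stalling the pipeline.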