Unable to release host memory

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
I have this pipeline built in python:

    |-> [filesrc] -> [qtdemux] -> [h264parse] -> [nvv4l2decoder] -> [src_pad]->
    |-> [filesrc] -> [qtdemux] -> [h264parse] -> [nvv4l2decoder] -> [src_pad]->
    |-> [streammux] -> [pgie] -> [analytics] -> [tiler] -> [nvvidconv] -> [nvosd] -> [tee] ->
    |-> [queue] -> [fake_sink]
    |-> [queue] -> [nvvidconv_postosd] -> [caps] -> [encoder] -> [rtppay] -> [rtsp_sink]
    |-> [queue] -> [display_egl]
    |-> [MsgBroker off]

When the analytics module is enabled, I get a flood of "Unable to release host memory" messages, although the pipeline appears to run correctly at first glance.
I would like to know where this is coming from and why, so I can fix it if I'm doing something wrong.


I got the same problem with the deepstream-triton Python backend.


Please read the first line of the document. Gst-nvdsanalytics — DeepStream 6.0 Release documentation

" This plugin performs analytics on metadata attached by nvinfer (primary detector) and nvtracker ."

Please add nvtracker according to the analytics sample: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-nvdsanalytics-test
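To make the suggestion concrete, here is a minimal, hypothetical sketch of creating an nvtracker element and wiring it between pgie and the analytics element, modeled on the deepstream-nvdsanalytics-test sample. The library path, dimensions, and config filename are assumptions for DeepStream 6.0; check them against your install. The `Gst` module is passed in as a parameter so the helper can be exercised without a GStreamer runtime.

```python
# Hypothetical helper: build and configure an nvtracker element (DeepStream 6.0).
# Property names follow the Gst-nvtracker plugin; the concrete values below
# (640x384, libnvds_nvmultiobjecttracker.so) are assumptions from the sample app.

def build_tracker(Gst, config_path):
    """Create an nvtracker element configured for batch processing."""
    tracker = Gst.ElementFactory.make("nvtracker", "tracker")
    if tracker is None:
        raise RuntimeError("Unable to create nvtracker")
    tracker.set_property("tracker-width", 640)
    tracker.set_property("tracker-height", 384)
    tracker.set_property("gpu-id", 0)
    tracker.set_property(
        "ll-lib-file",
        "/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so",
    )
    tracker.set_property("ll-config-file", config_path)
    tracker.set_property("enable-batch-process", 1)
    return tracker

# Wiring (inside pipeline construction), so the order matches the docs:
#   pipeline.add(tracker)
#   pgie.link(tracker)
#   tracker.link(nvanalytics)
```

The key point is the position in the chain: nvtracker must sit after the primary detector and before nvdsanalytics, since analytics consumes the metadata that both attach.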

The user manual is very important.


Hi @Fiona.Chen and thanks for the previous answer.

I have added the nvtracker plugin. I still get the error mentioned in the first post, and I can't find evidence that nvtracker is working as expected in my app.

My pipeline looks like this:

input sources/// → streammux → queue1 → pgie → tracker → analytics → tiler → queue2 → nvvidconv → queue3 → nvosd → tee → ///output plugins

Attached to the pipeline :
Pgie.src probe: Processes the neural network output, parses bounding boxes, and inserts them as NvDsObjectMeta into the batch meta. It's strongly based on the deepstream-ssd-parser example. The metadata is present, because I can see and parse the bounding boxes, confidences, labels, etc.
Analytics.src probe: Reads the output from the analytics plugin and the current objects, then parses and inserts metadata for msgbroker/msgconv. Here I print each object's tracker ID, with the value 0xffffffff meaning untracked.

At this point I expected NvDsObjectMeta.object_id to hold a valid tracker ID.
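For reference, a hedged sketch of the kind of pad probe that checks object_id downstream of the tracker, following the pattern in the DeepStream Python bindings samples. Note that in the DeepStream headers UNTRACKED_OBJECT_ID is `(guint64)(-1)`, i.e. the 64-bit all-ones value; the 0xffffffff mentioned above may be a truncated view of the same sentinel, so comparing against the full 64-bit constant is the safer check. The pyds calls below follow the public bindings but are untested here.

```python
# Hypothetical pad-probe sketch: walk the batch metadata and report each
# object's tracker ID. pyds is imported inside the probe because it is only
# available on the target machine.

UNTRACKED_OBJECT_ID = 0xFFFFFFFFFFFFFFFF  # (guint64)(-1), as in the DeepStream headers

def is_tracked(object_id):
    """True if nvtracker assigned a real ID (not the untracked sentinel)."""
    return object_id != UNTRACKED_OBJECT_ID

def analytics_src_pad_probe(pad, info, u_data):
    import pyds
    from gi.repository import Gst
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            if is_tracked(obj_meta.object_id):
                print(f"frame {frame_meta.frame_num}: object_id={obj_meta.object_id}")
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

If every object still compares equal to the sentinel after nvtracker is in the chain, the tracker is not associating the detector's output.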

Trying to debug the problem I read DS_plugin_gst-nvtracker and new-metadata-fields.

One open question: in this version of DeepStream (6.0), how should the object detection metadata be processed/stored so that nvtracker handles it correctly?
In other words, which of these NvDsObjectMeta fields does nvtracker read: detector_bbox_info or rect_params?


After some research and testing, I've found that my problem wasn't related to analytics or the tracker.
It was related to adding long texts to the display meta.
It seems that display text longer than roughly 20 characters causes memory problems, and that message is printed.
At least this is my conclusion at this point. I'm closing this.
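For anyone hitting the same symptom, here is a hedged sketch of attaching display text through NvDsDisplayMeta with a defensive length cap. The cap value `MAX_DISPLAY_LEN` is an assumption based on the observation above, not a documented limit, and the pyds calls follow the bindings samples but are untested here.

```python
# Hypothetical sketch: attach a label to a frame via NvDsDisplayMeta, truncating
# long strings first. MAX_DISPLAY_LEN is an arbitrary safety margin.

MAX_DISPLAY_LEN = 64

def safe_label(text, limit=MAX_DISPLAY_LEN):
    """Truncate display text defensively before handing it to nvosd."""
    return text if len(text) <= limit else text[: limit - 3] + "..."

def attach_label(batch_meta, frame_meta, text):
    import pyds  # DeepStream Python bindings; only available on the target box
    display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
    display_meta.num_labels = 1
    txt = display_meta.text_params[0]
    txt.display_text = safe_label(text)
    txt.x_offset, txt.y_offset = 10, 12
    txt.font_params.font_name = "Serif"
    txt.font_params.font_size = 12
    pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
```

If the "Unable to release host memory" messages disappear once labels are capped, that supports the conclusion that the display text length was the trigger.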

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.