GStreamer and TensorFlow can't run simultaneously: GPU RAM issue?

My TX2 gathers pictures from several cameras and runs AI computation on them using TensorFlow.

When I run the processes separately everything is fine, but when I run them simultaneously they work for several minutes and then crash.

The first process to stop is GStreamer, with assertion errors:

(XXX:3627): GStreamer-CRITICAL **: gst_allocator_alloc: assertion '((params->align + 1) & params->align) == 0' failed

(XXX:3627): GStreamer-CRITICAL **: gst_memory_map: assertion 'mem != NULL' failed

(XXX:3627): GStreamer-CRITICAL **: gst_allocator_free: assertion 'memory != NULL' failed

I suspect a GPU RAM problem, because GStreamer can run for many hours without any problem when it is alone on the board.

With htop, RAM usage climbs to about 70% of the total.

My first attempt was to set the parameter “per_process_gpu_memory_fraction” to 0.1, which limits the amount of GPU RAM TensorFlow takes.

This lets the software run for about 5 minutes (without it, it survives only a few seconds).
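For reference, here is a minimal sketch of how I set the fraction (assuming the TensorFlow 1.x session API with `tf.ConfigProto`; the 0.1 value is from my test above):

```python
import tensorflow as tf

# Cap TensorFlow at ~10% of device memory; on the TX2 the GPU shares
# physical RAM with the CPU, so this also reduces pressure on GStreamer.
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.1

# allow_growth makes TF allocate on demand instead of grabbing the
# whole fraction up front, which may help when sharing the board.
config.gpu_options.allow_growth = True

sess = tf.Session(config=config)
```

`allow_growth` is an additional option I have not fully tested; it changes when the memory is reserved, not the upper bound.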

My second attempt was to add 8 GB of swap, which had no noticeable effect.
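For completeness, this is roughly how I created the swap file (`/swapfile` is just the path I used). Note that CUDA allocations must stay in physical RAM, so the GPU itself cannot spill to swap, which may explain the lack of effect:

```shell
# Create and enable an 8 GB swap file (path is an example)
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Verify swap is active
free -h
```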

I don’t know which direction I must take to investigate.
Is it possible to properly split GPU memory?
Is it a GStreamer issue?
Any other ideas?

Hi,
Did you execute jetson_clocks.sh to run the GPU at max frequency?

Hello,
No, I haven’t run the jetson_clocks.sh tool.

I’m currently looking at nvidia-docker to separate my two processes into two different Docker containers. Is it possible to limit the amount of GPU RAM for each container?
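Since the TX2 has unified memory shared between the CPU and GPU, I was wondering whether Docker's standard `--memory` limit would indirectly bound the GPU allocations too. Something like this (container/image names are placeholders; I have not verified this works on Jetson):

```shell
# --memory limits the container's system RAM, which on Tegra is the
# same physical pool the GPU allocates from (names are hypothetical).
docker run --runtime nvidia --memory=2g --name tf-app  my-tf-image
docker run --runtime nvidia --memory=1g --name gst-app my-gst-image
```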

I’m not sure if it’s doable; we have no similar case and have never tried that before.

Thanks