GPU memory leak using detectnet

Hi, I trained an object detection model in DIGITS and loaded it with detectnet-console.py from jetson-inference, and everything worked well. Then I used the APIs from detectnet-console.py to set up an inference server:

class DeskV1:
       def __init__(self, model, proto, label):
           self.model = model
           self.proto = proto
           self.label = label

       def BuildNet(self):
           self.net = jetson.inference.detectNet(self.model, [f'--model={self.model}', f'--prototxt={self.proto}', f'--labels={self.label}'], 0.5)

       def Inference(self, image):
           img, width, height = jetson.utils.loadImageRGBA(image)
           detections = self.net.Detect(img, width, height, overlay="box,labels")
           centers = []
           for detection in detections:
               centers.append(detection.Center)
           size = (width, height)
           del img
           return size, centers

I intended to create a DeskV1 instance once and call "Inference" every time the server receives a new image. But every time "Inference" completes, memory usage grows by 0.2 GB, so after a while the server crashes.
How can I free the memory after "Inference" is invoked? detectnet-camera.py sets up the network once and runs inference every time the camera captures an image, but it never leaks memory. Why?

Hi,

We suspect the leak comes from jetson.utils.loadImageRGBA.
It looks like you aren't freeing the CUDA memory allocated by PyImageIO.cpp here:

Could you modify your Inference method to confirm this for us first?

Thanks.

How do I do that in Python? Does "del img" free the memory? Thanks.
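For context, in CPython `del` only removes the name binding; the object itself (and any destructor that releases its underlying buffer) is finalized once the last strong reference is gone. A minimal sketch with a hypothetical stand-in class (it does not model the CUDA capsule that jetson.utils actually returns):

```python
import gc
import weakref

class Image:
    """Hypothetical stand-in for a loaded image buffer."""
    pass

img = Image()
ref = weakref.ref(img)   # observe the object without keeping it alive

del img                  # drop the only strong reference
gc.collect()             # not strictly needed in refcounted CPython

print(ref() is None)     # → True: the object has been finalized
```

So `del img` does allow the memory to be reclaimed, provided no other reference to the image object is still held.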

Hi,

Is it possible to modify the Inference class into following and check for leakage?

       def Inference(self, image):
           img, width, height = jetson.utils.loadImageRGBA(image)
           size = (width, height)
           centers = []
           return size, centers
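A generic way to run this kind of leak check is to loop the suspect step and confirm that no memory is retained across iterations. A sketch using the standard-library tracemalloc, with a plain bytearray standing in for the image allocation (names here are illustrative, not the jetson API):

```python
import tracemalloc

def load_only(n):
    """Hypothetical stand-in for calling Inference n times with
    only the image-loading step active."""
    for _ in range(n):
        buf = bytearray(1 << 20)   # stands in for one loaded image
        del buf                    # released here; a leak would keep growing

tracemalloc.start()
load_only(100)
current, _peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(current < (1 << 20))         # → True: no per-iteration growth retained
```

If memory still grows with only the loading step active, the leak is in the loader; if it stops growing, the leak is elsewhere.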

Thanks.

It was my mistake: I was building the net many times. Inference works well now. Sorry for the trouble.
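For anyone hitting the same symptom, the fix the poster describes is to construct the network once and reuse it across requests. A minimal sketch of that pattern, with a placeholder object standing in for jetson.inference.detectNet (names are illustrative):

```python
class NetCache:
    """Hypothetical sketch: build the detection network once,
    then reuse it for every incoming image."""
    build_count = 0

    def __init__(self):
        self.net = None

    def build_net(self):
        if self.net is None:        # build only on first use
            NetCache.build_count += 1
            self.net = object()     # placeholder for detectNet(...)
        return self.net

server = NetCache()
for _ in range(5):                  # five incoming images
    net = server.build_net()        # same network object each time

print(NetCache.build_count)         # → 1
```

Each detectNet construction allocates GPU memory for the engine, so rebuilding it per request looks exactly like a leak.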

Good to know it works now.
Thanks for updating us.