Object Detection using detectron2 is getting KILLED

Dear Sir,

I am using Detectron2 for object detection on a Jetson Nano, but when I run the predictor the program gets killed abruptly.

I am using JetPack 4.3.

My code is below; it gets killed at the last line.


import torch, torchvision
print(torch.__version__, torch.cuda.is_available())

import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()

import numpy as np
import cv2
import random
import matplotlib.pyplot as plt

from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog

video = "test_2.mp4"
cap = cv2.VideoCapture(video)
cnt = 0
if cap.isOpened() == False:
    print("Error opening video stream or file")
ret, first_frame = cap.read()

# Read until the video is completed
while cap.isOpened():
    # Capture frame-by-frame
    ret, frame = cap.read()
    if ret == True:
        # Save each frame to the frames/ folder
        cv2.imwrite("frames/" + str(cnt) + ".png", frame)
        cnt = cnt + 1
        if cnt == 750:
            break
    else:
        # Break the loop when no more frames are returned
        break

FPS = cap.get(cv2.CAP_PROP_FPS)
print(FPS)

cfg = get_cfg()
# Add project-specific config (e.g., TensorMask) here if you're not running a model in detectron2's core library
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_C4_3x.yaml"))
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.9  # set threshold for this model
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_C4_3x.yaml")
predictor = DefaultPredictor(cfg)

img = cv2.imread("frames/30.png")
# Pass the frame to the model; this is where the process gets killed
outputs = predictor(img)


Hi,

Usually, a "Killed" message is caused by an out-of-memory issue.
Could you monitor the system at the same time to check the memory status?

$ sudo tegrastats
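
If tegrastats is not convenient, you can also print the free system memory from inside the script right before and after the heavy steps. This is a minimal sketch, not Detectron2-specific; it assumes a standard Linux /proc/meminfo, and the predictor/cfg/img names are the ones from your code above:

import re

def mem_available_mb():
    # Parse the MemAvailable field from /proc/meminfo (reported in kB)
    with open("/proc/meminfo") as f:
        match = re.search(r"MemAvailable:\s+(\d+) kB", f.read())
    return int(match.group(1)) // 1024

print("before building predictor:", mem_available_mb(), "MB free")
predictor = DefaultPredictor(cfg)
print("before inference:", mem_available_mb(), "MB free")
outputs = predictor(img)
print("after inference:", mem_available_mb(), "MB free")

If the available memory drops close to zero just before the process is killed, that would confirm the out-of-memory cause.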

Thanks.