Hardware:
- NVIDIA Jetson Nano Developer Kit with JetPack 4.6.1 on MicroSD Card
- Raspberry Pi Camera v2
I am attempting to write a Python program that achieves the following:
- Perform live object detection using YOLOv7
- When a car or truck is detected with at least 80% confidence, suspend the Jetson Nano with sudo systemctl suspend
- Manually wake up the Jetson by grounding the PWR BTN pin
- Continue object detection from step 1
I am currently getting errors from the GStreamer pipeline after the Jetson Nano wakes from suspend and the program continues. Below is a snippet of what I have so far. The main loop runs in the run() method of the inferThread class, which calls the infer() method of the YoLov7TRT class.
class YoLov7TRT(object):
    ...
    def infer(self, image, cap):
        ...
        if obj in ['car', 'truck'] and score >= 0.8:
            cap.release()  # release the cv2.VideoCapture before suspending
            process = subprocess.Popen(['sudo', 'systemctl', 'suspend'])
            process.wait()
            newCap = True
        return image_raw, end - start, num_of_objects, newCap
def gstreamer_pipeline(
    camera_id,
    capture_width=1280,
    capture_height=720,
    display_width=512,
    display_height=288,
    framerate=60,
    flip_method=0,
):
    return (
        "nvarguscamerasrc sensor-id=%d ! "
        "video/x-raw(memory:NVMM), "
        "width=(int)%d, height=(int)%d, "
        "format=(string)NV12, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)I420 ! "
        "videoconvert ! "
        "video/x-raw, format=(string)BGR ! appsink max-buffers=1 drop=true"
        % (
            camera_id,
            capture_width,
            capture_height,
            framerate,
            flip_method,
            display_width,
            display_height,
        )
    )
class inferThread(threading.Thread):
    def __init__(self, yolov7_wrapper):
        threading.Thread.__init__(self)
        self.yolov7_wrapper = yolov7_wrapper
        self.cap = cv2.VideoCapture(gstreamer_pipeline(camera_id=0, flip_method=2), cv2.CAP_GSTREAMER)
        # Check if the camera pipeline was successfully opened
        if not self.cap.isOpened():
            print("Error opening video capture")
    def run(self):
        while True:
            status, frame = self.cap.read()
            if status:
                img = cv2.resize(frame, (512, 288))
                result, use_time, number_of_objects, newCap = self.yolov7_wrapper.infer(img, self.cap)
                if newCap:  # need to initialize a new VideoCapture after waking from suspend
                    self.cap = cv2.VideoCapture(gstreamer_pipeline(camera_id=0, flip_method=2), cv2.CAP_GSTREAMER)
                    if not self.cap.isOpened():
                        print("Error starting video capture")
                # Here the program loops back to the 'while True' statement
            else:
                ...
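In case it helps, this is a more defensive re-initialization I have been sketching for the newCap branch. The nvargus-daemon restart is only an assumption on my part, based on the TIMEOUT message coming from nvarguscamerasrc; the helper names here are mine, not from any library, and open_fn is passed in just so the retry logic doesn't hard-depend on cv2:

```python
import subprocess
import time

def restart_argus_daemon():
    # Assumption: restarting the Argus camera daemon clears the stale state
    # left behind by suspend. I have seen this suggested as a workaround for
    # nvarguscamerasrc TIMEOUT errors, but have not confirmed it myself.
    subprocess.run(["sudo", "systemctl", "restart", "nvargus-daemon"],
                   check=False)

def reopen_capture(pipeline, open_fn, retries=5, delay=2.0):
    """Try to rebuild the capture a few times after resume.

    open_fn would be something like:
        lambda p: cv2.VideoCapture(p, cv2.CAP_GSTREAMER)
    """
    for _ in range(retries):
        cap = open_fn(pipeline)
        if cap.isOpened():
            return cap
        cap.release()  # drop the half-built pipeline before retrying
        time.sleep(delay)
    return None
```

The idea is that run() would call restart_argus_daemon() and then reopen_capture() instead of constructing cv2.VideoCapture once and hoping it succeeds.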
I am getting errors along the following lines after the suspend command is called and the run() function continues:
# cap.release()
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
# sudo systemctl suspend
[sudo] password for user:
# After Jetson Nano is woken up from suspend
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
...
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[ WARN:1] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
CONSUMER: ERROR OCCURRED
[ WARN:1] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (488) isPipelinePlaying OpenCV | GStreamer warning: unable to query pipeline state
[ WARN:1] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module nvarguscamerasrc1 reported: TIMEOUT
GST_ARGUS: Cleaning up
What is going wrong here? Are there alternatives or changes I can make to make this possible?
An alternative I thought of would be to stream the video feed from a separate program and subscribe to that stream inside of the Python program above, but I do not know how to do so.
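To make that alternative concrete, here is roughly what I had in mind, completely untested: a separate sender process would own the camera and push JPEG frames over local UDP, and the detection program would subscribe with a udpsrc pipeline, so that only the sender has to survive the suspend/resume cycle. Port 5000 and the element choices are my assumptions:

```python
# Hypothetical sketch of the "stream from a separate program" idea.

SENDER_PIPELINE = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=60/1 ! "
    "nvvidconv ! video/x-raw, format=I420 ! "
    "jpegenc ! rtpjpegpay ! "
    "udpsink host=127.0.0.1 port=5000"  # port 5000 is an arbitrary choice
)

def receiver_pipeline(port=5000):
    """Pipeline string the detection program would pass to cv2.VideoCapture."""
    return (
        "udpsrc port=%d ! "
        "application/x-rtp, encoding-name=JPEG, payload=26 ! "
        "rtpjpegdepay ! jpegdec ! videoconvert ! "
        "video/x-raw, format=BGR ! appsink max-buffers=1 drop=true" % port
    )

# The sender would run as its own process, e.g. via gst-launch-1.0 with
# SENDER_PIPELINE, and the detection program would then do:
#   cap = cv2.VideoCapture(receiver_pipeline(), cv2.CAP_GSTREAMER)
```

If something along these lines is viable, I would still need to know how the sender side should recover its own pipeline after suspend, or whether it should simply be restarted on wake.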
Any help is greatly appreciated.