Problems trying to stream a video with Jetson Nano

Hello community,
I'm developing a project whose purpose is to identify an object in a video. My system receives a video as input, and then a neural network (a YOLOv3 model) detects the object in the frames.
As output of my system, I would like to stream a video that shows the detected objects.
I created a file "streamer.py" with a class responsible for receiving the data needed to make the streamer work. Inside this class I have a variable that holds a cv2.VideoWriter.

My complete pipeline: "appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw,format=RGBA ! nvvidconv ! omxh264enc ! h264parse ! rtph264pay pt=96 config-interval=1 ! udpsink host=" + self.host + " port=" + str(self.port) + " async=false"
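Roughly how I create the writer from this pipeline (the width, height, FPS, host and port below are placeholder values; in streamer.py they come from the data passed to the class):

```python
import cv2

# Placeholder values; in streamer.py these come from the data passed to the class.
host, port = "192.168.0.10", 5000
width, height, fps = 1280, 720, 30

gst_out = (
    "appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! "
    "video/x-raw,format=RGBA ! nvvidconv ! omxh264enc ! h264parse ! "
    "rtph264pay pt=96 config-interval=1 ! "
    "udpsink host=" + host + " port=" + str(port) + " async=false"
)

# The GStreamer backend has to be selected explicitly; fourcc is 0 for appsrc pipelines.
writer_stream = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, float(fps), (width, height))
print(writer_stream.isOpened())  # this is where I get False
```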

I'm having a problem right at the beginning of my program: when I try to open my streamer, "self.writer_stream.isOpened()" returns "False". Because of that, I cannot proceed to produce the output video with the detected objects.
Can someone help me?

Hi,
From the code, it is not clear what your source is (Bayer sensor camera, USB camera, or IP camera). There should be one cv2.VideoCapture() and one cv2.VideoWriter(), as in this sample:
Displaying to the screen with OpenCV and GStreamer - #9 by DaneLLL
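A rough sketch of that pattern, assuming for illustration a USB camera at /dev/video0 and a fixed 1280x720@30 output; adapt both pipelines to your actual source, resolution, host, and port:

```python
import cv2

# Capture side: illustrative pipeline for a USB camera at /dev/video0.
gst_in = (
    "v4l2src device=/dev/video0 ! video/x-raw,width=1280,height=720,framerate=30/1 ! "
    "videoconvert ! video/x-raw,format=BGR ! appsink"
)
cap = cv2.VideoCapture(gst_in, cv2.CAP_GSTREAMER)

# Writer side: hardware H.264 encode and stream over UDP (host/port are placeholders).
gst_out = (
    "appsrc ! video/x-raw,format=BGR ! queue ! videoconvert ! "
    "video/x-raw,format=RGBA ! nvvidconv ! omxh264enc ! h264parse ! "
    "rtph264pay pt=96 config-interval=1 ! udpsink host=192.168.0.10 port=5000 async=false"
)
writer = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, 30.0, (1280, 720))

if not cap.isOpened() or not writer.isOpened():
    raise RuntimeError("Failed to open GStreamer capture or writer")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # ... run detection and draw boxes on `frame` here ...
    writer.write(frame)

cap.release()
writer.release()
```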

Please share information about the input source so that we can suggest next steps.
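Also, since isOpened() returns False immediately, please confirm that your OpenCV build was compiled with GStreamer support; if it was not, any GStreamer pipeline string will fail to open. For example:

```python
import cv2

# Print the GStreamer line from OpenCV's build configuration.
# If it says "NO", VideoWriter/VideoCapture cannot use GStreamer pipelines.
for line in cv2.getBuildInformation().splitlines():
    if "GStreamer" in line:
        print(line.strip())
```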

We also have a demonstration of YOLO models in the DeepStream SDK. You may take a look and give it a try.
