Hello there, I want to stream object-detection result frames with GStreamer on my Jetson Xavier. Here's my pipeline:
- capture frames from an IP camera using opencv-python; √
- do the image preprocessing and run inference with MXNet; √
- draw the detected bboxes on the original frame; √
- stream these frames via GStreamer RTSP, using OpenCV. ×
- open VLC player to watch the real-time frames. ×
I don't know how to implement step 4. I found an implementation, but it didn't work for me:
```python
gst_str_rtp = ("appsrc ! videoconvert ! x264enc tune=zerolatency bitrate=500 "
               "speed-preset=superfast ! rtph264pay ! udpsink host=10.168.1.177 port=5000")

out = cv2.VideoWriter(gst_str_rtp, 0, fps, (frame_width, frame_height), True)

while True:
    # 1. capture frame
    # 2. inference
    # 3. draw boxes
    out.write(frame)
```
Then I open VLC and enter the address rtp://10.168.1.177:5000, but it cannot open the stream.
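From what I've read, a raw RTP stream carries no in-band codec description, so VLC may need an SDP file describing it instead of a bare rtp:// URL. This is my guess at a matching SDP (host/port copied from my pipeline; payload type 96 is rtph264pay's default), saved as e.g. stream.sdp and opened in VLC:

```
v=0
o=- 0 0 IN IP4 10.168.1.177
s=Jetson detection stream
c=IN IP4 10.168.1.177
t=0 0
m=video 5000 RTP/AVP 96
a=rtpmap:96 H264/90000
```

I haven't confirmed this works, so corrections are welcome.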
Why didn't I try other frameworks, C++, or DeepStream?
First, my MobileNet + YOLOv3 model was trained in MXNet, and I don't want to retrain it with another framework.
Second, I know the DeepStream example apps can do RTSP streaming, but I can't deploy my MXNet model with DeepStream. So I trained a MobileNet + SSD with TensorFlow and attempted to deploy it with DeepStream, but I hit a blocker: ERROR: sample_uff_ssd: Fail to parse
Maybe I could write the GStreamer pipeline in C++, but translating my Python inference code to C++ is a lot of work, so I'm looking for a Python solution.
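If gst-rtsp-server's Python bindings turn out to be the way to go, I imagine the skeleton would look something like this (the mount point /detect, the default port 8554, the caps, and the factory name are all my guesses; frames would still need to be pushed into the appsrc from the OpenCV loop, which this sketch doesn't do):

```python
# Sketch only: serving an RTSP stream from Python via gst-rtsp-server.
PIPELINE = ("appsrc name=src is-live=true format=time "
            "caps=video/x-raw,format=BGR,width=640,height=480,framerate=25/1 "
            "! videoconvert ! x264enc tune=zerolatency "
            "! rtph264pay name=pay0 pt=96")

def main():
    import gi
    gi.require_version("Gst", "1.0")
    gi.require_version("GstRtspServer", "1.0")
    from gi.repository import Gst, GstRtspServer, GLib

    Gst.init(None)

    class DetectionFactory(GstRtspServer.RTSPMediaFactory):
        def do_create_element(self, url):
            # Each connecting client gets this pipeline; the OpenCV loop
            # would push BGR frames into the element named "src".
            return Gst.parse_launch(PIPELINE)

    server = GstRtspServer.RTSPServer()
    factory = DetectionFactory()
    factory.set_shared(True)  # all clients share one pipeline
    server.get_mount_points().add_factory("/detect", factory)
    server.attach(None)
    # Then open in VLC: rtsp://<jetson-ip>:8554/detect
    GLib.MainLoop().run()

# call main() to start the server (blocks in the GLib main loop)
```

Has anyone wired OpenCV frames into an appsrc this way on a Xavier?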