Hi Forum,
This is my first post here, so pardon me if I make any mistakes with categories etc.
I am facing a problem while trying to make detectnet-camera apply the model to an H.264 RTSP IP camera stream.
The error output is as follows:
jetson.utils -- PyCamera_New()
jetson.utils -- PyCamera_Init()
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera::Create('rtspsrc location=rtsp://admin:88888888@192.168.1.146:10554/udp/av0_0 latency=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=480 ') -- invalid camera device requested
jetson.utils -- PyCamera_Dealloc()
Traceback (most recent call last):
File "my-detection.py", line 5, in <module>
camera = jetson.utils.gstCamera(640, 480, "rtspsrc location=rtsp://admin:88888888@192.168.1.146:10554/udp/av0_0 latency=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=480 ")
Exception: jetson.utils -- failed to create gstCamera device
PyTensorNet_Dealloc()
gstCamera.cpp (17.5 KB)
My detectnet-camera.py is:
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.gstCamera(640, 480, "rtspsrc location=rtsp://admin:88888888@192.168.1.146:10554/udp/av0_0 latency=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=480 ")
display = jetson.utils.glDisplay()

while display.IsOpen():
    img, width, height = camera.CaptureRGBA()
    detections = net.Detect(img, width, height)
    display.RenderOnce(img, width, height)
    display.SetTitle("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))
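For comparison, the examples I have seen construct gstCamera with a camera device (a V4L2 node or CSI sensor index) rather than a full GStreamer pipeline string, e.g. (sketch only, assuming a hypothetical USB webcam at /dev/video0):

# Hypothetical usage: gstCamera given a V4L2 device instead of a pipeline string
camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")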
My IP camera is currently connected over WiFi, my Jetson Nano is able to ping the camera's IP, and I ran the gst-launch-1.0 pipeline below, which works fine:
gst-launch-1.0 rtspsrc location=rtsp://admin:88888888@192.168.1.146:10554/udp/av0_0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx, width=1280, height=720 ! videoconvert ! xvimagesink
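If it helps, the same pipeline could in principle be consumed from Python by terminating it in appsink, pulling decoded frames with OpenCV, and copying them to CUDA memory for detectNet. This is only a sketch and assumes OpenCV was built with GStreamer support; the pipeline elements and RTSP URL are the same ones as above:

import cv2
import numpy as np
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# Same source as the working gst-launch test, but ending in appsink so
# OpenCV can pull decoded BGR frames (requires OpenCV with GStreamer support)
pipeline = ("rtspsrc location=rtsp://admin:88888888@192.168.1.146:10554/udp/av0_0 latency=0 ! "
            "rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! "
            "video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink")
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # detectNet expects a CUDA RGBA float image, so convert and upload each frame
    rgba = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA).astype(np.float32)
    img = jetson.utils.cudaFromNumpy(rgba)
    detections = net.Detect(img, frame.shape[1], frame.shape[0])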
I am trying to pass the video stream in, convert it to a format that jetson-inference accepts, and apply the model to it, but I always run into the error above.
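For reference, newer builds of jetson-inference expose a videoSource/videoOutput API that accepts an rtsp:// URL directly, so the detection loop could look like the following (untested sketch, assuming the installed build includes that API; the URL is the same one as above):

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# videoSource builds the RTSP/H.264 decode pipeline internally
input = jetson.utils.videoSource("rtsp://admin:88888888@192.168.1.146:10554/udp/av0_0")
output = jetson.utils.videoOutput("display://0")

while output.IsStreaming():
    img = input.Capture()          # returns a cudaImage
    detections = net.Detect(img)   # newer API takes the image dimensions from cudaImage
    output.Render(img)
    output.SetStatus("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))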
Thanks,
amosang76