Hey, I’m using a Jetson Nano with a Raspberry Pi camera and running code similar to Donkey Car, so the image size is supposed to be 160x120.
If I initialize the camera with this pipeline:
nvarguscamerasrc ! video/x-raw(memory:NVMM), width=3280, height=2464, format=(string)NV12, framerate=(fraction)21/1 ! nvvidconv flip-method=0 ! video/x-raw, width=(int)160, height=(int)120, format=(string)BGRx ! videoconvert ! appsink
I get the following errors:
NvDdkVicConfigure Failed
nvbuffer_transform Failed
gst_nvvconv_transform: NvBufferTransform Failed
[ WARN:0] global /home/jetbot/Downloads/opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module nvarguscamerasrc0 reported: Internal data stream error.
but this pipeline (notice the ONLY difference is the output dimensions)
nvarguscamerasrc ! video/x-raw(memory:NVMM), width=3280, height=2464, format=(string)NV12, framerate=(fraction)21/1 ! nvvidconv flip-method=0 ! video/x-raw, width=(int)224, height=(int)224, format=(string)BGRx ! videoconvert ! appsink
generates no errors
My best guess is that there’s a minimum image size for the output stream. If so, what size should I use? And what is the most efficient way of getting a 160x120 image? I have no problem using cv2 to do the rescale; I’m just wondering if I can do it within GStreamer instead.
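In case it’s useful, here’s a minimal sketch of the cv2 fallback I mentioned (assuming OpenCV was built with GStreamer support; gst_pipeline is just a placeholder name, and it uses the 224x224 pipeline from above since that one runs cleanly):

import cv2

# The working 224x224 pipeline from above
gst_pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=3280, height=2464, format=(string)NV12, framerate=(fraction)21/1 ! "
    "nvvidconv flip-method=0 ! "
    "video/x-raw, width=(int)224, height=(int)224, format=(string)BGRx ! "
    "videoconvert ! appsink"
)

cap = cv2.VideoCapture(gst_pipeline, cv2.CAP_GSTREAMER)
ret, frame = cap.read()
if ret:
    # Downscale on the CPU to the 160x120 that the Donkey Car code expects
    frame = cv2.resize(frame, (160, 120), interpolation=cv2.INTER_AREA)
cap.release()

This works, but resizing every frame on the CPU feels wasteful, which is why I’m asking whether the scaling can happen inside the pipeline itself.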