Hello,
I’m using MediaPipe on the Jetson NX, and it needs to be fed images like this:
auto input_frame = absl::make_unique<mediapipe::ImageFrame>(
    mediapipe::ImageFormat::SRGBA, camera_frame_raw.cols, camera_frame_raw.rows,
    mediapipe::ImageFrame::kGlDefaultAlignmentBoundary);
cv::Mat input_frame_mat = mediapipe::formats::MatView(input_frame.get());
cv::cvtColor(camera_frame_raw, input_frame_mat, cv::COLOR_BGR2RGBA);
I’m using this GStreamer pipeline:
nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)4032, height=(int)3040, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw,format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink
With this pipeline I can capture into camera_frame_raw (cap >> camera_frame_raw), visualize it, and feed it to MediaPipe after the cvtColor to RGBA.
Now, to increase the frame rate and efficiency, I would like to get rid of ! videoconvert !, because the frame is large and that conversion runs on the CPU.
This is the pipeline I’m trying to use:
nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)4032, height=(int)3040, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw,format=(string)I420 ! appsink
I also tried using nvvideoconvert (RGBA / I420) with the appropriate cvtColor conversion applied before feeding the frame to MediaPipe. Although I can visualize the result, MediaPipe does not accept it, even though the final color space is the same (using cv::COLOR_YUV2BGRA_I420; I also tried YUV -> BGR -> RGBA).
What am I doing wrong?
Thanks