Mediapipe

Here you can find one-liners for running MediaPipe samples on a Jetson equipped with a USB camera (at /dev/video0). The setup was tested on JetPack 4.6.1 devkits:
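Before launching the containers, it may help to confirm that the camera actually enumerates at /dev/video0. A quick check (assuming the v4l-utils package is installed; adjust the device node to your setup):

v4l2-ctl --list-devices
v4l2-ctl -d /dev/video0 --list-formats-ext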
Face Mesh

export DISPLAY=:0
xhost +
docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix -v /tmp/argus_socket:/tmp/argus_socket --cap-add SYS_PTRACE --device /dev/video0:/dev/video0 iad.ocir.io/idso6d7wodhe/mediapipe:latest /bin/bash -c 'GLOG_logtostderr=1 bazel-bin/mediapipe/examples/desktop/face_mesh/face_mesh_gpu --calculator_graph_config_file=mediapipe/graphs/face_mesh/face_mesh_desktop_live_gpu.pbtxt'

Hand

export DISPLAY=:0 # or :1, depending on your environment
xhost +
docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix -v /tmp/argus_socket:/tmp/argus_socket --cap-add SYS_PTRACE --device /dev/video0:/dev/video0 iad.ocir.io/idso6d7wodhe/mediapipe:latest /bin/bash -c 'GLOG_logtostderr=1 bazel-bin/mediapipe/examples/desktop/hand_tracking/hand_tracking_gpu --calculator_graph_config_file=mediapipe/graphs/hand_tracking/hand_tracking_desktop_live_gpu.pbtxt'

Iris

export DISPLAY=:0
xhost +
docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix -v /tmp/argus_socket:/tmp/argus_socket --cap-add SYS_PTRACE --device /dev/video0:/dev/video0  iad.ocir.io/idso6d7wodhe/mediapipe:latest /bin/bash -c 'GLOG_logtostderr=1 bazel-bin/mediapipe/examples/desktop/iris_tracking/iris_tracking_gpu --calculator_graph_config_file=mediapipe/graphs/iris_tracking/iris_tracking_gpu.pbtxt'

Pose

export DISPLAY=:0
xhost +
docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix -v /tmp/argus_socket:/tmp/argus_socket --cap-add SYS_PTRACE --device /dev/video0:/dev/video0  iad.ocir.io/idso6d7wodhe/mediapipe:latest /bin/bash -c 'GLOG_logtostderr=1 bazel-bin/mediapipe/examples/desktop/pose_tracking/pose_tracking_gpu --calculator_graph_config_file=mediapipe/graphs/pose_tracking/pose_tracking_gpu.pbtxt'

Face Detection

export DISPLAY=:0
xhost +
docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix -v /tmp/argus_socket:/tmp/argus_socket --cap-add SYS_PTRACE --device /dev/video0:/dev/video0 iad.ocir.io/idso6d7wodhe/mediapipe:latest /bin/bash -c 'GLOG_logtostderr=1 bazel-bin/mediapipe/examples/desktop/face_detection/face_detection_gpu --calculator_graph_config_file=mediapipe/graphs/face_detection/face_detection_mobile_gpu.pbtxt'

Hi,

It looks like you will need to set up the required backend frameworks for GPU mode.
https://github.com/google/mediapipe/blob/master/mediapipe/docs/gpu.md

Thanks.

@AastaLLL, thank you for pointing this out!
It seems that I can run the same example with GPU support:

$ export GLOG_logtostderr=1
nvidia@nvidia-desktop:~/mediapipe$ bazel run --copt -DMESA_EGL_NO_X11_HEADERS --copt -DEGL_NO_X11     mediapipe/examples/desktop/hello_world:hello_world
DEBUG: /home/nvidia/.cache/bazel/_bazel_nvidia/ff4425722229fc486cc849b5677abe3f/external/bazel_skylib/lib/versions.bzl:96:13: Current Bazel is not a release version; cannot check for compatibility. Make sure that you are running at least Bazel 2.0.0.
INFO: Analyzed target //mediapipe/examples/desktop/hello_world:hello_world (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //mediapipe/examples/desktop/hello_world:hello_world up-to-date:
  bazel-bin/mediapipe/examples/desktop/hello_world/hello_world
INFO: Elapsed time: 0.254s, Critical Path: 0.01s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Running command line: bazel-bin/mediapipe/examples/desktop/hello_world/helINFO: Build completed successfully, 1 total action
I0427 06:47:40.512712 10259 hello_world.cc:56] Hello World!
I0427 06:47:40.512884 10259 hello_world.cc:56] Hello World!
I0427 06:47:40.512995 10259 hello_world.cc:56] Hello World!
I0427 06:47:40.513118 10259 hello_world.cc:56] Hello World!
I0427 06:47:40.513253 10259 hello_world.cc:56] Hello World!
I0427 06:47:40.513382 10259 hello_world.cc:56] Hello World!
I0427 06:47:40.513610 10259 hello_world.cc:56] Hello World!
I0427 06:47:40.513726 10259 hello_world.cc:56] Hello World!
I0427 06:47:40.513875 10259 hello_world.cc:56] Hello World!
I0427 06:47:40.514004 10259 hello_world.cc:56] Hello World!
nvidia@nvidia-desktop:~/mediapipe$ 

Good to know this. : )

 GLOG_logtostderr=1 bazel-bin/mediapipe/examples/desktop/hand_tracking/hand_tracking_gpu     --calculator_graph_config_file=mediapipe/graphs/hand_tracking/hand_tracking_mobile.pbtxt     --input_video_path=/home/nvidia/Downloads/numbers_with_hans.mov --output_video_path=/home/nvidia/Downloads/output.mp4
I0430 06:24:57.966466 19110 demo_run_graph_main_gpu.cc:51] Get calculator graph config contents: # MediaPipe graph that performs hand tracking with TensorFlow Lite on GPU.
# Used in the examples in
# mediapipe/examples/android/src/java/com/mediapipe/apps/handtrackinggpu and
# mediapipe/examples/ios/handtrackinggpu.

# Images coming into and out of the graph.
input_stream: "input_video"
output_stream: "output_video"

# Throttles the images flowing downstream for flow control. It passes through
# the very first incoming image unaltered, and waits for downstream nodes
# (calculators and subgraphs) in the graph to finish their tasks before it
# passes through another image. All images that come in while waiting are
# dropped, limiting the number of in-flight images in most part of the graph to
# 1. This prevents the downstream nodes from queuing up incoming images and data
# excessively, which leads to increased latency and memory usage, unwanted in
# real-time mobile applications. It also eliminates unnecessary computation,
# e.g., the output produced by a node may get dropped downstream if the
# subsequent nodes are still busy processing previous inputs.
node {
  calculator: "FlowLimiterCalculator"
  input_stream: "input_video"
  input_stream: "FINISHED:hand_rect"
  input_stream_info: {
    tag_index: "FINISHED"
    back_edge: true
  }
  output_stream: "throttled_input_video"
}

# Caches a hand-presence decision fed back from HandLandmarkSubgraph, and upon
# the arrival of the next input image sends out the cached decision with the
# timestamp replaced by that of the input image, essentially generating a packet
# that carries the previous hand-presence decision. Note that upon the arrival
# of the very first input image, an empty packet is sent out to jump start the
# feedback loop.
node {
  calculator: "PreviousLoopbackCalculator"
  input_stream: "MAIN:throttled_input_video"
  input_stream: "LOOP:hand_presence"
  input_stream_info: {
    tag_index: "LOOP"
    back_edge: true
  }
  output_stream: "PREV_LOOP:prev_hand_presence"
}

# Drops the incoming image if HandLandmarkSubgraph was able to identify hand
# presence in the previous image. Otherwise, passes the incoming image through
# to trigger a new round of hand detection in HandDetectionSubgraph.
node {
  calculator: "GateCalculator"
  input_stream: "throttled_input_video"
  input_stream: "DISALLOW:prev_hand_presence"
  output_stream: "hand_detection_input_video"

  node_options: {
    [type.googleapis.com/mediapipe.GateCalculatorOptions] {
      empty_packets_as_allow: true
    }
  }
}

# Subgraph that detects hands (see hand_detection_gpu.pbtxt).
node {
  calculator: "HandDetectionSubgraph"
  input_stream: "hand_detection_input_video"
  output_stream: "DETECTIONS:palm_detections"
  output_stream: "NORM_RECT:hand_rect_from_palm_detections"
}

# Subgraph that localizes hand landmarks (see hand_landmark_gpu.pbtxt).
node {
  calculator: "HandLandmarkSubgraph"
  input_stream: "IMAGE:throttled_input_video"
  input_stream: "NORM_RECT:hand_rect"
  output_stream: "LANDMARKS:hand_landmarks"
  output_stream: "NORM_RECT:hand_rect_from_landmarks"
  output_stream: "PRESENCE:hand_presence"
}

# Caches a hand rectangle fed back from HandLandmarkSubgraph, and upon the
# arrival of the next input image sends out the cached rectangle with the
# timestamp replaced by that of the input image, essentially generating a packet
# that carries the previous hand rectangle. Note that upon the arrival of the
# very first input image, an empty packet is sent out to jump start the
# feedback loop.
node {
  calculator: "PreviousLoopbackCalculator"
  input_stream: "MAIN:throttled_input_video"
  input_stream: "LOOP:hand_rect_from_landmarks"
  input_stream_info: {
    tag_index: "LOOP"
    back_edge: true
  }
  output_stream: "PREV_LOOP:prev_hand_rect_from_landmarks"
}

# Merges a stream of hand rectangles generated by HandDetectionSubgraph and that
# generated by HandLandmarkSubgraph into a single output stream by selecting
# between one of the two streams. The former is selected if the incoming packet
# is not empty, i.e., hand detection is performed on the current image by
# HandDetectionSubgraph (because HandLandmarkSubgraph could not identify hand
# presence in the previous image). Otherwise, the latter is selected, which is
# never empty because HandLandmarkSubgraph processes all images (that went
# through FlowLimiterCalculator).
node {
  calculator: "MergeCalculator"
  input_stream: "hand_rect_from_palm_detections"
  input_stream: "prev_hand_rect_from_landmarks"
  output_stream: "hand_rect"
}

# Subgraph that renders annotations and overlays them on top of the input
# images (see renderer_gpu.pbtxt).
node {
  calculator: "RendererSubgraph"
  input_stream: "IMAGE:throttled_input_video"
  input_stream: "LANDMARKS:hand_landmarks"
  input_stream: "NORM_RECT:hand_rect"
  input_stream: "DETECTIONS:palm_detections"
  output_stream: "IMAGE:output_video"
}
I0430 06:24:57.968878 19110 demo_run_graph_main_gpu.cc:57] Initialize the calculator graph.
I0430 06:24:57.974925 19110 demo_run_graph_main_gpu.cc:61] Initialize the GPU.
I0430 06:24:57.998426 19110 gl_context_egl.cc:158] Successfully initialized EGL. Major : 1 Minor: 5
I0430 06:24:58.071439 19119 gl_context.cc:324] GL version: 3.2 (OpenGL ES 3.2 NVIDIA 32.4.2)
I0430 06:24:58.071765 19110 demo_run_graph_main_gpu.cc:67] Initialize the camera or load the video.
I0430 06:24:58.122124 19110 demo_run_graph_main_gpu.cc:88] Start running the calculator graph.
I0430 06:24:58.122982 19110 demo_run_graph_main_gpu.cc:93] Start grabbing and processing frames.
INFO: Created TensorFlow Lite delegate for GPU.
I0430 06:24:58.996881 19110 demo_run_graph_main_gpu.cc:160] Prepare video writer.
I0430 06:25:24.138538 19110 demo_run_graph_main_gpu.cc:175] Shutting down.
I0430 06:25:25.532886 19110 demo_run_graph_main_gpu.cc:189] Success!
Segmentation fault (core dumped)

    Can also confirm the hand example working on a video file.

Sorted out webcam input
(here the CSI sensor of the AGX acts as a webcam via v4l2loopback, exposed as RGB at /dev/video2).

Now the issue will be to omit the loopback and read the sensor directly at /dev/video0;
however, the code will need to be modified to read directly from nvarguscamerasrc.
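For reference, the loopback route used so far would look roughly like the sketch below. This is an untested reconstruction from the description above; the device number and caps are assumptions:

sudo modprobe v4l2loopback video_nr=2
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1280, height=720' ! nvvidconv ! video/x-raw,format=RGBA ! videoconvert ! video/x-raw,format=RGB ! v4l2sink device=/dev/video2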

// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//      http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// An example of sending OpenCV webcam frames into a MediaPipe graph.
// This example requires a linux computer and a GPU with EGL support drivers.
#include <cstdlib>

#include "mediapipe/framework/calculator_framework.h"
#include "mediapipe/framework/formats/image_frame.h"
#include "mediapipe/framework/formats/image_frame_opencv.h"
#include "mediapipe/framework/port/commandlineflags.h"
#include "mediapipe/framework/port/file_helpers.h"
#include "mediapipe/framework/port/opencv_highgui_inc.h"
#include "mediapipe/framework/port/opencv_imgproc_inc.h"
#include "mediapipe/framework/port/opencv_video_inc.h"
#include "mediapipe/framework/port/parse_text_proto.h"
#include "mediapipe/framework/port/status.h"
#include "mediapipe/gpu/gl_calculator_helper.h"
#include "mediapipe/gpu/gpu_buffer.h"
#include "mediapipe/gpu/gpu_shared_data_internal.h"

constexpr char kInputStream[] = "input_video";
constexpr char kOutputStream[] = "output_video";
constexpr char kWindowName[] = "MediaPipe";

DEFINE_string(
    calculator_graph_config_file, "",
    "Name of file containing text format CalculatorGraphConfig proto.");
DEFINE_string(input_video_path, "",
              "Full path of video to load. "
              "If not provided, attempt to use a webcam.");
DEFINE_string(output_video_path, "",
              "Full path of where to save result (.mp4 only). "
              "If not provided, show result in a window.");

::mediapipe::Status RunMPPGraph() {
  std::string calculator_graph_config_contents;
  MP_RETURN_IF_ERROR(mediapipe::file::GetContents(
      FLAGS_calculator_graph_config_file, &calculator_graph_config_contents));
  LOG(INFO) << "Get calculator graph config contents: "
            << calculator_graph_config_contents;
  mediapipe::CalculatorGraphConfig config =
      mediapipe::ParseTextProtoOrDie<mediapipe::CalculatorGraphConfig>(
          calculator_graph_config_contents);

  LOG(INFO) << "Initialize the calculator graph.";
  mediapipe::CalculatorGraph graph;
  MP_RETURN_IF_ERROR(graph.Initialize(config));

  LOG(INFO) << "Initialize the GPU.";
  ASSIGN_OR_RETURN(auto gpu_resources, mediapipe::GpuResources::Create());
  MP_RETURN_IF_ERROR(graph.SetGpuResources(std::move(gpu_resources)));
  mediapipe::GlCalculatorHelper gpu_helper;
  gpu_helper.InitializeForTest(graph.GetGpuResources().get());

  LOG(INFO) << "Initialize the camera or load the video.";
  cv::VideoCapture capture;
  const bool load_video = !FLAGS_input_video_path.empty();
  if (load_video) {
    capture.open(FLAGS_input_video_path);
  } else {
    capture.open(0);
  }
  RET_CHECK(capture.isOpened());

  cv::VideoWriter writer;
  const bool save_video = !FLAGS_output_video_path.empty();
  if (!save_video) {
    cv::namedWindow(kWindowName, /*flags=WINDOW_AUTOSIZE*/ 1);
#if (CV_MAJOR_VERSION >= 3) && (CV_MINOR_VERSION >= 2)
    capture.set(cv::CAP_PROP_FRAME_WIDTH, 640);
    capture.set(cv::CAP_PROP_FRAME_HEIGHT, 480);
    capture.set(cv::CAP_PROP_FPS, 30);
#endif
  }

  LOG(INFO) << "Start running the calculator graph.";
  ASSIGN_OR_RETURN(mediapipe::OutputStreamPoller poller,
                   graph.AddOutputStreamPoller(kOutputStream));
  MP_RETURN_IF_ERROR(graph.StartRun({}));

  LOG(INFO) << "Start grabbing and processing frames.";
  bool grab_frames = true;
  while (grab_frames) {
    // Capture opencv camera or video frame.
    cv::Mat camera_frame_raw;
    capture >> camera_frame_raw;
    if (camera_frame_raw.empty()) break;  // End of video.
    cv::Mat camera_frame;
    cv::cvtColor(camera_frame_raw, camera_frame, cv::COLOR_BGR2RGB);
    if (!load_video) {
      cv::flip(camera_frame, camera_frame, /*flipcode=HORIZONTAL*/ 1);
    }

    // Wrap Mat into an ImageFrame.
    auto input_frame = absl::make_unique<mediapipe::ImageFrame>(
        mediapipe::ImageFormat::SRGB, camera_frame.cols, camera_frame.rows,
        mediapipe::ImageFrame::kGlDefaultAlignmentBoundary);
    cv::Mat input_frame_mat = mediapipe::formats::MatView(input_frame.get());
    camera_frame.copyTo(input_frame_mat);

    // Prepare and add graph input packet.
    size_t frame_timestamp_us =
        (double)cv::getTickCount() / (double)cv::getTickFrequency() * 1e6;
    MP_RETURN_IF_ERROR(
        gpu_helper.RunInGlContext([&input_frame, &frame_timestamp_us, &graph,
                                   &gpu_helper]() -> ::mediapipe::Status {
          // Convert ImageFrame to GpuBuffer.
          auto texture = gpu_helper.CreateSourceTexture(*input_frame.get());
          auto gpu_frame = texture.GetFrame<mediapipe::GpuBuffer>();
          glFlush();
          texture.Release();
          // Send GPU image packet into the graph.
          MP_RETURN_IF_ERROR(graph.AddPacketToInputStream(
              kInputStream, mediapipe::Adopt(gpu_frame.release())
                                .At(mediapipe::Timestamp(frame_timestamp_us))));
          return ::mediapipe::OkStatus();
        }));

    // Get the graph result packet, or stop if that fails.
    mediapipe::Packet packet;
    if (!poller.Next(&packet)) break;
    std::unique_ptr<mediapipe::ImageFrame> output_frame;

    // Convert GpuBuffer to ImageFrame.
    MP_RETURN_IF_ERROR(gpu_helper.RunInGlContext(
        [&packet, &output_frame, &gpu_helper]() -> ::mediapipe::Status {
          auto& gpu_frame = packet.Get<mediapipe::GpuBuffer>();
          auto texture = gpu_helper.CreateSourceTexture(gpu_frame);
          output_frame = absl::make_unique<mediapipe::ImageFrame>(
              mediapipe::ImageFormatForGpuBufferFormat(gpu_frame.format()),
              gpu_frame.width(), gpu_frame.height(),
              mediapipe::ImageFrame::kGlDefaultAlignmentBoundary);
          gpu_helper.BindFramebuffer(texture);
          const auto info =
              mediapipe::GlTextureInfoForGpuBufferFormat(gpu_frame.format(), 0);
          glReadPixels(0, 0, texture.width(), texture.height(), info.gl_format,
                       info.gl_type, output_frame->MutablePixelData());
          glFlush();
          texture.Release();
          return ::mediapipe::OkStatus();
        }));

    // Convert back to opencv for display or saving.
    cv::Mat output_frame_mat = mediapipe::formats::MatView(output_frame.get());
    cv::cvtColor(output_frame_mat, output_frame_mat, cv::COLOR_RGB2BGR);
    if (save_video) {
      if (!writer.isOpened()) {
        LOG(INFO) << "Prepare video writer.";
        writer.open(FLAGS_output_video_path,
                    mediapipe::fourcc('a', 'v', 'c', '1'),  // .mp4
                    capture.get(cv::CAP_PROP_FPS), output_frame_mat.size());
        RET_CHECK(writer.isOpened());
      }
      writer.write(output_frame_mat);
    } else {
      cv::imshow(kWindowName, output_frame_mat);
      // Press any key to exit.
      const int pressed_key = cv::waitKey(5);
      if (pressed_key >= 0 && pressed_key != 255) grab_frames = false;
    }
  }

  LOG(INFO) << "Shutting down.";
  if (writer.isOpened()) writer.release();
  MP_RETURN_IF_ERROR(graph.CloseInputStream(kInputStream));
  return graph.WaitUntilDone();
}

int main(int argc, char** argv) {
  google::InitGoogleLogging(argv[0]);
  gflags::ParseCommandLineFlags(&argc, &argv, true);
  ::mediapipe::Status run_status = RunMPPGraph();
  if (!run_status.ok()) {
LOG(ERROR) << "Failed to run the graph: " << run_status.message();
return EXIT_FAILURE;
  } else {
LOG(INFO) << "Success!";
  }
  return EXIT_SUCCESS;
}
I0502 07:25:12.988380 27443 demo_run_graph_main_gpu.cc:57] Initialize the calculator graph.
I0502 07:25:12.994838 27443 demo_run_graph_main_gpu.cc:61] Initialize the GPU.
I0502 07:25:13.021072 27443 gl_context_egl.cc:158] Successfully initialized EGL. Major : 1 Minor: 5
I0502 07:25:13.069461 27452 gl_context.cc:324] GL version: 3.2 (OpenGL ES 3.2 NVIDIA 32.4.2)
I0502 07:25:13.069752 27443 demo_run_graph_main_gpu.cc:67] Initialize the camera or load the video.
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 2592 x 1944 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 16.000000; Exposure Range min 34000, max 550385000;

GST_ARGUS: 2592 x 1458 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 16.000000; Exposure Range min 34000, max 550385000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 16.000000; Exposure Range min 22000, max 358733000;

GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 2 
   Output Stream W = 1280 H = 720 
   seconds to Run    = 0 
   Frame Rate = 120.000005 
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[ WARN:0] global /home/nvidia/opencv-4.3.0/modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 2592 x 1944 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 16.000000; Exposure Range min 34000, max 550385000;

GST_ARGUS: 2592 x 1458 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 16.000000; Exposure Range min 34000, max 550385000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 16.000000; Exposure Range min 22000, max 358733000;

GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 2 
   Output Stream W = 1280 H = 720 
   seconds to Run    = 0 
   Frame Rate = 120.000005 
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[ WARN:0] global /home/nvidia/opencv-4.3.0/modules/videoio/src/cap_gstreamer.cpp (1759) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module nvarguscamerasrc0 reported: Internal data stream error.
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
[ WARN:0] global /home/nvidia/opencv-4.3.0/modules/videoio/src/cap_gstreamer.cpp (515) startPipeline OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/nvidia/opencv-4.3.0/modules/videoio/src/cap_gstreamer.cpp (1057) setProperty OpenCV | GStreamer warning: no pipeline
[ WARN:0] global /home/nvidia/opencv-4.3.0/modules/videoio/src/cap_gstreamer.cpp (1057) setProperty OpenCV | GStreamer warning: no pipeline
I0502 07:25:14.172400 27443 demo_run_graph_main_gpu.cc:90] Start running the calculator graph.
I0502 07:25:14.173106 27443 demo_run_graph_main_gpu.cc:95] Start grabbing and processing frames.
I0502 07:25:14.173178 27443 demo_run_graph_main_gpu.cc:177] Shutting down.
INFO: Created TensorFlow Lite delegate for GPU.
[ WARN:0] global /home/nvidia/opencv-4.3.0/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
I0502 07:25:14.768599 27443 demo_run_graph_main_gpu.cc:191] Success!

After modifying the fragment to:

 LOG(INFO) << "Initialize the camera or load the video.";
  cv::VideoCapture capture;
  const bool load_video = !FLAGS_input_video_path.empty();
  const char* gst =  "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720 !  nvvidconv ! video/x-raw,format=I420 ! appsink";
  
if (load_video) {
    capture.open(FLAGS_input_video_path);
  } else {
    capture.open(gst, cv::CAP_GSTREAMER);
  }
  RET_CHECK(capture.isOpened());

  cv::VideoWriter writer;
  const bool save_video = !FLAGS_output_video_path.empty();
  if (!save_video) {
    cv::namedWindow(kWindowName, /*flags=WINDOW_AUTOSIZE*/ 1);
#if (CV_MAJOR_VERSION >= 3) && (CV_MINOR_VERSION >= 2)
    capture.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
    capture.set(cv::CAP_PROP_FRAME_HEIGHT, 720);
    capture.set(cv::CAP_PROP_FPS, 120);
#endif


Update:
The default MediaPipe implementation doesn't support CSI, as it doesn't have such a line in the OpenCV capture file
demo_run_graph_main_gpu.cc,
so in previous tests I had to use the CSI sensor as a USB camera device via v4l2loopback.
This time we will try to get MediaPipe to access the CSI camera via nvargus somehow.
So once MediaPipe got built following the instructions at mediapipe/README.md at master · AndreV84/mediapipe · GitHub,
the situation that I am getting into is as follows:

#export OPENCV_DIR=opencv-4.3.0-dev
#export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$OPENCV_DIR/lib
bazel build -c opt --copt -DMESA_EGL_NO_X11_HEADERS --copt -DEGL_NO_X11 mediapipe/examples/desktop/hand_tracking:hand_tracking_gpu

Before running it I just adjusted one line of the file /home/user/mediapipe/mediapipe/examples/desktop/demo_run_graph_main_gpu.cc from mediapipe/demo_run_graph_main_gpu.cc at master · google/mediapipe · GitHub,
so that it now contains:

const char* gst =  "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 !  nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink";
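Before wiring it into MediaPipe, the pipeline can be sanity-checked standalone (a quick smoke test, not part of the patch; fakesink substituted for appsink):

gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1' ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! 'video/x-raw, format=BGR' ! fakesink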



Also, due to some misfortune, OpenCV 4.3.0 would throw errors while building, so I moved from 4.3.0 to OpenCV 4.5.0.

Thanks to @Honey_Patouceul for the suggested edits.
@AastaLLL it would be great if the pipeline could read directly from the CSI sensor instead of from the emulated webcam made with v4l2loopback from the CSI sensor.
So in this attempt I am using:


cmake -D WITH_CUDA=ON -D CUDA_ARCH_BIN="7.2" -D CUDA_ARCH_PTX="" -D WITH_CUDNN=ON -D OPENCV_DNN_CUDA=ON -D WITH_CUBLAS=1 -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.5.0/modules -D WITH_GSTREAMER=ON -D WITH_LIBV4L=ON -D BUILD_opencv_python2=ON -D BUILD_opencv_python3=ON -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_EXAMPLES=OFF -D BUILD_opencv_python2=yes -D BUILD_opencv_python3=yes -D OPENCV_GENERATE_PKGCONFIG=ON -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local/opencv-4.5.0-dev -D CUDNN_VERSION="8.0" -D ENABLE_FAST_MATH=1 -D CUDA_FAST_MATH=1 -D WITH_OPENGL=ON -D WITH_QT=ON ../../opencv-4.5.0
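After installing, one way to confirm that GStreamer support actually made it into this build (this prints OpenCV's own build information; it assumes the Python bindings from this build are the ones importable on your path):

python3 -c "import cv2; print(cv2.getBuildInformation())" | grep -i gstreamer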

then, before running Bazel:

export OPENCV_VERSION=opencv-4.5.0-dev
 export LD_LIBRARY_PATH=/usr/local/$OPENCV_VERSION/lib

However, the results of executing the example are presented below:

ERROR: /home/nvidia/mediapipe/mediapipe/calculators/util/BUILD:708:11: C++ compilation of rule '//mediapipe/calculators/util:detection_letterbox_removal_calculator' failed (Exit 1) gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 63 argument(s) skipped)

Use --sandbox_debug to see verbose messages from the sandbox
In file included from ./mediapipe/framework/formats/location.h:39:0,
                 from mediapipe/calculators/util/detection_letterbox_removal_calculator.cc:34:
./mediapipe/framework/port/opencv_core_inc.h:18:10: fatal error: opencv2/core/version.hpp: No such file or directory
 #include <opencv2/core/version.hpp>
          ^~~~~~~~~~~~~~~~~~~~~~~~~~

So then I tried to adjust the OpenCV paths in two files: WORKSPACE and third_party/opencv_linux.BUILD (see the sketch below).
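For illustration, the WORKSPACE side of that adjustment is the new_local_repository entry pointing at the custom prefix; the path below is the one used here, and the globs in third_party/opencv_linux.BUILD have to match the same layout:

new_local_repository(
    name = "linux_opencv",
    build_file = "@//third_party:opencv_linux.BUILD",
    path = "/usr/local/opencv-4.5.0-dev",
)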
After updating the paths in these files to match the OpenCV installation, I get a different error:

export OPENCV_VERSION=opencv-4.5.0-dev
nvidia@nvidia-desktop:~/mediapipe$ export LD_LIBRARY_PATH=/usr/local/$OPENCV_VERSION/lib

The rest is in the attached log file:
log.txt (46.6 KB)

Hi,

Since it has been a while, would you mind summarizing the issue again?

Do you want CSI camera support in OpenCV?
If yes, it is supported, but you will need to compile OpenCV with the GStreamer flag enabled.

The default OpenCV package in JetPack 4.4 is built with GStreamer enabled.
If v4.4.1 is acceptable, you can try it directly.

Thanks.

@AastaLLL Thank you for following up.
My last few attempts at getting MediaPipe to work with various OpenCV versions on the latest JetPack release failed.
However, I shall try again and post the debug information.
Thank you very much.

Thanks.

Compiling OpenCV might fail due to the cuDNN API change in the recent JetPack.
To build it successfully, you can check the following GitHub for information:

@AastaLLL
Thank you for sharing.
However, I do not encounter any issues building any OpenCV version on JetPack 4.4 right now.
Moreover, I am trying version 4.5 of OpenCV, among others.
Maybe you could help out reproducing/resolving the issue at your end?
Right now I am getting the following:

 bazel build -c opt --copt -DMESA_EGL_NO_X11_HEADERS --copt -DEGL_NO_X11     mediapipe/examples/desktop/hand_tracking:hand_tracking_gpu
Starting local Bazel server and connecting to it...
INFO: Analyzed target //mediapipe/examples/desktop/hand_tracking:hand_tracking_gpu (128 packages loaded, 6128 targets configured).
INFO: Found 1 target...
INFO: From Compiling external/org_tensorflow/tensorflow/lite/delegates/gpu/api.cc:
In file included from external/opencl_headers/CL/cl.h:32:0,
                 from external/org_tensorflow/tensorflow/lite/delegates/gpu/api.h:42,
                 from external/org_tensorflow/tensorflow/lite/delegates/gpu/api.cc:16:
external/opencl_headers/CL/cl_version.h:34:104: note: #pragma message: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 220 (OpenCL 2.2)
 #pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 220 (OpenCL 2.2)")

# export OPENCV_VERSION=opencv-4.5.0-dev
Right now it won't run without the CSI sensor patch, as it aborts with:

bazel-out/aarch64-opt/bin/mediapipe/util/_objs/annotation_renderer/annotation_renderer.o:annotation_renderer.cc:function mediapipe::AnnotationRenderer::DrawGradientLine(mediapipe::RenderAnnotation const&): error: undefined reference to 'cv::rectangle(cv::_InputOutputArray const&, cv::Rect_<int>, cv::Scalar_<double> const&, int, int, int)'
bazel-out/aarch64-opt/bin/mediapipe/util/_objs/annotation_renderer/annotation_renderer.o:annotation_renderer.cc:function mediapipe::AnnotationRenderer::DrawText(mediapipe::RenderAnnotation const&): error: undefined reference to 'cv::getTextSize(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, double, int, int*)'
bazel-out/aarch64-opt/bin/mediapipe/util/_objs/annotation_renderer/annotation_renderer.o:annotation_renderer.cc:function mediapipe::AnnotationRenderer::DrawText(mediapipe::RenderAnnotation const&): error: undefined reference to 'cv::putText(cv::_InputOutputArray const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, cv::Point_<int>, int, double, cv::Scalar_<double>, int, int, bool)'
collect2: error: ld returned 1 exit status
Target //mediapipe/examples/desktop/hand_tracking:hand_tracking_gpu failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 741.540s, Critical Path: 413.70s
INFO: 2108 processes: 2108 linux-sandbox.
FAILED: Build did NOT complete successfully

Do you mean it might be an OpenCV compiling error, despite OpenCV being built, installed, and working? [tested with other samples]
Upd: this seems to resemble the issue Face Mesh undefined errors · Issue #666 · google/mediapipe · GitHub,
thus it might be a paths issue (see the sketch below).
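For reference, a sketch of the kind of third_party/opencv_linux.BUILD entry the custom prefix has to line up with (the globs are illustrative, relative to the path set in WORKSPACE, and not the exact upstream file):

cc_library(
    name = "opencv",
    srcs = glob(["lib/libopencv_*.so*"]),
    hdrs = glob(["include/opencv4/opencv2/**/*.h*"]),
    includes = ["include/opencv4/"],
    visibility = ["//visibility:public"],
)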

I built OpenCV with:

cmake -D WITH_CUDA=ON -D CUDA_ARCH_BIN="7.2" -D CUDA_ARCH_PTX="" -D WITH_CUDNN=ON -D OPENCV_DNN_CUDA=ON -DWITH_CUBLAS=1 -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.5.0/modules -D WITH_GSTREAMER=ON -D WITH_LIBV4L=ON -D BUILD_opencv_python2=ON -D BUILD_opencv_python3=ON -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_EXAMPLES=OFF -D OPENCV_GENERATE_PKGCONFIG=ON -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local/opencv-4.5.0-dev -D CUDNN_VERSION="8.0" -D ENABLE_FAST_MATH=1 -D CUDA_FAST_MATH=1 ../../opencv-4.5.0

UPD:
Just managed to build it successfully; the discrepancies were due to the OpenCV paths.

INFO: Elapsed time: 727.343s, Critical Path: 367.90s
INFO: 2109 processes: 2109 linux-sandbox.
INFO: Build completed successfully, 2121 total actions

Next I will try to add the CSI sensor support.
The original file using /dev/video0 as a webcam is attached:
demo_run_graph_main_gpu.txt (7.8 KB)

It worked with CSI with the modified file below:


demo_run_graph_main_gpu_mod.txt (8.0 KB)
Shall jetson-utils be incorporated for further improvement?

We got through the CSI sensor integration,
but now the issue is to incorporate jetson-utils.
It seems to fail due to a path issue which, if resolved, causes some glog issue.
@AastaLLL maybe you could add to this?
Proposed patched file using jetson_utils:
The error log in the case of manually adding absolute paths for cuda.h and cuda_runtime.h in /usr/local/include/jetson-utils/cudaUtility.h:

https://cdck-file-uploads-global.s3.dualstack.us-west-2.amazonaws.com/nvidia/original/3X/4/e/4eed0ad83c3b42b48f70e456a415734a75d04c94.txt
Without adding the absolute paths, it would terminate with a different error:

ERROR: /home/nvidia/mediapipe/mediapipe/examples/desktop/BUILD:58:11: C++ compilation of rule '//mediapipe/examples/desktop:demo_run_graph_main_gpu' failed (Exit 1) gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 64 argument(s) skipped)

Use --sandbox_debug to see verbose messages from the sandbox
In file included from /usr/local/include/jetson-utils/imageFormat.h:28:0,
                 from /usr/local/include/jetson-utils/videoOptions.h:26,
                 from /usr/local/include/jetson-utils/videoSource.h:27,
                 from mediapipe/examples/desktop/demo_run_graph_main_gpu.cc:33:
/usr/local/include/jetson-utils/cudaUtility.h:27:10: fatal error: cuda_runtime.h: No such file or directory
 #include <cuda_runtime.h>
          ^~~~~~~~~~~~~~~~
compilation terminated.
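One way to avoid editing the jetson-utils headers at all might be to hand the CUDA include directory to the compiler instead; an untested sketch using Bazel's standard --copt flag with the same target as above:

bazel build -c opt --copt=-I/usr/local/cuda/include --copt -DMESA_EGL_NO_X11_HEADERS --copt -DEGL_NO_X11 mediapipe/examples/desktop/hand_tracking:hand_tracking_gpu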

The file with jetson-utils incorporated:
https://cdck-file-uploads-global.s3.dualstack.us-west-2.amazonaws.com/nvidia/original/3X/c/9/c971677b6227cba2e12de484ce8d9b30128585ae.txt

Hi,

jetson-utils should already include the CUDA path in its CMakeLists.txt.
How did you manually include the file?

Thanks.

@AastaLLL
Thank you for following up.
I just added absolute paths to the file that was missing cuda.h / cuda_runtime.h,
with includes like:
#include </usr/local/cuda/…/include/cuda_runtime.h>
#include </usr/local/cuda/…/include/cuda.h>

ERROR: /home/nvidia/mediapipe/mediapipe/examples/desktop/BUILD:58:11: C++ compilation of rule '//mediapipe/examples/desktop:demo_run_graph_main_gpu' failed (Exit 1) gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 63 argument(s) skipped)

Use --sandbox_debug to see verbose messages from the sandbox
In file included from /usr/local/include/jetson-utils/imageFormat.h:28:0,
                 from /usr/local/include/jetson-utils/videoOptions.h:26,
                 from /usr/local/include/jetson-utils/videoSource.h:27,
                 from mediapipe/examples/desktop/demo_run_graph_main_gpu.cc:33:
/usr/local/include/jetson-utils/cudaUtility.h:27:10: fatal error: cuda_runtime.h: No such file or directory
 #include <cuda_runtime.h>
          ^~~~~~~~~~~~~~~~
compilation terminated.

According to the error, the file is /usr/local/include/jetson-utils/cudaUtility.h,
so I opened it with nano and set the full absolute path instead of the relative, environment-dependent path that was somehow being missed.

@AastaLLL
It turned out that on Jetson we can run all the desktop GPU samples from the list:

Cool! Thanks for sharing this with us.