Converting video to grayscale with GStreamer for visual odometry causes a crash

Hello,
I want to run the visual odometry app with custom hardware. I have successfully implemented a pipeline that retrieves the image from my camera and splits it.
I can convert the video feed to RGB with GStreamer without issues, but when I try to convert it to grayscale the application terminates unexpectedly. Could you help me spot my error, please?

This is the working line:
"pipeline": "gst-launch-1.0 -v v4l2src device=/dev/video0 ! avdec_mjpeg ! videoconvert ! video/x-raw,format=RGB,height=480,framerate=30/1 ! appsink name = acquired_image "

This is what I want to do:
"pipeline": "gst-launch-1.0 -v v4l2src device=/dev/video0 ! avdec_mjpeg ! videoconvert ! video/x-raw,format=GRAY8,height=480,framerate=30/1 ! appsink name = acquired_image "

Thank you

This is the working app:

{
  "name": "odov1",
  "modules": [
    "deepstream",
    "perception",
    "sight",
    "viewers"
  ],
  "graph": {
    "nodes": [
      {
        "name": "acquisition",
        "components": [
          {
            "name": "message_ledger",
            "type": "isaac::alice::MessageLedger"
          },
          {
            "name": "pipeline",
            "type": "isaac::deepstream::Pipeline"
          }
        ]
      },
      {
        "name": "compute",
        "components": [
          {
            "name": "message_ledger",
            "type": "isaac::alice::MessageLedger"
          },
          {
            "name": "operation1",
            "type": "isaac::perception::CropAndDownsampleCuda"
          },
          {
            "name": "operation2",
            "type": "isaac::perception::CropAndDownsampleCuda"
          }
        ]
      },
      {
        "name": "publishing",
        "components": [
          {
            "name": "message_ledger",
            "type": "isaac::alice::MessageLedger"
          },
          {
            "name": "viewer1",
            "type": "isaac::viewers::ColorCameraViewer"
          },
          {
            "name": "viewer2",
            "type": "isaac::viewers::ColorCameraViewer"
          },
          {
            "name": "viewer3",
            "type": "isaac::viewers::ColorCameraViewer"
          },
          {
            "name": "viewer1_widget",
            "type": "isaac::sight::SightWidget"
          },
          {
            "name": "viewer2_widget",
            "type": "isaac::sight::SightWidget"
          },
          {
            "name": "viewer3_widget",
            "type": "isaac::sight::SightWidget"
          }	  
        ]
      }
    ],
    "edges": [
      {
        "source": "acquisition/pipeline/acquired_image",
        "target": "compute/operation1/input_image"
      },
      {
        "source": "acquisition/pipeline/acquired_image",
        "target": "compute/operation2/input_image"
      },
      {
        "source": "acquisition/pipeline/acquired_image",
        "target": "publishing/viewer3/color_listener"
      },
      {
        "source": "compute/operation1/output_image",
        "target": "publishing/viewer1/color_listener"
      },
      {
       "source": "compute/operation2/output_image",
        "target": "publishing/viewer2/color_listener"
      }
    ]
  },
  "config": {
    "acquisition": {
      "pipeline": {
        "pipeline": "gst-launch-1.0 -v v4l2src device=/dev/video0  ! avdec_mjpeg ! videoconvert ! video/x-raw,format=RGB,height=480,framerate=30/1 ! appsink name = acquired_image "
       }
      },
      "compute": {
      "operation1": {
        "crop_start": [0, 0],
        "crop_size": [480, 640]
       },
      "operation2": {
        "crop_start": [0, 640],
        "crop_size": [480, 640]
       }
      },
      "publishing": {
      "viewer1_widget": {
        "title": "Viewer: Video Left",
        "type": "2d",
        "channels": [
          {
            "name": "publishing/viewer1/Color"
          }
        ]
      },
      "viewer2_widget": {
        "title": "Viewer: Video Right",
        "type": "2d",
        "channels": [
          {
            "name": "publishing/viewer2/Color"
          }
        ]
      },
       "viewer3_widget": {
        "title": "Viewer: Acquis ",
        "type": "2d",
        "channels": [
          {
            "name": "publishing/viewer3/Color"
          }
        ]
      }
    }
  }
}

I saw two other ways to do it, but I don't know which to pursue:
using DALI
using CUDA

Are you trying to use the DeepStream codelet to output grayscale (instead of RGB)? Is this where you're crashing, before VIO?

The DeepStream codelet currently does not support grayscale.

Thank you @atorabi
Yes, I'm using the DeepStream GStreamer codelet, and it's not crashing anymore because I specified the video width, but it is still very frustrating: I can get an output, but it is unusable due to what seems like a synchronization issue.
I get a static image composed of many images (screenshot).

same with GRAY16_XL

Hello @atorabi
Do you think that grayscale conversion with DeepStream and GStreamer will be supported in upcoming Isaac releases?
Do you think I could make it work by interfacing DALI or CUDA with Isaac?
Thanks a ton for your support.

A little help, please :)

Hi there,

Sorry for the delay. I have asked for help; hope we can get back to you soon.

Hi,
here are some lines from the OpenCV tutorial located at:
<Isaac SDK Home>/apps/tutorials/opencv_edge_detection/EdgeDetector.cpp

  output.setColorSpace(ColorCameraProto::ColorSpace::GRAYSCALE);
  ToProto(std::move(output_image), output.getImage(), tx_output_image().buffers());

Maybe my issue comes from not specifying the color space in the camera proto.

Hope to hear from you soon, thanks for the awesome support :D

I found six ways of doing an RGB-to-grayscale conversion for my camera feed coming from a DeepStream GStreamer pipeline, but I don't know which is best to run on a Jetson Nano with Isaac:

  • DeepStream
  • DALI
  • Jetson Linux Multimedia
  • Isaac ColorSpaceConverter
  • CUDA without OpenCV
  • CUDA with OpenCV

DeepStream:
DeepStream should support RGB-to-grayscale conversion through GStreamer, but when I try the following pipeline it outputs an unsynchronized feed:
"pipeline": "gst-launch-1.0 -v v4l2src device=/dev/video0 ! avdec_mjpeg ! videoconvert ! video/x-raw,format=GRAY8,height=480,width=640,framerate=30/1 ! appsink name = acquired_image "

DALI:
DALI has a color space conversion function, but the documentation says:

The C++ API is experimental, unstable and can change without notice.

So not very reassuring.

Jetson Linux Multimedia:
The Multimedia API has a V4L2 color conversion function, but I don't know whether it can accept an RGB camera stream coming from Isaac.

Isaac ColorSpaceConverter:
isaac.perception.ColorSpaceConverter can convert RGB(A) to grayscale, but its input and output are necessarily ImageProto messages, and I would need it to work on a ColorCameraProto. Because my project has real-time latency requirements, I don't think an ImageProto-to-ColorCameraProto conversion step would do the trick; it seems like the only way to make this work is to wait for the official dev team to make some modifications here (wink).
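
(If someone wanted to try that route anyway, I imagine the bridge would be a tiny codelet that takes the ColorCameraProto from the DeepStream pipeline and republishes just its image as an ImageProto for ColorSpaceConverter. Below is only a rough, untested sketch I put together from the EdgeDetector tutorial patterns quoted further down; the bare ImageProto channel, the extra buffer copy, and all class/channel names are my own assumptions, not something I verified against the SDK.)

// ColorCameraToImage.hpp -- hypothetical bridge codelet (untested sketch, names are mine).
#pragma once

#include <cstring>
#include <utility>

#include "engine/alice/alice_codelet.hpp"
#include "engine/core/assert.hpp"
#include "engine/core/image/image.hpp"
#include "engine/gems/image/conversions.hpp"
#include "messages/camera.hpp"

namespace isaac {
namespace app {

// Receives a ColorCameraProto and republishes only its image as a bare ImageProto,
// dropping the pinhole/distortion data that ColorSpaceConverter does not need.
class ColorCameraToImage : public alice::Codelet {
 public:
  void start() override { tickOnMessage(rx_camera()); }

  void tick() override {
    auto input = rx_camera().getProto();
    ImageConstView3ub view;
    const bool ok = FromProto(input.getImage(), rx_camera().buffers(), view);
    ASSERT(ok, "Failed to deserialize the input image");

    // Copy into an owned image so it can be moved into the outgoing message
    // (assumes densely packed rows, i.e. rows * cols * 3 bytes).
    Image3ub owned(view.rows(), view.cols());
    std::memcpy(owned.data().pointer(), view.data().pointer(), view.rows() * view.cols() * 3);

    ToProto(std::move(owned), tx_image().initProto(), tx_image().buffers());
    tx_image().publish(rx_camera().acqtime());
  }

  ISAAC_PROTO_RX(ColorCameraProto, camera);
  ISAAC_PROTO_TX(ImageProto, image);  // assumption: a bare ImageProto channel works here
};

}  // namespace app
}  // namespace isaac

ISAAC_ALICE_REGISTER_CODELET(isaac::app::ColorCameraToImage);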

CUDA without OpenCV:
This would require merging the OpenCV tutorial located at:
<Isaac SDK Home>/apps/tutorials/opencv_edge_detection/EdgeDetector.cpp
with the CUDARGB2Y GitHub project.

CUDA with OpenCV:
This would require merging the OpenCV tutorial located at:
<Isaac SDK Home>/apps/tutorials/opencv_edge_detection/EdgeDetector.cpp
with a custom CUDA kernel.
This is the OpenCV tutorial:

/*
Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.

NVIDIA CORPORATION and its licensors retain all intellectual property
and proprietary rights in and to this software, related documentation
and any modifications thereto. Any use, reproduction, disclosure or
distribution of this software and related documentation without an express
license agreement from NVIDIA CORPORATION is strictly prohibited.
*/
#include "EdgeDetector.hpp"

#include <memory>
#include <utility>

#include "engine/core/assert.hpp"

#include "engine/gems/image/conversions.hpp"
#include "engine/gems/image/utils.hpp"
#include "messages/camera.hpp"
#include "messages/math.hpp"

#include "opencv2/highgui.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/imgproc.hpp"

namespace isaac {
namespace opencv {

void EdgeDetector::start() {
  tickOnMessage(rx_input_image());
}

void EdgeDetector::tick() {
  const int kernel_size = get_kernel_size();
  switch (kernel_size) {
    case 1:
    case 3:
    case 5:
    case 7:
      break;
    default:
      LOG_ERROR("Invalid Kernel size for edge detector, Must be 1, 3, 5 or 7. Aborting.");
      return;
  }

  auto input = rx_input_image().getProto();
  ImageConstView3ub input_image;
  bool ok = FromProto(input.getImage(), rx_input_image().buffers(), input_image);
  ASSERT(ok, "Failed to deserialize the input image");

  const size_t rows = input_image.rows();
  const size_t cols = input_image.cols();
  Image1ub output_image(rows, cols);
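  // The two Mats below wrap the Isaac image buffers directly (no copy), so the Sobel result
  // written into 'gradient' ends up in 'output_image'.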
  cv::Mat image =
      cv::Mat(rows, cols, CV_8UC3,
              const_cast<void*>(static_cast<const void*>(input_image.data().pointer())));
  cv::Mat gradient = cv::Mat(rows, cols, CV_8U, static_cast<void*>(output_image.data().pointer()));

  cv::Mat scratch, scratch_gray_scale;
  cv::GaussianBlur(image, scratch, cv::Size(kernel_size, kernel_size), 0, 0, cv::BORDER_DEFAULT);
  cv::cvtColor(scratch, scratch_gray_scale, cv::COLOR_RGB2GRAY);

  cv::Mat gradient_x, gradient_y;
  cv::Mat abs_gradient_x, abs_gradient_y;
  cv::Sobel(scratch_gray_scale, gradient_x, CV_16S, 1, 0, kernel_size);
  cv::Sobel(scratch_gray_scale, gradient_y, CV_16S, 0, 1, kernel_size);
  cv::convertScaleAbs(gradient_x, abs_gradient_x);
  cv::convertScaleAbs(gradient_y, abs_gradient_y);
  cv::addWeighted(abs_gradient_x, 0.5, abs_gradient_y, 0.5, 0, gradient);

  auto output = tx_output_image().initProto();

  auto in_pinhole = input.getPinhole();
  const Vector2d focal = FromProto(in_pinhole.getFocal());
  const Vector2d center = FromProto(in_pinhole.getCenter());

  auto out_pinhole = output.initPinhole();
  ToProto(focal, out_pinhole.getFocal());
  ToProto(center, out_pinhole.getCenter());
  out_pinhole.setCols(in_pinhole.getCols());
  out_pinhole.setRows(in_pinhole.getRows());

  if (input.hasDistortion()) {
    auto in_distortion = input.getDistortion();
    auto out_distortion = output.initDistortion();
    auto distortion_coeff = FromProto(in_distortion.getCoefficients());
    out_distortion.setModel(in_distortion.getModel());
    ToProto(distortion_coeff, out_distortion.getCoefficients());
  }

  output.setColorSpace(ColorCameraProto::ColorSpace::GRAYSCALE);
  ToProto(std::move(output_image), output.getImage(), tx_output_image().buffers());
  tx_output_image().publish(rx_input_image().acqtime());
}

}  // namespace opencv
}  // namespace isaac

And this is the custom kernel:

#include<iostream>
#include<cstdio>
#include<opencv2/core/core.hpp>
#include<opencv2/highgui/highgui.hpp>
#include<cuda_runtime.h>

using std::cout;
using std::endl;

static inline void _safe_cuda_call(cudaError err, const char* msg, const char* file_name, const int line_number)
{
	if(err!=cudaSuccess)
	{
		fprintf(stderr,"%s\n\nFile: %s\n\nLine Number: %d\n\nReason: %s\n",msg,file_name,line_number,cudaGetErrorString(err));
		std::cin.get();
		exit(EXIT_FAILURE);
	}
}

#define SAFE_CALL(call,msg) _safe_cuda_call((call),(msg),__FILE__,__LINE__)


__global__ void bgr_to_gray_kernel( unsigned char* input, 
									unsigned char* output, 
									int width,
									int height,
									int colorWidthStep,
									int grayWidthStep)
{
	//2D Index of current thread
	const int xIndex = blockIdx.x * blockDim.x + threadIdx.x;
	const int yIndex = blockIdx.y * blockDim.y + threadIdx.y;

	//Only valid threads perform memory I/O
	if((xIndex<width) && (yIndex<height))
	{
		//Location of colored pixel in input
		const int color_tid = yIndex * colorWidthStep + (3 * xIndex);
		
		//Location of gray pixel in output
		const int gray_tid  = yIndex * grayWidthStep + xIndex;

		const unsigned char blue	= input[color_tid];
		const unsigned char green	= input[color_tid + 1];
		const unsigned char red		= input[color_tid + 2];

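		// Weighted sum approximating the ITU-R BT.601 luma coefficients.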
		const float gray = red * 0.3f + green * 0.59f + blue * 0.11f;

		output[gray_tid] = static_cast<unsigned char>(gray);
	}
}

void convert_to_gray(const cv::Mat& input, cv::Mat& output)
{
	//Calculate total number of bytes of input and output image
	const int colorBytes = input.step * input.rows;
	const int grayBytes = output.step * output.rows;

	unsigned char *d_input, *d_output;

	//Allocate device memory
	SAFE_CALL(cudaMalloc<unsigned char>(&d_input,colorBytes),"CUDA Malloc Failed");
	SAFE_CALL(cudaMalloc<unsigned char>(&d_output,grayBytes),"CUDA Malloc Failed");

	//Copy data from OpenCV input image to device memory
	SAFE_CALL(cudaMemcpy(d_input,input.ptr(),colorBytes,cudaMemcpyHostToDevice),"CUDA Memcpy Host To Device Failed");

	//Specify a reasonable block size
	const dim3 block(16,16);

	//Calculate grid size to cover the whole image
	const dim3 grid((input.cols + block.x - 1)/block.x, (input.rows + block.y - 1)/block.y);

	//Launch the color conversion kernel
	bgr_to_gray_kernel<<<grid,block>>>(d_input,d_output,input.cols,input.rows,input.step,output.step);

	//Synchronize to check for any kernel launch errors
	SAFE_CALL(cudaDeviceSynchronize(),"Kernel Launch Failed");

	//Copy back data from device memory to the OpenCV output image
	SAFE_CALL(cudaMemcpy(output.ptr(),d_output,grayBytes,cudaMemcpyDeviceToHost),"CUDA Memcpy Device To Host Failed");

	//Free the device memory
	SAFE_CALL(cudaFree(d_input),"CUDA Free Failed");
	SAFE_CALL(cudaFree(d_output),"CUDA Free Failed");
}

int main()
{
	std::string imagePath = "image.jpg";

	//Read input image from the disk
	cv::Mat input = cv::imread(imagePath, cv::IMREAD_COLOR);

	if(input.empty())
	{
		std::cout<<"Image Not Found!"<<std::endl;
		std::cin.get();
		return -1;
	}

	//Create output image
	cv::Mat output(input.rows,input.cols,CV_8UC1);

	//Call the wrapper function
	convert_to_gray(input,output);

	//Show the input and output
	cv::imshow("Input",input);
	cv::imshow("Output",output);
	
	//Wait for key press
	cv::waitKey();

	return 0;
}

Please help me choose between these methods so I don't try to do something impossible or too hard.

Thank you so much !

Hello,
I succeeded in converting a camera RGB input to grayscale with the 6th method.
It was a lot easier than I thought; this is what I did:
I first thought that I needed to add a lot of code to the Isaac OpenCV tutorial, but by reading the tutorial in detail I found this line: cv::cvtColor(image, gradient, cv::COLOR_RGB2GRAY);
So yes, the OpenCV tutorial already converts a Mat from RGB to gray, so I just needed to delete the lines related to the Sobel function and it was done!
Here is the full code:

EdgeDetector.cpp (just delete the Sobel lines and change the cvtColor param):

/*
Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.

NVIDIA CORPORATION and its licensors retain all intellectual property
and proprietary rights in and to this software, related documentation
and any modifications thereto. Any use, reproduction, disclosure or
distribution of this software and related documentation without an express
license agreement from NVIDIA CORPORATION is strictly prohibited.
*/
#include "EdgeDetector.hpp"

#include <memory>
#include <utility>

#include "engine/core/assert.hpp"

#include "engine/gems/image/conversions.hpp"
#include "engine/gems/image/utils.hpp"
#include "messages/camera.hpp"
#include "messages/math.hpp"

#include "opencv2/highgui.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/imgproc.hpp"

namespace isaac {
namespace opencv {

void EdgeDetector::start() {
  tickOnMessage(rx_input_image());
}

void EdgeDetector::tick() {
  //const int kernel_size = get_kernel_size();

  auto input = rx_input_image().getProto();
  ImageConstView3ub input_image;
  bool ok = FromProto(input.getImage(), rx_input_image().buffers(), input_image);
  ASSERT(ok, "Failed to deserialize the input image");

  const size_t rows = input_image.rows();
  const size_t cols = input_image.cols();
  Image1ub output_image(rows, cols);
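  // Wrap the Isaac input/output buffers in OpenCV Mat headers (no copy); the cvtColor call
  // below writes the grayscale result straight into output_image.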
  cv::Mat image =
      cv::Mat(rows, cols, CV_8UC3,
              const_cast<void*>(static_cast<const void*>(input_image.data().pointer())));
  cv::Mat gradient = cv::Mat(rows, cols, CV_8U, static_cast<void*>(output_image.data().pointer()));
  cv::cvtColor(image, gradient, cv::COLOR_RGB2GRAY);

  auto output = tx_output_image().initProto();

  auto in_pinhole = input.getPinhole();
  const Vector2d focal = FromProto(in_pinhole.getFocal());
  const Vector2d center = FromProto(in_pinhole.getCenter());

  auto out_pinhole = output.initPinhole();
  ToProto(focal, out_pinhole.getFocal());
  ToProto(center, out_pinhole.getCenter());
  out_pinhole.setCols(in_pinhole.getCols());
  out_pinhole.setRows(in_pinhole.getRows());

  if (input.hasDistortion()) {
    auto in_distortion = input.getDistortion();
    auto out_distortion = output.initDistortion();
    auto distortion_coeff = FromProto(in_distortion.getCoefficients());
    out_distortion.setModel(in_distortion.getModel());
    ToProto(distortion_coeff, out_distortion.getCoefficients());
  }

  output.setColorSpace(ColorCameraProto::ColorSpace::GRAYSCALE);
  ToProto(std::move(output_image), output.getImage(), tx_output_image().buffers());
  tx_output_image().publish(rx_input_image().acqtime());
}

}  // namespace opencv
}  // namespace isaac

EdgeDetector.hpp (nothing to change):

/*
Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.

NVIDIA CORPORATION and its licensors retain all intellectual property
and proprietary rights in and to this software, related documentation
and any modifications thereto. Any use, reproduction, disclosure or
distribution of this software and related documentation without an express
license agreement from NVIDIA CORPORATION is strictly prohibited.
*/
#pragma once

#include "engine/alice/alice_codelet.hpp"
#include "engine/core/image/image.hpp"
#include "messages/camera.capnp.h"

namespace isaac {
namespace opencv {

// This codelet computes the Sobel Gradient of an image using OpenCV
class EdgeDetector : public alice::Codelet {
 public:
  void start() override;
  void tick() override;

  // Input image and model parameters
  ISAAC_PROTO_RX(ColorCameraProto, input_image);
  // Output image and model parameters
  ISAAC_PROTO_TX(ColorCameraProto, output_image);
  // OpenCV kernel size for Sobel Operator, per OpenCV should be 1, 3, 5 or 7
  ISAAC_PARAM(int, kernel_size, 1);
};

}  // namespace opencv
}  // namespace isaac

ISAAC_ALICE_REGISTER_CODELET(isaac::opencv::EdgeDetector);

opencv_edge_detection.app.json (just added a pipeline):

{
  "name": "opencv_edge_detection",
  "modules": [
      "//apps/tutorials/opencv_edge_detection:edge_detector",
      "sensors:v4l2_camera",
      "viewers",
      "deepstream"
  ],
  "graph": {
       "nodes": [
         {
           "name": "camera",
           "components": [
             {
               "name": "MessageLedger",
               "type": "isaac::alice::MessageLedger"
             },
             {
               "name": "pipeline",
               "type": "isaac::deepstream::Pipeline"
             }
           ]
         },
         {
           "name": "edge_detector",
           "components": [
             {
               "name": "MessageLedger",
               "type": "isaac::alice::MessageLedger"
             },
             {
               "name": "EdgeDetector",
               "type": "isaac::opencv::EdgeDetector"
             }
           ]
         },
         {
           "name": "viewer",
           "components": [
             {
               "name": "MessageLedger",
               "type": "isaac::alice::MessageLedger"
             },
             {
               "name": "ColorCameraViewer",
               "type": "isaac::viewers::ColorCameraViewer"
             }
           ]
         }
       ],
       "edges": [
         {
           "source": "camera/pipeline/acquired_image",
           "target": "edge_detector/EdgeDetector/input_image"
         },
         {
           "source": "edge_detector/EdgeDetector/output_image",
           "target": "viewer/ColorCameraViewer/color_listener"
         }
       ]
  },
  "config": {
    "camera": {
      "pipeline": {
        "pipeline": "gst-launch-1.0 -v v4l2src device=/dev/video0  ! avdec_mjpeg ! videoconvert ! video/x-raw,format=RGB,height=480,wight=1280,framerate=30/1 ! appsink name = acquired_image "
       }
    },
    "websight": {
      "WebsightServer": {
        "port": 3000,
        "ui_config": {
          "windows": {
            "Edge Detection": {
              "renderer": "2d",
              "dims": {
                "width": 640,
                "height": 480
              },
              "channels": [
                {
                  "name": "opencv_edge_detection/viewer/ColorCameraViewer/Color"
                }
              ]
            }
          }
        }
      }
    }
  }
}

Thanks to everyone for reading me !

Hi there,

GStreamer in Isaac doesn't support grayscale output. GStreamer itself supports it, but Isaac only accepts CPU/RGB.

In Isaac there is a ColorSpaceConverter codelet that can convert RGB to grayscale:
https://docs.nvidia.com/isaac/isaac/doc/doc/component_api.html?highlight=colorspaceconverter#isaac-perception-colorspaceconverter

My code is working perfectly but thank you!

Thank you so much, Planktos, for sharing your solution! I am glad you found a way with OpenCV. I hope it helps other developers too. Let us know if you think this should be included in the documentation.