Stream from Camera, edit frame and save as mp4

Hi, I am trying to make an efficient application with a few simple steps.

First, frames are captured from a CSI-2 camera; then I need to draw on each frame (ideally on an OpenCV GpuMat) and finally encode the result as an MP4. The length of the video is not fixed: I run the whole thing as a process that is started and stopped by a command.

I’ve been reading about the L4T Multimedia API and VisionWorks, specifically these examples:

  • 03_video_cuda_enc (L4T).
  • 10_camera_recording (L4T).
  • nvx_sample_nvgstcamera_capture (VisionWorks).

But it is not very clear to me how to put the whole process together. Initially I thought I would use part of the code from nvx_sample_nvgstcamera_capture to get the frame, transfer it to OpenCV (is there a way to avoid a transfer to the CPU?), and finally use 03_video_cuda_enc for encoding (I have not found anything inside VisionWorks to create a video).

Can someone help me take the first steps? I am somewhat lost among so many libraries, and I do not know whether such an example already exists or there is something similar (I cannot believe that no one has had a similar need).

Thanks, Adrià

Hi,
There are samples for mapping an NvBuffer to cv::cuda::GpuMat in GStreamer and jetson_multimedia_api. Please take a look at
Nano not using GPU with gstreamer/python. Slow FPS, dropped frames - #8 by DaneLLL
LibArgus EGLStream to nvivafilter - #14 by DaneLLL

With that approach you can apply CUDA filters to the buffers directly. Please check and give it a try.
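
For orientation, the mapping pattern in those threads looks roughly like this (a minimal sketch, assuming an RGBA pitch-linear NvBuffer whose dmabuf fd and EGL display you already have, and a current CUDA context; error checking omitted):

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <nvbuf_utils.h>
#include <cuda.h>
#include <cudaEGL.h>
#include <opencv2/core/cuda.hpp>

// Map an NvBuffer (dmabuf fd) into CUDA and wrap it as cv::cuda::GpuMat,
// so CUDA/OpenCV filters run on the frame without copying it to the CPU.
void process_on_gpu(EGLDisplay egl_display, int dmabuf_fd, int width, int height)
{
    EGLImageKHR egl_image = NvEGLImageFromFd(egl_display, dmabuf_fd);

    CUgraphicsResource resource = NULL;
    cuGraphicsEGLRegisterImage(&resource, egl_image, CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);

    CUeglFrame egl_frame;
    cuGraphicsResourceGetMappedEglFrame(&egl_frame, resource, 0, 0);
    cuCtxSynchronize();

    // Wrap the mapped RGBA surface without a copy
    cv::cuda::GpuMat d_img(height, width, CV_8UC4, egl_frame.frame.pPitch[0], egl_frame.pitch);
    // ... run cv::cuda::* filters or custom CUDA kernels on d_img here ...

    cuCtxSynchronize();
    cuGraphicsUnregisterResource(resource);
    NvDestroyEGLImage(egl_display, egl_image);
}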

Thank you very much for the reply.

The purpose of my project is to record a game and overlay the score on top of it; the video is captured through nvcamera and saved as MP4.

For now I’m trying to build the whole pipeline with VisionWorks, but I’m having a lot of problems… I have based it on the example /usr/share/visionworks/sources/samples/nvgstcamera_capture:

1 - First I get the frame from the std::unique_ptr<ovxio::FrameSource>; I have also tried to force RGB instead of RGBX, but that has not worked.
2 - I convert the color space from RGBX to RGB (vxuColorConvert).
3 - I select the area where I want the scoreboard, take that region and map it to an OpenCV Mat (VXImageToCVMatMapper).
4 - I paste the scoreboard PNG on top of that crop.
5 - I write some text.
6 - I convert RGB back to RGBX.

Code:

/*
# Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above copyright
#    notice, this list of conditions and the following disclaimer in the
#    documentation and/or other materials provided with the distribution.
#  * Neither the name of NVIDIA CORPORATION nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/

#include <iostream>
#include <sstream>
#include <iomanip>
#include <memory>
#include <opencv2/opencv.hpp>
#include <opencv2/freetype.hpp>

#include <VX/vx.h>
#include <NVX/nvx_timer.hpp>

#include "OVX/FrameSourceOVX.hpp"
#include "OVX/RenderOVX.hpp"
#include "NVX/Application.hpp"
#include "OVX/UtilityOVX.hpp"
#include "NVX/nvx_opencv_interop.hpp"
//#include <opencv2/core.hpp>
//#include<opencv2/imgproc.hpp>
//#include <opencv2/highgui.hpp>

struct EventData
{
    EventData(): alive(true), pause(false) {}

    bool alive;
    bool pause;
};

static void keyboardEventCallback(void* context, vx_char key, vx_uint32 /*x*/, vx_uint32 /*y*/)
{
    EventData* eventData = static_cast<EventData*>(context);
    if (key == 27) // escape
    {
        eventData->alive = false;
    }
    else if (key == 32)
    {
        eventData->pause = !eventData->pause;
    }
}

static void parseResolution(const std::string & resolution, ovxio::FrameSource::Parameters & config)
{
    std::istringstream stream(resolution);
    std::string item;
    vx_uint32 * frameParams[] = { &config.frameWidth, &config.frameHeight };
    vx_uint32 index = 0;

    while (std::getline(stream, item, 'x'))
    {
        std::stringstream ss(item);
        ss >> *frameParams[index++];
    }
}

// Helper: overlay "counter" onto "bg" wherever "counter_mask" is non-zero (not called from main yet)
void draw_transparency(
    cv::Mat& bg, 
    cv::Mat counter,
    cv::Mat* counter_rgb,
    cv::Mat counter_mask
){
    //cv::merge(counter_rgb, 3, counter);
    counter.copyTo(bg, counter_mask);
}

//
// main - Application entry point
//

int main(int argc, char** argv)
{
    nvxio::Application &app = nvxio::Application::get();
    ovxio::printVersionInfo();

    ovxio::FrameSource::Parameters config;
    config.frameWidth = 1280;
    config.frameHeight = 720;
    //config.format = VX_DF_IMAGE_RGB;

    //
    // Parse command line arguments
    //

    std::string resolution = "1280x720", input = "device:///nvcamera";

    app.setDescription("This sample captures frames from NVIDIA GStreamer camera");
    app.addOption('r', "resolution", "Input frame resolution", nvxio::OptionHandler::oneOf(&resolution,
        { "2592x1944", "2592x1458", "1280x720", "640x480" }));

    app.init(argc, argv);

    parseResolution(resolution, config);

    //
    // Create OpenVX context
    //

    ovxio::ContextGuard context;

    //
    // Messages generated by the OpenVX framework will be processed by ovxio::stdoutLogCallback
    //

    vxRegisterLogCallback(context, &ovxio::stdoutLogCallback, vx_false_e);

    //
    // Create a Frame Source
    //

    std::unique_ptr<ovxio::FrameSource> source(ovxio::createDefaultFrameSource(context, input));
    if (!source)
    {
        std::cout << "Error: cannot open source!" << std::endl;
        return nvxio::Application::APP_EXIT_CODE_NO_RESOURCE;
    }

    if (!source->setConfiguration(config))
    {
        std::cout << "Error: cannot setup configuration the framesource!" << std::endl;
        return nvxio::Application::APP_EXIT_CODE_INVALID_VALUE;
    }

    if (!source->open())
    {
        std::cout << "Error: cannot open source!" << std::endl;
        return nvxio::Application::APP_EXIT_CODE_NO_RESOURCE;
    }

    config = source->getConfiguration();

    //
    // Create a Render
    //

    //std::unique_ptr<ovxio::Render> render(ovxio::createVideoRender(context, "~/test.avi", config.frameWidth, config.frameHeight));
    std::unique_ptr<ovxio::Render> render(ovxio::createDefaultRender(
            context, "NVIDIA GStreamer Camera Capture Sample", config.frameWidth, config.frameHeight));
    if (!render)
    {
        std::cout << "Error: Cannot open default render!" << std::endl;
        return nvxio::Application::APP_EXIT_CODE_NO_RENDER;
    }

    EventData eventData;
    render->setOnKeyboardEventCallback(keyboardEventCallback, &eventData);

    vx_image frame = vxCreateImage(context, config.frameWidth,
                                   config.frameHeight, config.format);
    //nvx_cv::VXImageToCVMatMapper to_cv(frame);

    NVXIO_CHECK_REFERENCE(frame);

    nvx::Timer totalTimer;
    totalTimer.tic();

    """
        SCOREBOARD
    """
    cv::Mat counter_orig = cv::imread("/home/totolia/nvgstcamera_capture/paddle_scoreboard.png", cv::IMREAD_UNCHANGED);
    cv::Mat counter;
    cv::cvtColor(counter_orig, counter, cv::COLOR_BGRA2RGBA); counter_orig.release();

    std::vector<cv::Mat> counter_layers;
    cv::split(counter, counter_layers);
    std::vector<cv::Mat> counter_rgb_list = {
        counter_layers[0], 
        counter_layers[1], 
        counter_layers[2]
    };
    cv::Mat counter_rgb; 
    cv::merge(counter_rgb_list, counter_rgb); counter_rgb_list.clear();
    cv::Mat counter_mask = counter_layers[3]; counter_layers.clear();
    
    """
        FONT
    """
    cv::Ptr<cv::freetype::FreeType2> ft_bold;
    ft_bold = cv::freetype::createFreeType2();
    ft_bold->loadFontData("/home/totolia/nvgstcamera_capture/fonts/Roboto-Bold.ttf", 0);

    cv::Ptr<cv::freetype::FreeType2> ft_light;
    ft_light = cv::freetype::createFreeType2();
    ft_light->loadFontData("/home/totolia/nvgstcamera_capture/fonts/Roboto-Light.ttf", 0);

    cv::Ptr<cv::freetype::FreeType2> ft_thin;
    ft_thin = cv::freetype::createFreeType2();
    ft_thin->loadFontData("/home/totolia/nvgstcamera_capture/fonts/Roboto-Thin.ttf", 0);

    cv::Ptr<cv::freetype::FreeType2> ft_medium;
    ft_medium = cv::freetype::createFreeType2();
    ft_medium->loadFontData("/home/totolia/nvgstcamera_capture/fonts/Roboto-Medium.ttf", 0);

    // RGB working copy used for color conversion and CPU drawing
    vx_image frame_cpy = vxCreateImage(context, config.frameWidth, config.frameHeight, VX_DF_IMAGE_RGB);
    while (eventData.alive)
    {
        ovxio::FrameSource::FrameStatus status = ovxio::FrameSource::OK;
        if (!eventData.pause)
        {
            status = source->fetch(frame);
        }

        switch(status)
        {
        case ovxio::FrameSource::OK:
            {
                // RGBX -> RGB
                vxuColorConvert(context, frame, frame_cpy);

                // Compute ROI
                int aux_w1 = ((int)config.frameWidth / 2) - ((int)counter.cols / 2);
                int aux_w2 = aux_w1 + counter.cols;
                vx_rectangle_t rect = {(vx_uint32)aux_w1, 0u, (vx_uint32)aux_w2, (vx_uint32)counter.rows};
                nvx_cv::VXImageToCVMatMapper to_cv(frame_cpy, 0, &rect, VX_READ_AND_WRITE, VX_MEMORY_TYPE_HOST);

                // Paste counter -> ROI
                cv::Mat draw_counter = to_cv.getMat();
                counter_rgb.copyTo(draw_counter, counter_mask);                

                // Draw on ROI
                ft_medium->putText(
                    draw_counter, 
                    "1 - 1",
                    cv::Point(100, 150),
                    5, 
                    cv::Scalar(0, 255, 0, 255), 
                    -1, 
                    8,
                    false
                );
                
                // RGB -> RGBX
                vxuColorConvert(context, frame_cpy, frame);


                render->putImage(frame);

                if (!render->flush())
                    eventData.alive = false;
            }
            break;
        case ovxio::FrameSource::TIMEOUT:
            {
                // Do nothing
            }
            break;
        case ovxio::FrameSource::CLOSED:
            eventData.alive = false;
            break;
        }
    }

    //
    // Release all objects
    //
    vxReleaseImage(&frame);
    vxReleaseImage(&frame_cpy);

    return nvxio::Application::APP_EXIT_CODE_SUCCESS;
}

I have also had to make some changes to the Makefile to incorporate the OpenCV library and the new FreeType version.

Code:

# Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above copyright
#    notice, this list of conditions and the following disclaimer in the
#    documentation and/or other materials provided with the distribution.
#  * Neither the name of NVIDIA CORPORATION nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

# OS info
OSLOWER := $(shell uname -s 2>/dev/null | tr "[:upper:]" "[:lower:]")

OS_ARCH := $(shell uname -m | sed -e "s/i386/i686/")

# Take command line flags that override any of these settings
ifeq ($(i386),1)
	OS_ARCH := i686
endif

ifeq ($(x86_64),1)
	OS_ARCH := x86_64
endif

ifeq ($(ARMv7),1)
	OS_ARCH := armv7l
endif

ifeq ($(ARMv8),1)
	OS_ARCH := aarch64
endif

CXXFLAGS += -std=c++0x $(shell pkg-config --cflags opencv4)
CXXFLAGS += -DCUDA_API_PER_THREAD_DEFAULT_STREAM -DUSE_GUI=1 -DUSE_GLFW=1 -DUSE_GLES=1 -DUSE_GSTREAMER=1 -DUSE_NVGSTCAMERA=1 -DUSE_GSTREAMER_OMX=1

ifneq ($(VIBRANTE_TOOLCHAIN_SYSROOT),)
	CCFLAGS += --sysroot="$(VIBRANTE_TOOLCHAIN_SYSROOT)"
endif

# Configuration-specific build flags
ifeq ($(dbg),1)
	CCFLAGS += -g
	TARGET := debug
else
	CCFLAGS += -O3 -DNDEBUG
	TARGET := release
endif

# check visionworks availability
VISION_WORKS_EXISTS := $(shell pkg-config --exists visionworks && echo "1" || echo "0")
ifeq ($(VISION_WORKS_EXISTS), 0)
$(error You must put directory containing visionworks.pc to the PKG_CONFIG_PATH environment variable)
endif

EXTERNAL_CFLAGS :=
EXTERNAL_LIBS :=

EXTERNAL_CFLAGS += $(shell pkg-config --cflags cudart-10.2)
EXTERNAL_LIBS += $(shell pkg-config --libs cudart-10.2)
EXTERNAL_CFLAGS += $(shell pkg-config --cflags visionworks)
EXTERNAL_LIBS += $(shell pkg-config --libs visionworks)

EXTERNAL_CFLAGS += $(shell pkg-config --cflags opencv4)
EXTERNAL_LIBS += $(shell pkg-config --libs opencv4)

EXTERNAL_CFLAGS += $(shell pkg-config --cflags freetype2)
EXTERNAL_LIBS += $(shell pkg-config --libs freetype2)



EIGEN_CFLAGS := -I../../3rdparty/eigen

NVXIO_CFLAGS := -I../../nvxio/include -I../../nvxio/src/ -I../../nvxio/src/NVX/
OVXIO_LIBS := ../../libs/$(OS_ARCH)/$(OSLOWER)/$(TARGET)$(if $(abi),/$(abi))/libovx.a

INCLUDES := $(EXTERNAL_CFLAGS)
INCLUDES += 
INCLUDES += $(NVXIO_CFLAGS)
INCLUDES +=  -I../../3rdparty/opengl -I../../3rdparty/glfw3/include  #-I../../3rdparty/freetype/include
INCLUDES += $(EIGEN_CFLAGS)

# to ensure correct linkage with NVIDIA GLES if MESA version is installed
LIBRARIES += -L"$(PKG_CONFIG_SYSROOT_DIR)/usr/lib"

ifneq ($(VIBRANTE_TOOLCHAIN_SYSROOT),)
	LIBRARIES += -L"$(VIBRANTE_TOOLCHAIN_SYSROOT)/usr/lib"
endif

LIBRARIES += $(OVXIO_LIBS)
#../../3rdparty/freetype/libs/libfreetype.a
LIBRARIES +=  ../../3rdparty/glfw3/libs/libglfw3.a /usr/lib/aarch64-linux-gnu/tegra-egl/libGLESv2_nvidia.so.2 -L/usr/lib/aarch64-linux-gnu -lEGL $(shell pkg-config --libs xrandr xi xxf86vm x11)
LIBRARIES +=  $(shell pkg-config --libs gstreamer-base-1.0 gstreamer-pbutils-1.0 gstreamer-app-1.0)
LIBRARIES +=  /usr/lib/aarch64-linux-gnu/tegra/libcuda.so
LIBRARIES += $(EXTERNAL_LIBS)
LIBRARIES += 

LDFLAGS += -Wl,--allow-shlib-undefined -pthread

ifneq ($(PKG_CONFIG_SYSROOT_DIR),)
	ifeq ($(ARMv7),1)
		LDFLAGS += -Wl,-rpath-link="$(PKG_CONFIG_SYSROOT_DIR)/lib/arm-linux-gnueabihf"
		LDFLAGS += -Wl,-rpath-link="$(PKG_CONFIG_SYSROOT_DIR)/usr/lib"
		LDFLAGS += -Wl,-rpath-link="$(PKG_CONFIG_SYSROOT_DIR)/usr/lib/arm-linux-gnueabihf"
	endif
endif

# show libraries used by linker in debug mode
ifeq ($(dbg),1)
	LDFLAGS += -Wl,--trace
endif

CUDA_LIB_PATH := $(subst -L$(PKG_CONFIG_SYSROOT_DIR),,$(shell pkg-config --libs-only-L cudart-10.2))
LDFLAGS += -Wl,-rpath=$(CUDA_LIB_PATH)

CPP_FILES := $(wildcard *.cpp)
OBJ_DIR := obj/$(TARGET)
OBJ_FILES_CPP := $(addprefix $(OBJ_DIR)/,$(notdir $(CPP_FILES:.cpp=.o)))

OUTPUT_DIR := ../../bin/$(OS_ARCH)/$(OSLOWER)/$(TARGET)$(if $(abi),/$(abi))

################################################################################

# Target rules
all: build

build: $(OUTPUT_DIR)/nvx_sample_nvgstcamera_capture

$(OBJ_DIR):
	mkdir -p $(OBJ_DIR)

$(OBJ_DIR)/%.o: %.cpp | $(OBJ_DIR)
	$(CXX) $(INCLUDES) $(CCFLAGS) $(CXXFLAGS) -o $@ -c $<

$(OUTPUT_DIR):
	mkdir -p $(OUTPUT_DIR)

$(OUTPUT_DIR)/nvx_sample_nvgstcamera_capture: $(OBJ_FILES_CPP) $(OVXIO_LIBS) | $(OUTPUT_DIR)
	$(CXX) $(LDFLAGS) $(CCFLAGS) $(CXXFLAGS) -o $@ $^ $(LIBRARIES)

run: build
	./$(OUTPUT_DIR)/nvx_sample_nvgstcamera_capture

clean:
	rm -f $(OBJ_FILES_CPP)
	rm -f $(OUTPUT_DIR)/nvx_sample_nvgstcamera_capture

$(OVXIO_LIBS):
	+@$(MAKE) -C ../../nvxio

Is this the proper way to do it? For now I only get errors, or messages similar to: “an image can be accessed only once at a time”.

Many thanks, Adrià

Hi,
Please check if you can use VPI for your use-case. The document is at:
VPI - Vision Programming Interface: Main Page

We have migrated most functions from VisionWorks to VPI. If the required functions are in VPI, we would suggest using VPI. After installation, you will see the samples in

/opt/nvidia/vpi1/

If you don’t need to process the whole frame and only want to put some GUI on top, you may run something like:

$ gst-launch-1.0 nvarguscamerasrc ! nvvidconv ! nvivafilter ! nvoverlaysink

And use Cairo APIs in the nvivafilter plugin. You may refer to
Tx2-4g r32.3.1 nvivafilter performance - #16 by DaneLLL
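
As a rough illustration of that idea, drawing with Cairo onto a CPU-mapped RGBA surface inside the plugin could look like the sketch below (the function name, arguments and layout values are made up for the example; the real hook signatures come from the nvsample_cudaprocess sources):

#include <cairo/cairo.h>
#include <cstdio>

// Illustrative only: paint a semi-transparent scoreboard box and a score
// string onto a mapped frame. "mapped", width/height/pitch are whatever the
// plugin hands to its pre/post-process hook; pitch must satisfy Cairo's
// stride requirements (see cairo_format_stride_for_width).
static void draw_score(unsigned char* mapped, int width, int height, int pitch,
                       int score_a, int score_b)
{
    cairo_surface_t* surface = cairo_image_surface_create_for_data(
        mapped, CAIRO_FORMAT_ARGB32, width, height, pitch);
    cairo_t* cr = cairo_create(surface);

    // Semi-transparent background box for the scoreboard
    cairo_set_source_rgba(cr, 0.0, 0.0, 0.0, 0.5);
    cairo_rectangle(cr, width / 2 - 120, 20, 240, 80);
    cairo_fill(cr);

    // Score text
    char text[32];
    std::snprintf(text, sizeof(text), "%d - %d", score_a, score_b);
    cairo_select_font_face(cr, "Sans", CAIRO_FONT_SLANT_NORMAL, CAIRO_FONT_WEIGHT_BOLD);
    cairo_set_font_size(cr, 48.0);
    cairo_set_source_rgba(cr, 1.0, 1.0, 1.0, 1.0);
    cairo_move_to(cr, width / 2 - 70, 78);
    cairo_show_text(cr, text);

    cairo_destroy(cr);
    cairo_surface_destroy(surface);
}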

Okay, what I still don’t quite understand is how to:

  • Read from the camera; there seems to be nothing equivalent in VPI (should I continue using VisionWorks for that?).
  • Convert a vx_image to a VPI image (I don’t see any method for it in the documentation).

I will also look at creating a filter; what is not clear to me is whether the filter itself can hold state (I need to be able to receive the score information from another process).

Edit:
I tried to run the nvsample_cudaprocess example. After several hours I have gotten it to work.
I have managed to execute the cuda-process; I had to modify the image format to CU_EGL_COLOR_FORMAT_ABGR, among other things (what I do not understand from the documentation is that for ABGR it says the byte order is RGBA, that is to say the other way around).

gst-launch-1.0 nvarguscamerasrc ! \
  'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, \
  format=(string)NV12, framerate=(fraction)20/1' ! \
  nvivafilter cuda-process=true \
  customer-lib-name="/home/totolia/nvsample_cudaprocess_src_2/nvsample_cudaprocess/libnvsample_cudaprocess.so" ! \
  'video/x-raw(memory:NVMM), format=RGBA' ! \
  nvvidconv ! omxh264enc ! h264parse ! mp4mux ! \
  filesink location="/home/totolia/testout.mp4" -e

But when I try to use the same pipeline with post-process=true and try to use Cairo to draw, I run into the problem that my pipeline is RGBA while Cairo is ARGB (pipeline):

gst-launch-1.0 nvarguscamerasrc ! \
  'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, \
  format=(string)NV12, framerate=(fraction)20/1' ! \
  nvvidconv ! \
  nvivafilter post-process=true \
  customer-lib-name="/home/totolia/nvsample_cudaprocess_src_2/nvsample_cudaprocess//libnvsample_cudaprocess.so" ! \
  'video/x-raw(memory:NVMM), format=RGBA' ! \
  nvvidconv ! omxh264enc ! h264parse ! mp4mux ! \
  filesink location="/home/totolia/testout.mp4" -e

I am convinced that this second pipeline is not quite correct, since I use nvvidconv twice.

I can’t deny my disappointment with the lack of information on NVIDIA frameworks. Understanding anything is almost an entire reverse-engineering project; it took me about an hour just to find how to download an example for nvivafilter.

Edit 2:
I found the problem with Cairo, but I don’t know how to fix it.
NVIDIA gives me an array of chars in RGBA byte order; the problem is that Cairo treats that array as 32-bit values, and the platform is little-endian.

So I have an RGBA buffer, Cairo reads it as ARGB (little-endian), and in the end the channel order comes out as BGRA.
How can I solve this problem without copying the data?
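
To double-check that byte-order reasoning, a tiny standalone test:

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    // One pixel exactly as the buffer delivers it: R, G, B, A bytes.
    unsigned char rgba[4] = { 0x11, 0x22, 0x33, 0xFF };
    uint32_t word;
    std::memcpy(&word, rgba, sizeof(word));
    // On the little-endian Jetson this prints 0xFF332211; Cairo's ARGB32 layout
    // decodes that as A=0xFF, R=0x33, G=0x22, B=0x11 -> R and B appear swapped.
    std::printf("0x%08X\n", (unsigned)word);
    return 0;
}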

I had only tried it quickly and it surely requires some extra work, but it might help, so you may give it a try.

Yes, I have used this same pipeline, but the same RGBA/ARGB problems are there.
The error is the following:
The frames that reach post-processing are laid out as unsigned chars in RGBA order.
Since the Jetson Nano is little-endian, if you cast to uint32 the order within each pixel is ABGR.
And when Cairo reads it, it interprets it as ARGB.

I only see two options: do a channel swap before handing the data to Cairo, or draw in Cairo in BGR.
The problem is that in the first case you have extra memory copies, and the second is somewhat dirty…
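
(A channel swap over the mapped frame could at least be done in place, something like the sketch below, though it still touches every pixel; names are illustrative only.)

#include <utility>

// Swap the R and B bytes of every pixel in a mapped RGBA frame, in place,
// so that Cairo's ARGB32 view sees the channels in the expected order.
static void swap_r_and_b(unsigned char* base, int width, int height, int pitch)
{
    for (int y = 0; y < height; ++y) {
        unsigned char* row = base + y * pitch;
        for (int x = 0; x < width; ++x) {
            std::swap(row[4 * x + 0], row[4 * x + 2]);
        }
    }
}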

Hi,
If you set

... ! nvivafilter ! 'video/x-raw(memory:NVMM),format=RGBA' ! ...

It should map to CAIRO_FORMAT_ARGB32, the same as demonstrated in the patch.

Or do you have a different observation?

No, I have used the exact same code and that same call.
It’s not that it doesn’t work; it’s that the order of the channels is reversed. I have managed to draw everything I wanted, but writing in BGR. That is to say:

red = 1.0
green = 0.0
blue = 0.0
cairo_set_source_rgba (cairo_context, blue, green, red, 1.0);

Functionally I have no problems, but obviously it would be more elegant to use the proper format.

Hi,
Thanks for the clarification. We understand the point now. On Jetson platforms the data format is RGBA, while for 32-bit surfaces Cairo supports ARGB:
Image Surfaces: Cairo: A Vector Graphics Library

So the channel order does not match. Although it doesn’t match, when using the APIs we can match it manually.