Visionworks cannot fetch the frame from /dev/video0 (tc358840)

Hello everybody,

We have a Jetson TX1 dev board with the tc358840 HDMI-to-CSI bridge (from ZHAW); L4T 24.1 is installed with the correct drivers.

The output of v4l2-ctl -V is:
Width/Height: 1920/1080
Pixel Format: ‘YUYV’
Field: None
Bytes per line: 3840
Size Image: 4147200
Colorspace: Broadcast NTSC/PAL (SMPTE170M/ITU601)

We have tried both a camera and a standard source at 1920x1080@30fps.

GStreamer gives correct output from the command line (gst-launch-1.0 v4l2src … with appropriate parameters).

The VisionWorks 1.4.3 sample application nvx_sample_player does not work with /dev/video0. It starts and breaks as soon as the first frame is fetched.

It appears the reason is that the default image resolution reported by the source (obtained with the getConfiguration() function) is 320x200. Configuring the source to the correct parameters (1920x1080) gives an error, since the source->fetch(frame) function again sets the frame to 320x200.

Maybe the problem is with the formats that VisionWorks supports.

Has anyone tried this with VisionWorks?

Thanks
Voya

Hi Voya,

Could you run the default VisionWorks sample first?

./bin/aarch64/linux/release/nvx_sample_player --source="device:///v4l2?index=0"

Or

./bin/aarch64/linux/release/nvx_sample_player --source="device:///v4l2?index=1"

Hi AastaLLL,

thanks for the response.

We tried the sample application as you suggested with a USB source and it works well. It also worked well with the NVIDIA CSI camera. The problem appears only with the HDMI2CSI board, and the application behaves as I described above: it starts at 320x200 and breaks as soon as the first frame is fetched. It breaks in such a way that the application cannot be killed with the kill command.

We are trying to build a gstreamer-1.0 appsink component that will fetch frames from the HDMI2CSI input.

Hi,

Could you try a v4l2-ctl capture first?

v4l2-ctl -d /dev/video1 --set-fmt-video=width=1920,height=1080,pixelformat=YUYV --stream-count=1 --stream-to=ov.raw

Hi,

I tried to run the sample player with the HDMI2CSI board (a 2160p source was connected to HDMI-In A) and used the following command:

DISPLAY=:0 ./nvx_sample_player --source="device:///v4l2?index=0"

The application did run, but only at 320x200.

I used the prebuilt image for L4T 24.2.1 and corresponding VisionWorks 1.5.3.

Since the tc358840 is not a camera but an HDMI-to-CSI bridge, it cannot capture at arbitrary resolutions, only those provided by the HDMI source. Is it possible to just fix the resolution to e.g. 1080p?

Unfortunately I cannot compile the samples, otherwise I could test this.

I get this error:

$ cd /home/ubuntu/VisionWorks-1.5-Samples
$ make
make[1]: Entering directory '/home/ubuntu/VisionWorks-1.5-Samples/samples/nvgstcamera_capture'
make[1]: Nothing to be done for 'all'.
make[1]: Leaving directory '/home/ubuntu/VisionWorks-1.5-Samples/samples/nvgstcamera_capture'
make[1]: Entering directory '/home/ubuntu/VisionWorks-1.5-Samples/samples/object_tracker_nvxcu'
make[1]: Nothing to be done for 'all'.
make[1]: Leaving directory '/home/ubuntu/VisionWorks-1.5-Samples/samples/object_tracker_nvxcu'
make[1]: Entering directory '/home/ubuntu/VisionWorks-1.5-Samples/samples/opencv_npp_interop'
g++ -Wl,--allow-shlib-undefined -pthread -Wl,-rpath=/usr/local/cuda-8.0/lib64 -o ../../bin/aarch64/linux/release/nvx_sample_opencv_npp_interop obj/release/main_opencv_npp_interop.o obj/release/alpha_comp_node.o -L"/usr/lib"  -L/usr/local/cuda-8.0/lib64 -lcudart -L/usr/local/cuda-8.0/lib64 -lnppi -lnppc -L/usr/local/cuda-8.0/lib64 -lopencv_calib3d -lopencv_contrib -lopencv_core -lopencv_features2d -lopencv_flann -lopencv_gpu -lopencv_highgui -lopencv_imgproc -lopencv_legacy -lopencv_ml -lopencv_objdetect -lopencv_photo -lopencv_stitching -lopencv_superres -lopencv_ts -lopencv_video -lopencv_videostab -lopencv_detection_based_tracker -lopencv_esm_panorama -lopencv_facedetect -lopencv_imuvstab -lopencv_tegra -lopencv_vstab -lcufft -lnpps -lnppi -lnppc -lcudart -latomic -ltbb -lrt -lpthread -lm -ldl -lvisionworks -lnvxio 
obj/release/main_opencv_npp_interop.o: In function `main':
main_opencv_npp_interop.cpp:(.text.startup+0x258): undefined reference to `cv::imread(std::string const&, int)'
main_opencv_npp_interop.cpp:(.text.startup+0x268): undefined reference to `cv::imread(std::string const&, int)'
collect2: error: ld returned 1 exit status
Makefile:145: recipe for target '../../bin/aarch64/linux/release/nvx_sample_opencv_npp_interop' failed
make[1]: *** [../../bin/aarch64/linux/release/nvx_sample_opencv_npp_interop] Error 1
make[1]: Leaving directory '/home/ubuntu/VisionWorks-1.5-Samples/samples/opencv_npp_interop'
Makefile:31: recipe for target 'samples/opencv_npp_interop/Makefile.pr_build' failed
make: *** [samples/opencv_npp_interop/Makefile.pr_build] Error 2

Hi Kamm,

So we see similar behavior. Did the application show an image or just crash? In our case it freezes.

Considering the make error, it seems you didn't install OpenCV4Tegra, but for the sample player it doesn't matter.
You can try going to /home/ubuntu/VisionWorks-1.5-Samples/samples/player and running make there. It should build the sample into /home/ubuntu/VisionWorks-1.5-Samples/bin/aarch64/linux/release/nvx_sample_player

@AastaLLL: We've tried changing the controls through v4l2, but the same 320x200 issue occurred. As Kamm said, HDMI2CSI is a bridge interface, and v4l2-ctl shows the values for /dev/video0 are: 1920/1080, YUYV, and colorspace Broadcast NTSC/PAL (SMPTE170M/ITU601). GStreamer-1.0 works fine; maybe the VisionWorks createDefaultFrameSource(std::string uri) function somehow cannot handle this colorspace.

For me the application works (shows an image), and when I close it using the mouse (not Ctrl+C in the terminal), it exits without freezing.

Finally I found a way to compile the samples, following this thread: [1]

Also I found a way to capture 1920x1080 with the HDMI2CSI board. But it runs only at 4.5 FPS.

To replicate this, you need to get the source files for nvxio as described in [1]. If you don't have the build problem, I think you don't have to change the Makefile as described in [1].

I applied this change:

$ diff ./nvxio/src/FrameSource/GStreamer/GStreamerCameraFrameSourceImpl.cpp ../../libvisionworks-nvxio-1.5.3.71n/nvxio/src/FrameSource/GStreamer/GStreamerCameraFrameSourceImpl.cpp
233,234c233
< //    caps_string += "};";
<     caps_string += "}, width=1920, height=1080, framerate=30/1;";
---
>     caps_string += "};";

Then rebuild the package and install it:

sudo apt-get remove --purge libvisionworks-nvxio libvisionworks-nvxio-dev
dpkg-buildpackage -j4 -b
sudo dpkg -i ../libvisionworks-nvxio_1.5.3.71n_arm64.deb ../libvisionworks-nvxio-dev_1.5.3.71n_all.deb

The slow performance (4.5 FPS) is probably due to some (possibly unnecessary) format conversions hidden somewhere in nvxio. I also noticed that configuration.format appears to be 0 and is thus set to RGBA. With the tc358840 we capture UYVY, but setting that resulted in an error for me. It looks like nvxio does not like processing UYVY.

I executed sample_player as described above:

DISPLAY=:0 ./nvx_sample_player --source="device:///v4l2?index=0"
VisionWorks library info:
         VisionWorks version : 1.5.3
         OpenVX Standard version : 1.1.0

NO PROCESSING
Display Time : 6.18034 ms

NO PROCESSING
Display Time : 15.3313 ms

NO PROCESSING
Display Time : 337.748 ms

NO PROCESSING
Display Time : 337.048 ms

....

[1]
https://devtalk.nvidia.com/default/topic/966793/build-of-visionworks-samples-failed-because-of-undefined-references-to-nvxio/

Small update:
It appears that VisionWorks requires RGBA data. The conversion is done in GStreamer with videoconvert on the CPU, which is expensive.

(For comparison: the following pipeline converts from UYVY to RGBA with videoconvert and runs at around 3-4 FPS.)

gst-launch-1.0 v4l2src device=/dev/video2 io-mode=2 ! 'video/x-raw, width=1920, height=1080, framerate=30/1, format=UYVY' ! videoconvert ! 'video/x-raw, format=RGBA' ! nvoverlaysink sync=false

Here is a debug image of the GStreamer pipeline that is created and feeds into an appsink (into VisionWorks).

$ diff ./nvxio/src/FrameSource/GStreamer/GStreamerCameraFrameSourceImpl.cpp ../../libvisionworks-nvxio-1.5.3.71n/nvxio/src/FrameSource/GStreamer/GStreamerCameraFrameSourceImpl.cpp
58,59c58                                                                                                                                                                                       
<                  (params.format == NVXCU_DF_IMAGE_NONE)||
<                  (params.format == NVXCU_DF_IMAGE_UYVY));
---
>                  (params.format == NVXCU_DF_IMAGE_NONE));
94,95d92
<  
<     g_object_set(G_OBJECT(v4l2src), "io-mode", 2, nullptr);
145c142
<     stream << "video/x-raw, format=(string){UYVY}, width=[1," << configuration.frameWidth <<
---
>     stream << "video/x-raw, format=(string){RGB}, width=[1," << configuration.frameWidth <<
236,239c233
< //    caps_string += "};";
<     caps_string += "}, width=1920, height=1080, framerate=30/1;";
< 
<     std::cout << "caps_string" << caps_string << std::endl;
---
>     caps_string += "};";
265c259
< GST_DEBUG_BIN_TO_DOT_FILE(GST_BIN(pipeline), GST_DEBUG_GRAPH_SHOW_ALL, "pipeline");
---
>     // GST_DEBUG_BIN_TO_DOT_FILE(GST_BIN(pipeline), GST_DEBUG_GRAPH_SHOW_ALL, "pipeline");

Hi Voya and Kamm,

Thanks for your replies and the information; the problem is much clearer to us now.
We will post here once we have an update.

Thanks

Hi,

Could you paste the camera mode information from this command:

v4l2-ctl -d /dev/video1 --list-formats-ext

$ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
        Index       : 0
        Type        : Video Capture
        Pixel Format: 'YUYV'
        Name        : 

        Index       : 1
        Type        : Video Capture
        Pixel Format: 'YVYU'
        Name        : 

        Index       : 2
        Type        : Video Capture
        Pixel Format: 'UYVY'
        Name        : 

        Index       : 3
        Type        : Video Capture
        Pixel Format: 'VYUY'
        Name        : 

        Index       : 4
        Type        : Video Capture
        Pixel Format: '422P'
        Name        : 

        Index       : 5
        Type        : Video Capture
        Pixel Format: 'NV16'
        Name        : 

        Index       : 6
        Type        : Video Capture
        Pixel Format: 'NV61'
        Name        : 

        Index       : 7
        Type        : Video Capture
        Pixel Format: 'GREY'
        Name        :

I am using our custom drivers for the bridge IC tc358840 (exposed as a camera) and the Tegra VI from [1]. In this driver there is no /dev/video1, only /dev/video0 for a 4K-capable HDMI input (8 CSI lanes) and /dev/video2 for a 1080p-capable HDMI input (4 CSI lanes). The v4l2-ctl output above is the same for /dev/video2.

[1]

Hi,

May I know whether you wrote the device tree for the tc358840 yourself?

Yes, the device tree is customized to the driver. I have seen that in 24.2 NVIDIA has also added a tc358840.c driver, but our version is a bit different. We are also not consistent with NVIDIA's dt-bindings for the tc358840.

Hi Kamm and AastaLLL

@Kamm Thanks, your guide helped us get a stream from the HDMI2CSI interface. Our processing time is 150-160 ms (~6.4 FPS).

@AastaLLL It would be great if this conversion could somehow be moved to the GPU conversion pipeline; maybe you can suggest some guidelines on how to try it.

Thanks

Hi all,

Good news: we managed to push the conversion to the GPU. I will come back with details soon :)

Hi,

Pasting the device tree for your reference.

/*
 * Copyright (c) 2015-2016, NVIDIA CORPORATION.  All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#define CAM1_RST_L	TEGRA_GPIO(S, 5)

/* camera control gpio definitions */

/ {
	host1x {
		vi {
			num-channels = <1>;
			ports {
				#address-cells = <1>;
				#size-cells = <0>;
				port@0 {
					reg = <0>;
					tc358840_vi_in0: endpoint {
						csi-port = <0>;
						bus-width = <8>;
						remote-endpoint = <&tc358840_out0>;
					};
				};
			};
		};

		i2c@546c0000 {
			tc358840@0f {
				compatible = "toshiba,tc358840";
				reg = <0x0f>;
				status = "okay";

				/* Sensor Model */
				sensor_model ="tc358840";

				reset-gpios = <&gpio TEGRA_GPIO(S, 4) GPIO_ACTIVE_HIGH>;
				interrupt-parent = <&gpio>;
				interrupts = <TEGRA_GPIO(X, 1) GPIO_ACTIVE_HIGH>;

				refclk_hz = <48000000>; /* 40 - 50 MHz */

				ddc5v_delay = <1>;        /* 50 ms */

				/* HDCP */
				/* TODO: Not yet implemented */
				enable_hdcp = <0>;

				/* CSI Output */
				csi_port = <3>;            /* Enable TX0 and TX1 */

				lineinitcnt = <0x00001770>;
				lptxtimecnt = <0x00000007>;
				tclk_headercnt = <0x00320207>;
				tclk_trailcnt = <0x00040005>;
				ths_headercnt = <0x000d0008>;
				twakeup = <0x00004e20>;
				tclk_postcnt = <0x0000000a>;
				ths_trailcnt = <0x000d0009>;
				hstxvregcnt = <0x00000020>;
				btacnt = <0x00050004>;

				/* PLL */
				/* Bps per lane is (refclk_hz / (prd + 1) * (fbd + 1)) / 2^frs */
				pll_prd = <9>;
				pll_fbd = <199>;
				pll_frs = <0>;

				ports {
					#address-cells = <1>;
					#size-cells = <0>;

					port@0 {
						reg = <0>;
						tc358840_out0: endpoint {
							csi-port = <0>;
							bus-width = <8>;
							remote-endpoint = <&tc358840_vi_in0>;
						};
					};
				};
			};
		};
	};

	gpio: gpio@6000d000 {
		camera-control {
			gpio-input = <
				TEGRA_GPIO(X, 1)
				>;
			gpio-output-low = <
				CAM1_RST_L
				TEGRA_GPIO(S, 4)
				>;
		};
	};
	tegra-camera-platform {
		compatible = "nvidia, tegra-camera-platform";

		/**
		* The general guideline for naming badge_info contains 3 parts, and is as follows,
		* The first part is the camera_board_id for the module; if the module is in a FFD
		* platform, then use the platform name for this part.
		* The second part contains the position of the module, ex. “rear” or “front”.
		* The third part contains the last 6 characters of a part number which is found
		* in the module's specsheet from the vender.
		*/
		modules {
		};
	};
};

Hi everyone,

Here I will describe the workaround for the HDMI2CSI (tc358840) module to fetch frames from the source and convert them from YUV to RGBA on the GPU. For this purpose we use the OpenMAX implementation for the TX1. We added a new class, GStreamerCameraOpenMAXFrameSourceImpl, to GStreamerCameraFrameSourceImpl.cpp. This class implements a GStreamer path that fetches the frame in YUV and converts it to RGBA via nvvidconv. It works with VisionWorks 1.4.3 and a camera source at 1920x1080@30fps (Blackmagic 4K Micro Camera) for now.

We modify three files:

  1. ~/VisionWorks/libvisionworks-nvxio-/src/FrameSource/GStreamer/GStreamerCameraFrameSourceImpl.hpp
  2. ~/VisionWorks/libvisionworks-nvxio-/src/FrameSource/GStreamer/GStreamerCameraFrameSourceImpl.cpp
  3. ~/VisionWorks/libvisionworks-nvxio-/src/FrameSource/FrameSource.cpp

The modifications are shown below:

  1. GStreamerCameraFrameSourceImpl.hpp
34a35
> #include "GStreamerEGLStreamSinkFrameSourceImpl.hpp"
  2. GStreamerCameraFrameSourceImpl.cpp - added the new class implementing the nvvidconv path, as well as the needed headers. This part should do the same as the command:
    gst-launch-1.0 v4l2src ! 'video/x-raw, width=1920, height=1080, framerate=30/1, format=UYVY' ! nvvidconv ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1, format=I420' ! nvvideosink
> #include "GStreamerOpenMAXFrameSourceImpl.hpp" //added include for OpenMAX
>
> #include "NVXIO/Utility.hpp" //added
> #include "NVXIO/Application.hpp" //added
> 
> 
> //------Added camera /dev/video0 for HDMI2CSI--------
> GStreamerCameraOpenMAXFrameSourceImpl::GStreamerCameraOpenMAXFrameSourceImpl(vx_context vxcontext) :
>     GStreamerEGLStreamSinkFrameSourceImpl(vxcontext, FrameSource::VIDEO_SOURCE, "GStreamerCameraOpenMAXFrameSource", true)
>     //fileName(filename)
> {
> }
> 
> //-----------Added camera /dev/video0 for HDMI2CSI-----------
> 
> bool GStreamerCameraOpenMAXFrameSourceImpl::InitializeGstPipeLine()
> {
>     GstStateChangeReturn status;
>     end = true;
> 
>     pipeline = GST_PIPELINE(gst_pipeline_new(NULL));
>     if (pipeline == NULL)
>     {
>         NVXIO_PRINT("Cannot create Gstreamer pipeline");
>         return false;
>     }
> 
>      bus = gst_pipeline_get_bus(GST_PIPELINE (pipeline));
> 
>     // create v4l2src
>     GstElement * v4l2src = gst_element_factory_make("v4l2src", NULL);
>     if (v4l2src == NULL)
>     {
>         NVXIO_PRINT("Cannot create v4l2src");
>         FinalizeGstPipeLine();
> 
>         return false;
>     }
> 
>     std::ostringstream cameraDev;
>     cameraDev << "/dev/video0";
>     g_object_set(G_OBJECT(v4l2src), "device", cameraDev.str().c_str(), NULL);
> 
>     gst_bin_add(GST_BIN(pipeline), v4l2src);
> 
> 
>     // create nvvidconv
>     GstElement * nvvidconv = gst_element_factory_make("nvvidconv", NULL);
>     if (nvvidconv == NULL)
>     {
>         NVXIO_PRINT("Cannot create nvvidconv");
>         FinalizeGstPipeLine();
> 
>         return false;
>     }
> 
>     gst_bin_add(GST_BIN(pipeline), nvvidconv);
> 
>     // create nvvideosink element
>     GstElement * nvvideosink = gst_element_factory_make("nvvideosink", NULL);
>     if (nvvideosink == NULL)
>     {
>         NVXIO_PRINT("Cannot create nvvideosink element");
>         FinalizeGstPipeLine();
>         return false;
>     }
> 
>     g_object_set(G_OBJECT(nvvideosink), "display", context.display, NULL);
>     g_object_set(G_OBJECT(nvvideosink), "stream", context.stream, NULL);
>     g_object_set(G_OBJECT(nvvideosink), "fifo", fifoMode, NULL);
> 
>     gst_bin_add(GST_BIN(pipeline), nvvideosink);
> 
> //HDMI2CSI
>     std::ostringstream stream;
>     stream << "video/x-raw, format=UYVY, width=1920, height=1080, framerate=30/1;";
> //TODO: format, resolution and framerate should be configurable
> 
>     std::unique_ptr<GstCaps, GStreamerObjectDeleter> caps_v42lsrc(gst_caps_from_string(stream.str().c_str()));
>     
>     if (!caps_v42lsrc)
>     {
>         NVXIO_PRINT("Failed to create caps");
>         FinalizeGstPipeLine();
> 
>         return false;
>     }
> 
>     // link elements
>     if (!gst_element_link_filtered(v4l2src, nvvidconv, caps_v42lsrc.get()))
>     {
>         NVXIO_PRINT("GStreamer: cannot link v4l2src -> color using caps");
>         FinalizeGstPipeLine();
> 
>         return false;
>     }
> 
> 
>     std::unique_ptr<GstCaps, GStreamerObjectDeleter> caps_nvvidconv(
> 	//HDMI2CSI	
> 	gst_caps_from_string("video/x-raw(memory:NVMM), format=(string){I420}, width=1920, height=1080, framerate=30/1"));
>       //TODO: framerate and resolution should be configurable
> 
>     // link nvvidconv using caps
>     if (!gst_element_link_filtered(nvvidconv, nvvideosink, caps_nvvidconv.get()))
>     {
>         NVXIO_PRINT("GStreamer: cannot link nvvidconv -> nvvideosink");
>         FinalizeGstPipeLine();
> 
>         return false;
>     }
> 
>     // Force pipeline to play video as fast as possible, ignoring system clock
>     gst_pipeline_use_clock(pipeline, NULL);
> 
>     status = gst_element_set_state(GST_ELEMENT(pipeline), GST_STATE_PLAYING);
> 
>     handleGStreamerMessages();
>     if (status == GST_STATE_CHANGE_ASYNC)
>     {
>         // wait for status update
>         status = gst_element_get_state(GST_ELEMENT(pipeline), NULL, NULL, GST_CLOCK_TIME_NONE);
>     }
>     if (status == GST_STATE_CHANGE_FAILURE)
>     {
>         NVXIO_PRINT("GStreamer: unable to start playback");
>         FinalizeGstPipeLine();
> 
>         return false;
>     }
> 
>     // GST_DEBUG_BIN_TO_DOT_FILE(GST_BIN(pipeline), GST_DEBUG_GRAPH_SHOW_ALL, "gst_pipeline");
> 
>     if (!updateConfiguration(nvvidconv, configuration))
>     {
>         FinalizeGstPipeLine();
>         return false;
>     }
> 
>     end = false;
> 
>     return true;
> }
> //--------------end HDMI2CSI-------------------
  3. FrameSource.cpp - in order to enable our workaround implementation we change this file:
306c306
<             return makeUP<GStreamerCameraFrameSourceImpl>(context, static_cast<uint>(idx));
---
> 	      return makeUP<GStreamerCameraOpenMAXFrameSourceImpl>(context); // index is already 0 in the nvvidconv implementation

After this, rebuild VisionWorks.

Greetings and
Merry Christmas and Happy New Year :)

Hi AastaLLL,

Which CSI ports is your TC358840 hardware physically connected to? Since for 4K it needs dual 4-lane, right? How should I configure it if I physically connect to CSI ports A (0, 1) and B (2, 3)? It seems you connect to csi-port <0> with bus-width = <8>; is your situation the same as mine? I only get 1080p so far with the current settings, and I cannot find any instructions for gang mode on the TX1 yet.

Which L4T release are you testing with, and which driver for the TC358840? Thank you.

The interrupt request for this TC358840 driver does not seem to work properly: registering it returns without a problem, but its IRQ callback function is never called. Does anyone know the right way to enable the IRQ for this device?

For the driver, I added a VI port on host1x to the DT file "tegra210-jetson-tx1-p2597-2180-a01-devkit.dts" and then linked it with the tc358840's output port; the driver seems to work OK except for the interrupt.

Thank you.
Kclet

Hi,

Sorry for the late reply.
Could you check whether the interrupt handler is registered as the IRQ callback and the TEGRA_GPIO(X, 1) pin is correctly programmed to accept interrupts?