Use of Stereolabs ZED camera on Jetson Xavier (Solved)

Hello Everyone,
I’m trying to get my Xavier system ready to use the Stereolabs ZED camera, as the device’s wiki mentions that this camera system is compatible with it.
As a first step toward using it in a ROS environment, I need the ZED SDK installed on the system, but it currently has support only for the Jetson TX1/TX2. During installation the ZED SDK gives me an error that the CUDA version is not supported, and I doubt that lower CUDA versions would support the Xavier’s full functionality.
I’m pretty sure that someone has tried using the ZED on the Xavier before; I would really appreciate their help.

I would also appreciate suggestions for any alternatives equivalent to the ZED camera.

Thank you,
Amey Hande

Looking at this, it seems Xavier support is not yet available (but coming soon) for ZED.

Hi Amey, you can capture the ZED’s video feed through V4L2, and as HP mentioned, Stereolabs is working on a ZED SDK build for the Xavier.
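
In case it helps, a minimal sketch of splitting the ZED’s single side-by-side V4L2 frame into left and right views (NumPy only; the dummy frame here stands in for a real capture — with OpenCV you would fill `frame` from `cv2.VideoCapture(1)`, assuming the ZED enumerates as /dev/video1):

```python
import numpy as np

# The ZED delivers both eyes in one side-by-side frame, e.g. 2560x720
# (two 1280x720 views). A dummy frame stands in for a V4L2 capture here.
frame = np.zeros((720, 2560, 3), dtype=np.uint8)

half = frame.shape[1] // 2
left, right = frame[:, :half], frame[:, half:]

print(left.shape, right.shape)  # → (720, 1280, 3) (720, 1280, 3)
```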

In the meantime you could take the stereo video from V4L2 and perform GPU-accelerated stereo disparity mapping in VisionWorks, OpenCV, or with stereoDNN from the Redtail project.

The ZED SDK is not compatible with CUDA 10 on the Xavier. Last I heard, it was supposed to be out around the end of the month.

Has anyone managed to get 2K HD resolution on the Xavier? In my case the SDK returns ‘this platform doesn’t support’ when I select 2K HD.

I prefer higher frame rate, so I never go that high. The other resolutions work fine for me.
I guess if it doesn’t work, it doesn’t work?

How about playing:

./test-launch "v4l2src device=/dev/video1 ! video/x-raw,format=YUY2 ! nvvidconv ! video/x-raw(memory:NVMM),width=2560,height=720,format=I420 ! omxh265enc ! rtph265pay name=pay0 pt=96"

any solutions?

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! decodebin ! videoconvert ! xvimagesink

should be able to play it on the same Xavier.

Worked with ZED and:

./test-launch "v4l2src device=/dev/video1 ! video/x-raw, format=YUY2 ! nvvidconv ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"

or

./test-launch "v4l2src device=/dev/video1 ! video/x-raw, format=YUY2, width=4416, height=1242, framerate=15/1 ! nvvidconv ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"

Thank you.
It turned out that I was using ssh rather than a local connection; run locally on the Xavier, it works fine.

As long as it works on the local Xavier, it should work as well remotely, like:

ssh name@jetson_ip -L 8554:localhost:8554

and then playing from the local port of the other device.

The main problem with a ssh connection is that the DISPLAY is either not set at all, or is set to point across the network at your laptop/workstation screen, not the Xavier. The Xavier needs DISPLAY to point at its own GPU for all the software to work right.

In a shell (or in your .bashrc) you can do this to make it work even when logging in over ssh:

export DISPLAY=:0
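
The same idea works when launching things from a script; a sketch (where `./simple_opencv` is the hypothetical app from this thread, left commented out):

```python
import os
import subprocess

# Launching a GUI/GPU app from an ssh session: give the child process a
# DISPLAY that points at the Jetson's own X server rather than the client.
env = dict(os.environ, DISPLAY=":0")
# subprocess.run(["./simple_opencv"], env=env)  # hypothetical app from this thread

print(env["DISPLAY"])  # → :0
```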

@snarky, thank you for pointing that out!
However, mapping a port of a remote machine to a local machine takes, for example:
having computer A that broadcasts on port 8554 and has IP address 1.2.3.4,
and computer B that has IP address 5.6.7.8;
then at computer B one executes:

ssh name@1.2.3.4 -L 8554:localhost:8554

which allows playing the video stream by accessing the local port of computer B, without using ssh to computer A, e.g. via the loopback address:

vlc rtsp://127.0.0.1:8554

or

vlc rtsp://5.6.7.8:8554

Either command above, executed from computer B (let’s assume it is accessed physically, with keyboard and display), will play at computer B the stream generated at computer A and broadcast on port 8554 of computer A.
That happens because of the network port mapping implemented above via ssh.
What I am trying to do is aggregate, at one device, multiple streams from networked Jetsons.
The design above allows getting a Jetson where ports, e.g. 8553, 8554, 8555, will show streams from multiple networked devices, all at one device (e.g. computer B).
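
In sketch form, that fan-in could be scripted as one local port per remote Jetson (user names and IPs below are made up for illustration):

```python
import subprocess

# One local port per remote Jetson; each forward maps a local port on
# computer B to port 8554 on that Jetson. Names and IPs are hypothetical.
jetsons = {8553: "name@1.2.3.4", 8554: "name@1.2.3.5", 8555: "name@1.2.3.6"}

cmds = [["ssh", "-N", host, "-L", f"{port}:localhost:8554"]
        for port, host in jetsons.items()]
# tunnels = [subprocess.Popen(c) for c in cmds]  # uncomment to open the forwards

for c in cmds:
    print(" ".join(c))  # first line → ssh -N name@1.2.3.4 -L 8553:localhost:8554
```

After the forwards are up, every stream is reachable on computer B’s loopback, e.g. rtsp://127.0.0.1:8553.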

Each Jetson would need its DISPLAY pointed at that local Jetson (even if it is a virtual server). After that, streams could converge on a single node…and if that node needs to manipulate anything, then it would need its own DISPLAY. @snarky is correct, but the idea needs to extend to several independent systems…and then again to the system doing the aggregation.

Exactly.
However, by default I assume that a Jetson with a connected keyboard and display uses DISPLAY=:0, doesn’t it? And if it works without exporting it, then it probably does, in my opinion. But when it fails, I would use the command to resolve issues. As I understand it, the command attaches a certain display (in a multi-display configuration it could, for example, be a specific display chosen to run a specific graphical application from the command line). I mean, explicit assignment of a display to an application can be done with that command when required. But it seems to work by default somehow.
Presumably a default ssh -X connection to a remote device would return graphical output to the client computer, while after executing export DISPLAY=:0 it would display the application at the remote server to which the connection is established. But if the export is executed at the remote server after establishing the connection, then I wouldn’t know how to explicitly say to display the graphics at the local client instead of showing it at the distant server.
But if a Jetson just runs a GStreamer server and does not even have a display, while another Jetson has a display and runs a GStreamer receiver, then at the former device there might be no need to specify a display, in my opinion: it wouldn’t display anything, but would just stream to a network port, which the latter device, as it seems to me, will read.

The Xavier (R31.x) uses “:1” for the HDMI display; other Jetsons use “:0”. The whole “:number” scheme is more or less just convention to normally start with “:0”. Each number represents a running server instance, but there is no issue with missing numbers (numbers have no requirement to be in order and have no specific meaning other than to say an X server has that context). I’ve seen virtual displays often using “:10” and the base actual system display using “:0”.

As an example, if you log in via GUI to an Xavier (nvidia account), then from a remote system run ssh to the Xavier (without any kind of X forwarding), followed by “export DISPLAY=:1” and “xterm”, then the xterm will pop up on the desktop.

In some cases multi-monitor can be set up with each monitor being independent sessions, or with two monitors sharing a desktop. An example of one way to share two monitors is for server instance “:0” via “:0.0” and “:0.1” representing each monitor. If the video driver is treating the two monitors as a single monitor, then it will all be “:0” (but a larger buffer).

In the past remote sharing was done over TCP, and the host was listed to the left of the “:0”. For many years now TCP has been disabled by default and a mechanism such as “ssh -X” for X forwarding has taken its place. Any time you do that all of the GPU work is done on the system which is doing the display. This is why I suggest each Xavier runs a virtual server (to do work on its GPU…which is what @snarky was saying), and then in some way broadcast results other than via X event forwarding.

Thank you for the expanded answer. I have only USB-C and VGA displays. However, it turned out that USB-C is probably on :0 by default.

Case I:

When I ssh from a client (any computer device, including a Xavier, Jetson TX2, or host PC) to Xavier A and execute an OpenCV video app (at Xavier A, over the terminal connection from the client):

./simple_opencv

it plays at the device from which I executed ssh (with the -X parameter).
In my opinion, it uses the GPU of Xavier A in that case rather than the GPU of the client computer.

Case II:

But when I execute (after logging in via ssh to Xavier A):

export DISPLAY=:0

and then execute

./simple_opencv

it appears at the USB-C display of Xavier A.
And that definitely uses Xavier A’s GPU.

It seems to me that in both cases above the Xavier’s GPU is used to process the video, isn’t it? And in neither of them is the client’s GPU used, except for the fact that the video is displayed at the associated monitor.

Could you expand on the suggestion of running a virtual server, considering the context above, please?
Perhaps it makes sense to speak of two GPUs being utilized per execution of ./simple_opencv at Xavier A in case it is displayed at the client’s display, or of only the Xavier’s GPU being utilized in case it is displayed at the Xavier-connected display.
But I can see no way to get only the client’s GPU utilized while executing video at the remote Xavier A device, or, vice versa, only the Xavier’s GPU utilized while playing the video at the client’s associated monitor.

Case III:

Moreover, I recently got some experience with x2x, which allows attaching the displays of remote devices to any computer in the network. I never figured out the numbering idea completely, but it works somehow when I execute:

ssh -XC user@laptop  x2x -east -to :0.0

That allows extending Xavier A’s mouse and display to the client’s device, or vice versa, as if all the displays were connected in extended mode to the same computer.

Update:
Upon checking, it turned out that I need to specify -X; otherwise the video will not be passed, except with a custom implementation of delivery.