Hello, I’m new to NVIDIA development and to Linux, but we have a developer with many years of experience on our team. We are going to buy and use a Xavier dev kit with a PCIe x8 camera, but before that I want to see how the NVIDIA dev IDE works. My experience is with OpenCV under Windows, in C++ via MS Visual Studio.
What I want to know, without having the Xavier dev kit yet, is:
Is there a sample that I can run in NVIDIA Nsight Eclipse?
What’s the difference between Nsight Eclipse and Nsight Systems?
How do I develop with OpenCV (ideally using some image samples) in an NVIDIA IDE under Linux (Nsight Eclipse?)?
When our Xavier dev kit arrives:
What would be the best approach (IDE, language, and visual API) to display and record video and image sequences from a PCIe x8 camera connected to the Xavier dev kit onto the M.2 drive we will use? Ideally it would compress the image sequences or video in real time and store them as files, say as PNGs for image sequences, and with a codec like H.264 or H.265 for video. Resolutions up to 10K.
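As a sanity check on that goal, the raw data rates involved can be estimated with simple arithmetic. The sketch below assumes YUV422 (2 bytes per pixel, the format mentioned later in this thread), 30 fps, and 10240x5760 as the "10K" dimensions; that last figure is only a guess, so substitute your camera's actual output size.

```shell
# Back-of-envelope raw data rates at 30 fps, 2 bytes/pixel (YUV422).
# The 10K dimensions (10240x5760) are an assumption for illustration.
rate_mbs() {
  # width height -> uncompressed MB/s at 30 fps, 2 bytes/pixel
  echo $(( $1 * $2 * 2 * 30 / 1000000 ))
}
echo "4K  (3840x2160):  $(rate_mbs 3840 2160) MB/s"
echo "10K (10240x5760): $(rate_mbs 10240 5760) MB/s"
```

At 10K the uncompressed stream runs to several gigabytes per second, which is why the responses below keep coming back to the resolution and framerate question.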
Hi,
On Jetson platforms, we support two frameworks: gstreamer and tegra_multimedia_api. We are not sure what interface the PCIe x8 camera supports. If it supports the v4l2 interface, you may refer to 12_camera_v4l2_cuda, which demonstrates how to do camera capture through the v4l2 software stack.
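For the gstreamer path, recording from a camera that does expose a v4l2 node might look like the sketch below. The device path, the UYVY caps, and the element names (nvvidconv, nvv4l2h264enc are the JetPack 4.x names) are all assumptions; adjust them to your camera's actual formats.

```shell
#!/bin/sh
# Sketch: record H.264 from a v4l2 camera using the Jetson GStreamer
# plugins. /dev/video0, the UYVY caps, and the JetPack 4.x element
# names (nvvidconv, nvv4l2h264enc) are assumptions for illustration.
record_h264() {
  gst-launch-1.0 -e v4l2src device=/dev/video0 \
    ! 'video/x-raw, format=UYVY, width=3840, height=2160, framerate=30/1' \
    ! nvvidconv \
    ! 'video/x-raw(memory:NVMM), format=NV12' \
    ! nvv4l2h264enc bitrate=20000000 \
    ! h264parse ! qtmux ! filesink location=capture.mp4
}

# Only attempt the capture when the Jetson-specific encoder exists.
if command -v gst-inspect-1.0 >/dev/null 2>&1 \
   && gst-inspect-1.0 nvv4l2h264enc >/dev/null 2>&1; then
  record_h264
else
  echo "Jetson GStreamer plugins not found; run this on the Jetson target"
fi
```

Swapping nvv4l2h264enc for nvv4l2h265enc would give H.265 output instead, subject to the same resolution limits discussed below.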
Thanks, I downloaded the multimedia samples.
Which IDE should I best use to open, compile, and run the samples?
Sorry, I come from Visual Studio and project files.
Hi,
Currently we don’t support cross-compiling. We have sdkmanager to install all packages to Jetson platforms. You would need to have the platform first.
Please first check whether the PCIe x8 camera supports v4l2. If it can do frame capture through v4l2, it should be good to use tegra_multimedia_api. Also please check the resolution and framerate; a general case is 4Kp30 (3840x2160) YUV422 (YUYV, UYVY, …).
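Whether a camera speaks v4l2 can be checked from the shell once it is connected. The sketch below assumes the v4l-utils package is installed and that the camera creates a /dev/video0 node; a camera whose vendor SDK bypasses v4l2 may create no such node at all.

```shell
# Sketch: check whether a camera exposes a v4l2 node and list its
# pixel formats, frame sizes, and framerates. Assumes the v4l-utils
# package; /dev/video0 is an assumption about the device path.
check_v4l2() {
  if [ -e /dev/video0 ] && command -v v4l2-ctl >/dev/null 2>&1; then
    v4l2-ctl -d /dev/video0 --all               # driver and capabilities
    v4l2-ctl -d /dev/video0 --list-formats-ext  # formats, sizes, fps
  else
    echo "no /dev/video0 or v4l2-ctl: no v4l2 driver visible"
  fi
}
check_v4l2
```

If --list-formats-ext shows the resolution and format you need, the tegra_multimedia_api path suggested above should apply.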
Sorry, I’m a little new to Linux and a bit new to C. Before buying the Xavier dev kit, I want to compile and run some of the multimedia SDK samples on my Linux computer, using Ubuntu as required by Xavier.
Take the JPEG encode/decode samples in the tegra multimedia API: there are a few files and a Makefile. How can I compile these tegra_multimedia_api samples under Linux?
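On the target, building an individual sample is plain make rather than a Visual Studio-style project file. The sketch below assumes the JetPack 4.x install path /usr/src/tegra_multimedia_api and the 05_jpeg_encode sample directory name; adjust both to your actual layout.

```shell
# Sketch: build one tegra_multimedia_api sample on the Jetson itself.
# The install path and the 05_jpeg_encode directory name are
# assumptions from JetPack 4.x layouts; each sample ships a plain
# Makefile, so there is no IDE project file to open.
build_sample() {
  dir=/usr/src/tegra_multimedia_api/samples/05_jpeg_encode
  if [ -d "$dir" ]; then
    make -C "$dir"   # produces the sample binary in the same directory
  else
    echo "tegra_multimedia_api not found; build on the Jetson target"
  fi
}
build_sample
```

The same pattern applies to 06_jpeg_decode and the other sample directories; this matches the reply below that the samples are compiled on the target rather than cross-compiled.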
Is NVIDIA Nsight Eclipse a good IDE tool for this?
So I understand that it is only possible to compile and run the samples on the Jetson devices, right?
OK, so if I buy a Jetson Xavier dev kit, will you be able to help me compile and run the basic tegra_multimedia_api samples on the Xavier dev kit board, so I can see how the samples work and which results they produce when connecting an HDMI TV to the Xavier dev kit?
Hi,
We have sdkmanager, which installs all packages and compiles the tegra_multimedia_api samples on the target. Please check:
Before buying Xavier, please check this prerequisite first: check whether the PCIe x8 camera supports v4l2. If it can do frame capture through v4l2, it should be good to use tegra_multimedia_api. Also please check the resolution and framerate; a general case is 4Kp30 (3840x2160) YUV422 (YUYV, UYVY, …).
If the PCIe x8 camera does not support v4l2, please share what interface is supported for camera frame capture. Using tegra_multimedia_api is the optimal solution for the v4l2 framework on Xavier. For any other software framework, we need more information to evaluate whether it can get optimal performance before you buy Xavier.
"
We provide RAW data directly through out API. We do not support v4l2.
As for the software, we offer our SDK which includes the drivers, API (C/C++) and some examples for Windows, Linux, MacOS:
For some basic evaluation and rapid-prototyping we offer our new cross-platform viewer application “XIMEA CamTool”, which can be extended by custom plugins:
Hi,
By checking the webpages, it looks like there is no existing implementation to run the camera directly on Xavier. It will take a certain amount of customization effort. Could you check with the vendor whether there is an API to get the captured frames into a CUDA buffer? Hardware blocks on Jetson platforms use DMA buffers (the software implementation is called NvBuffer). If you are able to get the frame into a CUDA buffer, you can implement CUDA code to put it into an NvBuffer. This is a possible solution for using the camera.
Besides, we don’t see what the resolution is. The suggestion is not to exceed 4K (3840x2160).
Hi,
The hardware encoder on Xavier supports up to 4K; 8K is not supported. Xavier may not be a suitable platform for this use case, and we suggest you look for other platforms.