CSI camera with a high-quality image for Xavier NX

Please help me choose a CSI camera with a high-quality image for use in the “Gaze Demo for Jetson” container (NVIDIA NGC).
Device: Jetson Xavier NX
Firmware (OS) Version: JetPack 4.4 Developer Preview [L4T 32.4.2]

I tested a camera based on the “Sony IMX219” controller.

Unfortunately, this camera showed poor image quality in low light, including dead pixels.

I also tried to launch a camera based on the “OmniVision OV5647” controller, but that controller is no longer supported:

Original forum thread: does Jetson Nano support CSI camera with sensor ov5647?
This camera model does not work on the Jetson Xavier NX (the red LED on the camera board lights up, but the system does not detect the camera).

p.s. Thanks in advance, Eugene

Hey Violator86,
We’re using a Basler daA4200-30mci camera with the Xavier Dev Kit.

Resolution (H×V): 4208 × 3120 px (13 MP)

Not sure if it would work with the Gaze Demo. You have to install the drivers for it in order to use it.

By the way, do you know the highest resolution and frame rate the Xavier NX can handle over the CSI input?


You can consult with a camera partner.

Thanks for the answers.
Perhaps I expressed it inaccurately: by “high quality” I meant a camera with a less noisy sensor than the Raspberry Pi Camera v2.1; 1080p video resolution is quite enough for me. Are there cheaper cameras?

p.s. As a last resort, could you please advise a webcam that is compatible with the OS on the Jetson without needing to patch and rebuild the kernel? As far as I understand, not every webcam works on Debian / Ubuntu / Kubuntu (I don’t know about the situation on other Linux families).

For a USB camera you don’t need any patch; it should work plug and play.

Some webcams that identify as UVC devices may not always work under Linux.
I tried to use the Canyon CNS-CWC6 webcam ( https://canyon.eu/product/cns-cwc6n ), but unfortunately the camera was not detected by the system on the Xavier NX; the same happened on a netbook (Debian) and a PC (Kubuntu).
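For anyone debugging a similar case, a few standard Linux checks (my own sketch, not from this thread) can tell whether the kernel saw the webcam at all:

```shell
#!/bin/sh
# Standard checks for whether Linux recognized a USB webcam as a UVC device:
#   lsusb                      # is the camera visible on the USB bus at all?
#   dmesg | grep -i uvc        # did the uvcvideo driver claim it?
#   v4l2-ctl --list-devices    # which V4L2 devices exist? (v4l-utils package)

count_video_nodes() {
    # Count /dev/video* capture nodes; prints 0 if none exist
    ls /dev/video* 2>/dev/null | wc -l | tr -d ' '
}
echo "V4L2 video nodes: $(count_video_nodes)"
```

If the device shows up in lsusb but no /dev/video* node appears, the problem is the driver rather than the application.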

Is this camera ( Sony Starvis IMX415 4K MIPI Camera for Jetson Xavier NX / Jetson Nano ) compatible with JetPack 4.4 Developer Preview [L4T 32.4.2]?
Unfortunately, on newer versions of JetPack I was unable to start the container.

Since JetPack 4.6, IMX477 works out of the box after configuring the board with sudo /opt/nvidia/jetson-io/jetson-io.py. You could try that. No kernel rebuild required and jetson-io handles the device tree changes for you.
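If it helps, a minimal smoke test after the jetson-io configuration might look like the following; the pipeline string is only a sketch (sensor-id, resolution, and framerate are my assumptions, not values from this thread):

```shell
#!/bin/sh
# After configuring the CSI connector with jetson-io and rebooting, a quick
# IMX477 check is a preview pipeline. Adjust width/height/framerate to the
# modes your module actually reports.
PIPELINE="nvarguscamerasrc sensor-id=0 ! \
video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1,format=NV12 ! \
nvvidconv ! nveglglessink"
# On the Jetson itself you would run:
#   gst-launch-1.0 $PIPELINE -e
echo "$PIPELINE"
```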

As mentioned by @ShaneCCC pretty much any USB camera will also work.

Thank you for your answer.
I mentioned above that, unfortunately, I was unable to run the “Gaze Demo for Jetson” container on JetPack versions newer than the one indicated in the requirements on the container page (JetPack 4.4 Developer Preview [L4T 32.4.2]).
I just need to be able to use the “Gaze Demo for Jetson” container, and I would not want to waste money on a camera that cannot be used with that JetPack version.

I tried to run the container on JetPack 4.6 today, but unfortunately it failed.

professorx@professorx-nx:~/GazeDetect$ sudo docker run --runtime nvidia -it --rm --network host -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/jetson-gaze:r32.4.2 python3 run_gaze_sequential.py /videos/gaze_video.mp4 --loop --codec=h264
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --compat32 --graphics --utility --video --display --pid=9144 /var/lib/docker/overlay2/ca30b3ec31dab1d8f28b4b7808a660ae44ba63794dca51a185438ec581132388/merged]
nvidia-container-cli: mount error: file creation failed: /var/lib/docker/overlay2/ca30b3ec31dab1d8f28b4b7808a660ae44ba63794dca51a185438ec581132388/merged/usr/lib/aarch64-linux-gnu/tegra/libnvidia-fatbinaryloader.so.440.18: file exists: unknown.

professorx@professorx-nx:~/GazeDetect$ sudo find / -iname libnvidia-fatbinaryloader.so*
find: ‘/run/user/1000/gvfs’: Permission denied
find: ‘/run/user/120/gvfs’: Permission denied

That container is only for JetPack 4.4 Developer Preview (L4T R32.4.2). On JetPack 4.6 you need to run containers built for L4T R32.6.1.
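As a side note (my own sketch, not from the original replies), you can derive the tag series a device needs from the first line of /etc/nv_tegra_release:

```shell
#!/bin/sh
# Jetson containers are tagged with the L4T release they are built for.
# This maps the first line of /etc/nv_tegra_release to that tag series.
l4t_tag() {
    # e.g. "# R32 (release), REVISION: 6.1, GCID: ..." -> "r32.6.1"
    echo "$1" | sed -n 's/^# R\([0-9]*\) (release), REVISION: \([0-9.]*\),.*/r\1.\2/p'
}

# On the Jetson itself you would run:
#   l4t_tag "$(head -n1 /etc/nv_tegra_release)"
l4t_tag "# R32 (release), REVISION: 6.1, GCID: 27863751, BOARD: t186ref, EABI: aarch64"
```

A jetson-gaze:r32.4.2 image therefore only matches a device where this prints r32.4.2.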

For gaze detection, I recommend that you check out the newer GazeNet model on NGC: https://ngc.nvidia.com/catalog/models/nvidia:tlt_gazenet
It can be run through DeepStream or Triton Inference Server.


Is it still not possible to launch a camera based on the “OmniVision OV5647” controller on JetPack 4.6? It’s just a pity that I made a mistake with the choice of camera, and now it goes unused. ☹️

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.

Did you verify this sensor driver from the host first, instead of from the container?

Is this container not supported on JetPack 4.6?

When I tried to run the quick-start scripts ( TAO Toolkit — TAO Toolkit 3.0 documentation /tao_cv_inf_pipeline/quick_start_scripts.html#tao-cv-quick-start-scripts ) to launch the examples, I got an error that the libnvinfer.so.7 library was not found when launching tao_cv_init.sh (the error occurs at the tao-converter stage).
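For context, a missing libnvinfer.so.7 usually indicates a TensorRT major-version mismatch: JetPack 4.6 ships TensorRT 8 (libnvinfer.so.8), while this tao-converter build links against TensorRT 7. A small sketch to check what is installed (the helper function is my own):

```shell
#!/bin/sh
# Check which TensorRT (libnvinfer) major versions the loader knows about.
# On the Jetson itself you would run:
#   ldconfig -p | grep -o 'libnvinfer\.so\.[0-9]*' | sort -u
nvinfer_major() {
    # Extract the major version from a libnvinfer soname,
    # e.g. "libnvinfer.so.8.0.1" -> "8"
    echo "$1" | sed -n 's/^libnvinfer\.so\.\([0-9][0-9]*\).*/\1/p'
}
nvinfer_major "libnvinfer.so.8.0.1"
```

If only major version 8 turns up, a binary linked against libnvinfer.so.7 cannot load, regardless of where you search with find.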

I first wanted to run TAO with a camera based on the IMX219 controller to see how it works.
Then I will try cameras with the OV5647 and IMX477 controllers.

Could you share the complete error message here so we can check?


Unfortunately, I cannot publish the script output before this evening, since I do not currently have access to the Jetson.

The first block contains a fragment of the output of the tao_cv_init.sh script; the full output is in the attached text file.

[INFO] Finished pulling containers and models
[INFO] Beginning TensorRT plan compilation with tao-converter...
[INFO] This may take a few minutes
[INFO] Using this location for models: /home/professorx/nvme/tao_cv_inference_pipeline_models
[INFO] Compiling Body Pose INT8 with key 'nvidia_tlt'...
[INFO] Compiling with Body Pose Width = 384...
[INFO] Compiling with Body Pose Height = 288...
[INFO] Looking for '/home/professorx/nvme/tao_cv_inference_pipeline_models/bodyposenet_vdeployable_v1.0/int8_calibration_288_384.txt'...
[INFO] Found it!
tao-converter: error while loading shared libraries: libnvinfer.so.7: cannot open shared object file: No such file or directory
(gazeenv) professorx@x-mansion:~/gaze/scripts$
(gazeenv) professorx@x-mansion:~/gaze/scripts$ sudo find / -iname libnvinfer.so*
[sudo] password for professorx: 
find: ‘/run/user/1000/gvfs’: Permission denied
find: ‘/run/user/120/gvfs’: Permission denied
(gazeenv) professorx@x-mansion:~/gaze/scripts$
professorx@x-mansion:~/jetsonUtilities-master$ sudo python jetsonInfo.py
[sudo] password for professorx: 
NVIDIA Jetson Xavier NX (Developer Kit Version)
 L4T 32.6.1 [ JetPack UNKNOWN ]
   Ubuntu 18.04.5 LTS
   Kernel Version: 4.9.253-tegra
 CUDA 10.2.300
   CUDA Architecture: 7.2
 OpenCV version: 4.1.1
   OpenCV Cuda: NO
 Vision Works:
 VPI: ii libnvvpi1 1.1.12 arm64 NVIDIA Vision Programming Interface library

tao_cv_init.sh full output.txt (78.2 KB)

I was able to run the TAO Pipeline l4t container on JetPack 4.5.
But unfortunately I had a problem using the camera (Raspberry Pi Camera v2.1, IMX219) in the gaze and emotion detection examples.

Available Sensor modes :
Resolution: 3264 x 2464 ; Framerate = 21.000000; Analog Gain Range Min 1.000000, Max 10.625000, Exposure Range Min 13000, Max 683709000

Resolution: 3264 x 1848 ; Framerate = 28.000001; Analog Gain Range Min 1.000000, Max 10.625000, Exposure Range Min 13000, Max 683709000

Resolution: 1920 x 1080 ; Framerate = 29.999999; Analog Gain Range Min 1.000000, Max 10.625000, Exposure Range Min 13000, Max 683709000

Resolution: 1640 x 1232 ; Framerate = 29.999999; Analog Gain Range Min 1.000000, Max 10.625000, Exposure Range Min 13000, Max 683709000

Resolution: 1280 x 720 ; Framerate = 59.999999; Analog Gain Range Min 1.000000, Max 10.625000, Exposure Range Min 13000, Max 683709000

(NvCameraUtils) Error InvalidState: Mutex already initialized (in Mutex.cpp, function initialize(), line 41)
(Argus) Error InvalidState: (propagating from src/rpc/socket/client/ClientSocketManager.cpp, function open(), line 54)
(Argus) Error InvalidState: (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function openSocketConnection(), line 258)
(Argus) Error InvalidState: Cannot create camera provider (in src/rpc/socket/client/SocketClientDispatch.cpp, function createCameraProvider(), line 102)
ArgusV4L2_Open failed: Invalid argument
Unsupported buffer type
VIDEOIO ERROR: V4L: device /dev/video0: Unable to query number of channels
[Jarvis] [E] Could not open video source /dev/video0
demo_emotion_output.txt (12.9 KB)
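One possible cause worth checking (an assumption on my part, not confirmed in this thread): the Argus “Cannot create camera provider” errors often mean the container cannot reach the host’s nvargus-daemon socket. A sketch of a docker invocation that mounts the socket and passes the video node through; `<image>` is a placeholder for the actual container image:

```shell
#!/bin/sh
# Sketch: run a camera-using container with the host's Argus socket mounted
# and the capture node passed through. <image> is a placeholder, and
# /tmp/argus_socket is the default socket path on L4T.
DOCKER_CMD="sudo docker run --runtime nvidia -it --rm --network host \
-e DISPLAY=\$DISPLAY \
-v /tmp/.X11-unix/:/tmp/.X11-unix \
-v /tmp/argus_socket:/tmp/argus_socket \
--device /dev/video0 \
<image>"
echo "$DOCKER_CMD"
```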


# Path to device handle; ensure config.sh can see this handle

# Boolean indicating whether handle is pre-recorded

# Desired resolution (width, height, channels)

# Show visualization

# Leverage API to send single decoded frames to Pipeline

On the host and in the container, the camera works great:

gst-launch-1.0 nvarguscamerasrc sensor_mode=0 ! 'video/x-raw(memory:NVMM),width=3264, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e

Perhaps I am doing something wrong?


The container is for JetPack 4.5.x.
You can find this information in the tag name.

For example,
tao-cv-inference-pipeline-l4t:r32.5.0-v0.3-ga-server-utilities indicates it is compatible with r32.5.0, which is for JetPack 4.5.x.

The container for JetPack 4.6 has not been released yet.
Please wait for our announcement.



Have you enabled access to the camera device for the container?

For example:

sudo docker run ... --device /dev/video0 ...