TX2 Camera not working?

Hi all,

I seem to be having issues just getting any output from the camera - see below.

ubuntu@tegra-ubuntu:~$ nvgstcapture-1.0
Encoder null, cannot set bitrate!
Encoder Profile = Baseline
Supported resolutions in case of CSI Camera
(2) : 640x480
(3) : 1280x720
(4) : 1920x1080
(5) : 2104x1560
(6) : 2592x1944
(7) : 2616x1472
(8) : 3840x2160
(9) : 3896x2192
(10): 4208x3120
(11): 5632x3168
(12): 5632x4224

Runtime CSI Camera Commands:

Help : 'h'
Quit : 'q'
Set Capture Mode:
(1): image
(2): video
Get Capture Mode:
Set Sensor Id (0 to 10):
sid: e.g., sid:2
Get Sensor Id:
Set sensor orientation:
(0): none
(1): Rotate counter-clockwise 90 degrees
(2): Rotate 180 degrees
(3): Rotate clockwise 90 degrees
Get sensor orientation:
Set Whitebalance Mode:
(0): off
(1): auto
(2): incandescent
(3): fluorescent
(4): warm-fluorescent
(5): daylight
(6): cloudy-daylight
(7): twilight
(8): shade
(9): manual
Get Whitebalance Mode:
Set Scene-Mode:
(0): face-priority
(1): action
(2): portrait
(3): landscape
(4): night
(5): night-portrait
(6): theatre
(7): beach
(8): snow
(9): sunset
(10): steady-photo
(11): fireworks
(12): sports
(13): party
(14): candle-light
(15): barcode
Get Scene-Mode:
Set Color Effect Mode:
(1): off
(2): mono
(3): negative
(4): solarize
(5): sepia
(6): posterize
(7): aqua
Get Color Effect Mode:
Set Auto-Exposure Mode:
(1): off
(2): on
(3): OnAutoFlash
(4): OnAlwaysFlash
(5): OnFlashRedEye
Get Auto-Exposure Mode:
Set Flash Mode:
(0): off
(1): on
(2): torch
(3): auto
Get Flash Mode:
Set Flicker Detection and Avoidance Mode:
(0): off
(1): 50Hz
(2): 60Hz
(3): auto
Get Flicker Detection and Avoidance Mode:
Set Contrast (0 to 1):
ct: e.g., ct:0.75
Get Contrast:
Set Saturation (0 to 2):
st: e.g., st:1.25
Get Saturation:
Set Exposure Time in seconds:
ext: e.g., ext:0.033
Get Exposure Time:
Set Auto Exposure Lock(0/1):
ael: e.g., ael:1
Get Auto Exposure Lock:
Set Edge Enhancement (0 to 1):
ee: e.g., ee:0.75
Get Edge Enhancement:
Set ROI for AE:
It needs five values, ROI coordinates(top,left,bottom,right)
and weight in that order
aer: e.g., aer:20 20 400 400 1.2
Get ROI for AE:
Set ROI for AWB:
It needs five values, ROI coordinates(top,left,bottom,right)
and weight in that order
wbr: e.g., wbr:20 20 400 400 1.2
Get ROI for AWB:
Set FPS range:
It needs two values, FPS Range (low, high) in that order
fpsr: e.g., fpsr:15 30
Get FPS range:
Set WB Gains:
It needs four values (R, GR, GB, B) in that order
wbg: e.g., wbg:1.2 2.2 0.8 1.6
Get WB Gains:
Set TNR Strength (0 to 1):
ts: e.g., ts:0.75
Get TNR Strength:
Set TNR Mode:
(0): NoiseReduction_Off
(1): NoiseReduction_Fast
(2): NoiseReduction_HighQuality
Get TNR Mode:
Capture: enter 'j' OR
followed by a timer (e.g., jx5000, capture after 5 seconds) OR
followed by multishot count (e.g., j:6, capture 6 images)
timer/multishot values are optional, capture defaults to single shot with timer=0s
Start Recording : enter '1'
Stop Recording : enter '0'
Video snapshot : enter '2' (While recording video)
Set Preview Resolution:
pcr: e.g., pcr:3
(2) : 640x480
(3) : 1280x720
(4) : 1920x1080
(5) : 2104x1560
(6) : 2592x1944
(7) : 2616x1472
(8) : 3840x2160
(9) : 3896x2192
(10): 4208x3120
(11): 5632x3168
(12): 5632x4224
Note - For preview resolutions 4208x3120 and more use option --svs=nveglglessink
Get Preview Resolution:
Set Image Resolution:
icr: e.g., icr:3
(2) : 640x480
(3) : 1280x720
(4) : 1920x1080
(5) : 2104x1560
(6) : 2592x1944
(7) : 2616x1472
(8) : 3840x2160
(9) : 3896x2192
(10): 4208x3120
(11): 5632x3168
(12): 5632x4224
Get Image Capture Resolution:
Set Video Resolution:
vcr: e.g., vcr:3
(2) : 640x480
(3) : 1280x720
(4) : 1920x1080
(5) : 2104x1560
(6) : 2592x1944
(7) : 2616x1472
(8) : 3840x2160
(9) : 3896x2192
Get Video Capture Resolution:

Runtime encoder configuration options:

Set Encoding Bit-rate(in bytes):
br: e.g., br:4000000
Get Encoding Bit-rate(in bytes):
Set Encoding Profile(only for H.264):
ep: e.g., ep:1
(0): Baseline
(1): Main
(2): High
Get Encoding Profile(only for H.264):
Force IDR Frame on video Encoder(only for H.264):
Enter 'f'

bitrate = 4000000
Encoder Profile = Baseline
Inside NvxLiteH264DecoderLowLatencyInit
NvxLiteH264DecoderLowLatencyInit set DPB and Mjstreaming
Inside NvxLiteH265DecoderLowLatencyInit
NvxLiteH265DecoderLowLatencyInit set DPB and Mjstreaming
Socket read error. Camera Daemon stopped functioning…
gst_nvcamera_open() failed ret=0

** (nvgstcapture-1.0:2488): CRITICAL **: <create_capture_pipeline:4497> can't set camera to playing

** (nvgstcapture-1.0:2488): CRITICAL **: main:5290 Capture Pipeline creation failed
** Message: main:5297 Capture completed
** Message: main:5347 Camera application will now exit

Could you run v4l2-ctl --list-devices to check whether the sensor driver has probed?

Ok so not much to report here…

ubuntu@tegra-ubuntu:~$ v4l2-ctrl --list-devices
-bash: v4l2-ctrl: command not found

I did try running sudo pip install v4l2 and sudo apt-get install v4l-utils

Are there any other packages I could be missing? I had assumed JetPack would install everything necessary to access the camera, so if there are any post-installation steps I should have taken to enable it, please let me know.

apt-get install v4l-utils should install it. What was the message when you installed it?
Uncomment some sources in /etc/apt/sources.list, then run:
sudo apt-get update
sudo apt-get install v4l-utils

If you still can't get it, you can download the source from here to build it.


I do have it - it’s just not working, see below @shaneCCC

ubuntu@tegra-ubuntu:~$ sudo apt-get install v4l-utils
Reading package lists... Done
Building dependency tree
Reading state information... Done
v4l-utils is already the newest version (1.10.0-1).
The following packages were automatically installed and are no longer required:
  apt-clone archdetect-deb dmeventd dmraid dpkg-repack gir1.2-timezonemap-1.0 gir1.2-xkl-1.0
  gstreamer1.0-plugins-bad-videoparsers kpartx kpartx-boot libass5 libavresample-ffmpeg2 libbs2b0
  libdebian-installer4 libdevmapper-event1.02.1 libdmraid1.0.0.rc16 libflite1
  libgstreamer-plugins-bad1.0-0 libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev
  liblockfile-bin liblockfile1 liblvm2app2.2 liblvm2cmd2.02 libparted-fs-resize0 libreadline5
  libsodium18 libzmq5 lockfile-progs lvm2 os-prober pmount python3-icu python3-pam rdate
  ubiquity-casper ubiquity-ubuntu-artwork
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 373 not upgraded.
ubuntu@tegra-ubuntu:~$ v4l2-ctrl --list-devices
-bash: v4l2-ctrl: command not found

Also, I have run apt-get update both before and after installing v4l-utils.

Update: I also tried the below…

Now I'm wondering if there is a serious problem here?

ubuntu@tegra-ubuntu:~$ dmesg | grep -i ov5693
[ 3.121440] [OV5693]: probing v4l2 sensor.
ubuntu@tegra-ubuntu:~$ v4l2-ctl --list-devices
VIDIOC_QUERYCAP: failed: Inappropriate ioctl for device
VIDIOC_QUERYCAP: failed: Inappropriate ioctl for device
vi-output, ov5693 2-0036 (platform:15700000.vi:2):

OK, so it's working now with flags…

nvgstcapture-1.0 -m 2 # set the mode to video

any solution? any demo for the onboard camera?

The camera works with those flags, however the script I was using locally on my laptop is not working on the Jetson (despite having installed OpenCV - note: not opencv4tegra!).

I'm now wondering if this can be done. I could use a USB camera if that makes it more viable, but I would prefer to keep the USB port free and ideally use the onboard camera.

My goal is to run the camera feed through a pre-trained classifier in real time. If anyone can comment on using OpenCV with the Jetson in Python, that would be very helpful.
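For what it's worth, the shape of that real-time loop can be sketched in Python. Note that grab_frame() and classify() below are hypothetical stand-ins (my own names) for cap.read() and a pre-trained model's predict call, simulated so the snippet runs without a camera:

```python
def grab_frame():
    # Stand-in for `ret, frame = cap.read()`; returns a fake 2x2 "frame".
    return True, [[0, 255], [255, 0]]

def classify(frame):
    # Stand-in for a real classifier; here, just a trivial brightness test.
    flat = [px for row in frame for px in row]
    return "bright" if sum(flat) / len(flat) > 127 else "dark"

for _ in range(3):           # in the real app: while True
    ret, frame = grab_frame()
    if not ret:              # stop when the capture fails
        break
    print(classify(frame))   # feed each frame to the classifier
```

On the Jetson, the stand-ins would be replaced by a cv2.VideoCapture read and the classifier's own inference call; the loop structure stays the same.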

I found an answer on Stack Overflow that I'm going to try, in case anyone else is struggling!

It will be about a week before I can do this, for one reason or another. Would be keen to hear back from the mods at NVIDIA :)

Hi, yes.
The reason nvgstcapture doesn't work for you may be the display resolution. Could you tell me your display resolution? It works without "-m 2" for me.

I have the same problem; it has taken me half a day. I tried lots of tools just to take a picture from the CLI, and I still haven't found a solution.

OpenCV just gives me the following error:

HIGHGUI ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV

Hi Zhengchun,
Could you tell me what you tested, and post the log?

Quick update in case people are interested - I have managed to get a stream set up between a laptop and the Jetson using SSH and GStreamer pipelines. For anyone who might need this:

On the Jetson TX2 side of things:


gst-launch-1.0 nvcamerasrc fpsRange="30 30" intent=3 ! nvvidconv flip-method=6 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! omxh264enc control-rate=2 bitrate=4000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! rtph264pay mtu=1400 ! udpsink host=$CLIENT_IP port=5000 sync=false async=false

On your laptop / PC:

gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! queue ! avdec_h264 ! autovideosink sync=false async=false -e

Note - you might hit some errors if you don't use "autovideosink" - I was hitting several "ERROR: pipeline could not be constructed" errors because the pipeline didn't specify the right sink; autovideosink will auto-plug an appropriate sink element.
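If you end up retyping these often, the two command lines can be generated from a small Python helper; sender_cmd() and receiver_cmd() are my own hypothetical names, and the pipeline contents simply mirror the two gst-launch commands above:

```python
def sender_cmd(client_ip, port=5000, bitrate=4000000):
    # TX2-side command: capture, H.264-encode, and stream over RTP/UDP.
    return (
        "gst-launch-1.0 nvcamerasrc fpsRange=\"30 30\" intent=3 ! "
        "nvvidconv flip-method=6 ! "
        "'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, "
        "format=(string)I420, framerate=(fraction)30/1' ! "
        f"omxh264enc control-rate=2 bitrate={bitrate} ! "
        "'video/x-h264, stream-format=(string)byte-stream' ! "
        "h264parse ! rtph264pay mtu=1400 ! "
        f"udpsink host={client_ip} port={port} sync=false async=false"
    )

def receiver_cmd(port=5000):
    # Laptop/PC-side command: receive, depayload, decode, and display.
    return (
        f"gst-launch-1.0 udpsrc port={port} ! "
        "application/x-rtp,encoding-name=H264,payload=96 ! "
        "rtph264depay ! h264parse ! queue ! avdec_h264 ! "
        "autovideosink sync=false async=false -e"
    )

print(sender_cmd("192.168.1.42"))
print(receiver_cmd())
```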

Next steps for me are to get this working with OpenCV and my classifier, so I will update with details.

Does anyone know how I might get these pipelines to work with OpenCV?

Running below…

import numpy as np
import cv2

cap = cv2.VideoCapture("nvcamerasrc fpsRange=\"30 30\" intent=3 ! nvvidconv flip-method=6 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! omxh264enc control-rate=2 bitrate=4000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! rtph264pay mtu=1400 ! udpsink host=$CLIENT_IP port=5000 sync=false async=false")

# Capture frame-by-frame
ret, frame = cap.read()

# Our operations on the frame come here
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Display the resulting frame
if cv2.waitKey(1) & 0xFF == ord('q'):
    pass

# When everything is done, release the capture
cap.release()


Now hitting this error! Tried googling around it but not much luck. I'm using OpenCV, not OpenCV4Tegra!

OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cvtColor, file /home/ubuntu/opencv/modules/imgproc/src/color.cpp, line 9748
Traceback (most recent call last):
File "capture.py", line 11, in
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cv2.error: /home/ubuntu/opencv/modules/imgproc/src/color.cpp:9748: error: (-215) scn == 3 || scn == 4 in function cvtColor

That error complains that the input format is not what it should be for the conversion you specify.
For the capture, you specify I420 format, but for the conversion, you specify RGB input.
Or maybe you’re specifying H264 format for the capture? I don’t know exactly how gstreamer does that format pipeline.
If you need monochrome, try specifying monochrome for the capture in the first place and skip the conversion?
Else, you need to convert from your actual format to monochrome, not RGB to monochrome.
Might be worth it to log/print the format of the frame when you get it, to see what it is you’re getting.
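A minimal sketch of that check, using the same variable names as the snippet earlier in the thread; check_frame() is a hypothetical helper of my own, and the failed capture is simulated here so the snippet runs anywhere:

```python
def check_frame(ret, frame):
    # Guard before cvtColor: report what cap.read() actually returned.
    if not ret or frame is None:
        return "no frame captured - fix the pipeline before calling cvtColor"
    # A color conversion from BGR needs a 3- or 4-channel image.
    return "frame format: shape=%s" % (getattr(frame, "shape", None),)

# Simulate what cap.read() gives when the pipeline never opened:
ret, frame = False, None
print(check_frame(ret, frame))
```

When the pipeline fails to open, cap.read() returns (False, None), and passing None straight into cvtColor is a common way to end up with exactly this kind of assertion failure.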

OK, understood - not actually sure where that gray line came from; I thought I had changed that, so I will go over it again. It will probably be another two days before I can test changes, so I will let you know how it goes!

Definitely want it RGB not grey…

Thanks !

Does anyone know why NVIDIA did not include a ready-made application that launches a camera viewer from the desktop, or does anyone know of a downloadable application that uses the TX2 camera with a GUI to help you explore all its options? Trying to get the camera going with GStreamer pipelines seems overly complicated.

Sorry, I don't know of such a tool. I have seen similar tools from camera vendors. Maybe NVIDIA didn't provide one because they don't sell cameras.

I'm just discovering this thread and it is probably too late for you…but it may help someone who gets here.
The main problem with the example in post #17 is having a pipeline without an appsink.
To get OpenCV to read the frames from VideoCapture (assuming your version of OpenCV has GStreamer support), your pipeline has to end with appsink (your OpenCV app), and the appsink input should be in BGR or GRAY8 format, such as:

VideoCapture cap("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink");

For watching the camera while encoding/sending, you may use the tee plugin to duplicate your stream into two subsequent branches, each starting with the queue plugin. One will end with appsrc [EDIT: appsink as above] for OpenCV visualisation while the other one will encode/send.
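For Python users, the same appsink pipeline can be handed to cv2.VideoCapture as a single string. This is a sketch under the same assumption (OpenCV built with GStreamer support); the hardware-dependent capture loop is shown in comments since it needs the TX2 onboard camera:

```python
# Same appsink pipeline as the C++ example above, as one Python string.
APPSINK_PIPELINE = (
    "nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, "
    "format=(string)I420, framerate=(fraction)30/1 ! "
    "nvvidconv ! video/x-raw, format=(string)BGRx ! "
    "videoconvert ! video/x-raw, format=(string)BGR ! appsink"
)
print(APPSINK_PIPELINE)

# On the TX2 itself you would then do:
# import cv2
# cap = cv2.VideoCapture(APPSINK_PIPELINE)
# while cap.isOpened():
#     ret, frame = cap.read()          # frame arrives already in BGR
#     if not ret:
#         break
#     cv2.imshow("tx2", frame)
#     if cv2.waitKey(1) & 0xFF == ord('q'):
#         break
# cap.release()
```

The key points are the same as in the C++ line: the pipeline must end with appsink, and the last caps before it must be BGR (or GRAY8) so OpenCV gets a format it understands.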