I was able to run the TAO Pipeline (l4t) on JetPack 4.5, but unfortunately I ran into a problem using the camera (Raspberry Pi Camera Module v2.1, IMX219) in the gaze and emotion estimation examples:
Opening in BLOCKING MODE
Available Sensor modes :
Resolution: 3264 x 2464 ; Framerate = 21.000000; Analog Gain Range Min 1.000000, Max 10.625000, Exposure Range Min 13000, Max 683709000
Resolution: 3264 x 1848 ; Framerate = 28.000001; Analog Gain Range Min 1.000000, Max 10.625000, Exposure Range Min 13000, Max 683709000
Resolution: 1920 x 1080 ; Framerate = 29.999999; Analog Gain Range Min 1.000000, Max 10.625000, Exposure Range Min 13000, Max 683709000
Resolution: 1640 x 1232 ; Framerate = 29.999999; Analog Gain Range Min 1.000000, Max 10.625000, Exposure Range Min 13000, Max 683709000
Resolution: 1280 x 720 ; Framerate = 59.999999; Analog Gain Range Min 1.000000, Max 10.625000, Exposure Range Min 13000, Max 683709000
DEFAULT no IOCTL called
Opening in BLOCKING MODE
(NvCameraUtils) Error InvalidState: Mutex already initialized (in Mutex.cpp, function initialize(), line 41)
(Argus) Error InvalidState: (propagating from src/rpc/socket/client/ClientSocketManager.cpp, function open(), line 54)
(Argus) Error InvalidState: (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function openSocketConnection(), line 258)
(Argus) Error InvalidState: Cannot create camera provider (in src/rpc/socket/client/SocketClientDispatch.cpp, function createCameraProvider(), line 102)
ArgusV4L2_Open failed: Invalid argument
Opening in BLOCKING MODE
Unsupported buffer type
VIDEOIO ERROR: V4L: device /dev/video0: Unable to query number of channels
[Jarvis] [E] Could not open video source /dev/video0
demo_emotion_output.txt (12.9 KB)
demo.conf:
# Path to device handle; ensure config.sh can see this handle
video_path=/dev/video0
fps=59
# Boolean indicating whether handle is pre-recorded
is_video_path_file=false
# Desired resolution (width, height, channels)
resolution_whc=1280,720,3
# Show visualization
visualization=true
# Leverage API to send single decoded frames to Pipeline
use_decoded_image_api=false
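For reference, here is a small sketch (Python, helper name hypothetical) that builds an nvarguscamerasrc GStreamer string from the same values as demo.conf above. Whether the pipeline's video_path would accept a GStreamer string instead of a device node is an assumption on my part; this is only meant to make the mapping between demo.conf and the sensor mode explicit.

```python
# Hypothetical helper: compose an nvarguscamerasrc pipeline string from the
# demo.conf values (resolution_whc=1280,720,3 and fps=59 map to sensor mode 4
# in the mode list printed above). BGR output is what OpenCV-style consumers
# typically expect; the exact caps the TAO pipeline wants are an assumption.
def argus_pipeline(width: int, height: int, fps: int, sensor_mode: int = 4) -> str:
    return (
        f"nvarguscamerasrc sensor_mode={sensor_mode} ! "
        f"video/x-raw(memory:NVMM),width={width},height={height},"
        f"framerate={fps}/1,format=NV12 ! "
        "nvvidconv ! video/x-raw,format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! appsink"
    )

print(argus_pipeline(1280, 720, 59))
```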
On the host and in the container, the camera works great:
gst-launch-1.0 nvarguscamerasrc sensor_mode=0 ! 'video/x-raw(memory:NVMM),width=3264, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e
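Since the gst-launch pipeline above works, one workaround sketch (assuming OpenCV on JetPack was built with GStreamer support, which I have not verified for the container) would be to bypass the failing V4L2 path entirely and hand a consumer the same nvarguscamerasrc pipeline. This is only a diagnostic sketch, not the TAO pipeline's own API:

```python
# Open the CSI camera through GStreamer rather than the V4L2 /dev/video0 path
# that fails in the log above. Off-device (no camera, or OpenCV without
# GStreamer) this simply reports failure instead of crashing.
GST_PIPELINE = (
    "nvarguscamerasrc sensor_mode=0 ! "
    "video/x-raw(memory:NVMM),width=3264,height=2464,framerate=21/1,format=NV12 ! "
    "nvvidconv ! video/x-raw,format=BGRx ! "
    "videoconvert ! video/x-raw,format=BGR ! appsink"
)

def try_open(pipeline: str) -> bool:
    """Return True if OpenCV can open the camera via GStreamer,
    False when OpenCV/GStreamer is unavailable or no camera is attached."""
    try:
        import cv2  # OpenCV with GStreamer support assumed on JetPack
    except ImportError:
        return False
    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
    ok = cap.isOpened()
    cap.release()
    return ok

if __name__ == "__main__":
    print("camera opened:", try_open(GST_PIPELINE))
```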
Am I doing something wrong?