How to use v4l2 camera instead of zed camera to run orb demo

I don't have a ZED camera, only a v4l2 camera. I know ORB can support a monocular camera, so I would like to know how to modify the ORB demo so it runs with a v4l2 camera. Thanks!

What is your camera? Could you send me a link to it, or at least its output format?

For example, if your camera is MJPEG:

```json
"input_images": {
  "pipeline": {
    "pipeline": "gst-launch-1.0 -v v4l2src device=/dev/video0 ! avdec_mjpeg ! videoconvert ! video/x-raw,format=RGB,height=480,width=1280,framerate=30/1 ! appsink name=acquired_image"
  }
}
```
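If you need to vary the format or resolution, it can help to assemble the pipeline string programmatically. This is a hedged sketch: the `avdec_mjpeg` decoder and the `acquired_image` sink name are taken from the snippet above, and the helper itself is illustrative, not part of Isaac:

```python
def build_gst_pipeline(device="/dev/video0", decoder="avdec_mjpeg",
                       fmt="RGB", width=1280, height=480, fps=30,
                       sink_name="acquired_image"):
    """Assemble a gst-launch style pipeline string for a V4L2 camera.

    The element names mirror the forum snippet above and are assumptions;
    for a raw YUYV camera, pass decoder=None and let videoconvert handle
    the conversion.
    """
    caps = f"video/x-raw,format={fmt},width={width},height={height},framerate={fps}/1"
    elements = [f"v4l2src device={device}"]
    if decoder:
        elements.append(decoder)
    elements += ["videoconvert", caps, f"appsink name={sink_name}"]
    return " ! ".join(elements)

print(build_gst_pipeline())
```

Dropping the decoder (`decoder=None`) yields the shorter pipeline you would use for a raw-format camera.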

```
Driver Info (not using libv4l2):
    Driver name    : uvcvideo
    Card type      : USB2.0 Camera: USB2.0 Camera
    Bus info       : usb-0000:00:14.0-1
    Driver version : 5.4.44
    Capabilities   : 0x84A00001
        Video Capture
        Metadata Capture
        Extended Pix Format
        Device Capabilities
    Device Caps    : 0x04200001
        Video Capture
        Extended Pix Format
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
    Width/Height      : 640/480
    Pixel Format      : 'YUYV'
    Field             : None
    Bytes per Line    : 1280
    Size Image        : 614400
    Colorspace        : sRGB
    Transfer Function : Default (maps to sRGB)
    YCbCr/HSV Encoding: Default (maps to ITU-R 601)
    Quantization      : Default (maps to Limited Range)
    Flags             :
Crop Capability Video Capture:
    Bounds      : Left 0, Top 0, Width 640, Height 480
    Default     : Left 0, Top 0, Width 640, Height 480
    Pixel Aspect: 1/1
Selection: crop_default, Left 0, Top 0, Width 640, Height 480
Selection: crop_bounds, Left 0, Top 0, Width 640, Height 480
Streaming Parameters Video Capture:
    Capabilities     : timeperframe
    Frames per second: 30.000 (30/1)
    Read buffers     : 0
brightness 0x00980900 (int)  : min=-127 max=127 step=1 default=0 value=0
contrast 0x00980901 (int)    : min=0 max=127 step=1 default=70 value=70
saturation 0x00980902 (int)  : min=0 max=255 step=1 default=65 value=65
hue 0x00980903 (int)         : min=-16000 max=16000 step=1 default=0 value=0
white_balance_temperature_auto 0x0098090c (bool) : default=1 value=1
gamma 0x00980910 (int)       : min=16 max=500 step=1 default=110 value=110
```
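The control lines at the bottom of that dump follow a fixed `v4l2-ctl` pattern, so they can be parsed mechanically, e.g. to validate a value against its range before setting it. A small illustrative sketch (not part of the Isaac demo):

```python
import re

# Matches v4l2-ctl control lines such as:
#   brightness 0x00980900 (int) : min=-127 max=127 step=1 default=0 value=0
CTRL_RE = re.compile(r"^(\w+)\s+0x[0-9a-f]+\s+\((int|bool)\)\s*:\s*(.*)$")

def parse_controls(dump):
    """Parse v4l2-ctl control lines into {name: {field: int}} dicts."""
    controls = {}
    for line in dump.splitlines():
        m = CTRL_RE.match(line.strip())
        if not m:
            continue
        name, _, fields = m.groups()
        controls[name] = {k: int(v) for k, v in
                          (pair.split("=") for pair in fields.split())}
    return controls

dump = """\
brightness 0x00980900 (int) : min=-127 max=127 step=1 default=0 value=0
gamma 0x00980910 (int) : min=16 max=500 step=1 default=110 value=110
"""
ctrls = parse_controls(dump)
print(ctrls["brightness"]["min"], ctrls["gamma"]["default"])  # → -127 110
```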

'YUYV' is supported by the default V4L2 Isaac code. Can you try to run the V4L2 sample located in
and tell me if it works, please?

So you just have to change the camera source from ZED to V4L2.

In the ORB demo JSON file, change the ZED node to a V4L2 node, and do the same in the edges and config sections.

Also change ZED to "sensors:v4l2_camera" in the modules section of the JSON and in the build file.
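A sketch of what those edits could look like. The node, component, edge, and parameter names below are best-effort guesses, not copied from the actual demo file; check the shipped V4L2 sample app for the exact names:

```json
{
  "modules": [
    "sensors:v4l2_camera"
  ],
  "graph": {
    "nodes": [
      {
        "name": "camera",
        "components": [
          { "name": "v4l2", "type": "isaac::V4L2Camera" }
        ]
      }
    ],
    "edges": [
      {
        "source": "camera/v4l2/frame",
        "target": "extract_visualize_orb/extract_visualize_orb/image"
      }
    ]
  },
  "config": {
    "camera": {
      "v4l2": {
        "device_id": 0,
        "rows": 480,
        "cols": 640,
        "rate_hz": 30
      }
    }
  }
}
```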

Thank you so much, I understand, I will try now.

I can run the ORB demo successfully, but I can't find the KeyFrameTrajectory.txt file. Where can I find it? Thanks!

Nice! What do you want to do with it?

Can I get the keyframe trajectory generated by ORB?

Looking at the demo, you can only get:
"source": "extract_visualize_orb/extract_visualize_orb/coordinates",
"source": "extract_visualize_orb/extract_visualize_orb/features",

What do you want to do with this?

I want to get the trajectory diagram of the camera.

Looking at the doc:

1. Extract feature orientation
2. Extract descriptors

## Types Provided by the Gem

### Keypoints

A  `Keypoint`  has the following properties:

* `x` ,  `y` : Keypoint coordinates in the image it was extracted from. Note: Since there are multiple scale levels, use the utility function  `GetCoordsAtTopLevel`  to get the coordinates at the topmost (initial) image level.
* `response` : “Strength” of a feature. A higher score means a higher local contrast.
* `angle` : Orientation angle (in radians)
* `scale` : Radius of the support region (in pixels)
* `level` : Scale level from which the feature was extracted. Starts at zero.

The  `Keypoints`  type is a vector of keypoints.
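The level/coordinate relationship can be illustrated with a short sketch. Assuming a standard image pyramid with a fixed per-level downscale factor (the 1.2 value mirrors the classic ORB default and is an assumption here, as is the helper name; the real `GetCoordsAtTopLevel` signature is in the Gem's headers):

```python
def coords_at_top_level(x, y, level, scale_factor=1.2):
    """Map keypoint coordinates from pyramid `level` back to level 0.

    Assumes each pyramid level downsamples the previous one by
    `scale_factor`, so a point at level L maps back to the topmost
    (initial) image by multiplying by scale_factor ** L. The 1.2
    factor is the classic ORB default, not necessarily the Gem's.
    """
    s = scale_factor ** level
    return x * s, y * s

print(coords_at_top_level(100.0, 50.0, 0))  # level 0 is unchanged
print(coords_at_top_level(100.0, 50.0, 2))  # scaled up by 1.2**2
```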

### Descriptors

A  `Descriptor`  has the following properties:

* `id` : An ID that allows connecting a descriptor back to the keypoint from which it originated
* `elements` : The binary descriptor data, packed into a small buffer of elements.

The  `Descriptors`  type is a vector of descriptors.
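To compare descriptors between frames, ORB-style binary descriptors are typically matched by Hamming distance over the packed elements. A minimal illustrative sketch; the `(id, bytes)` layout mimics the `id`/`elements` fields described above but is an assumption, not the Gem's actual binary format:

```python
def hamming(a, b):
    """Hamming distance between two equal-length byte buffers."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def match_descriptors(query, train, max_dist=64):
    """Brute-force nearest-neighbour matching of binary descriptors.

    `query`/`train` are lists of (id, bytes) pairs. Returns
    (query_id, train_id, distance) triples for matches whose
    Hamming distance is within max_dist.
    """
    matches = []
    for qid, qdesc in query:
        best = min(train, key=lambda t: hamming(qdesc, t[1]), default=None)
        if best is not None:
            d = hamming(qdesc, best[1])
            if d <= max_dist:
                matches.append((qid, best[0], d))
    return matches

q = [(0, bytes([0b10101010, 0xFF]))]
t = [(7, bytes([0b10101010, 0xFE])), (8, bytes([0x00, 0x00]))]
print(match_descriptors(q, t))  # → [(0, 7, 1)]
```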

So I don't think (I may be wrong) that you can get the trajectory of the camera. If you want that, you should use VIO; ORB is more about textured object detection.

GetCoordsAtTopLevel and angle could do the job if the environment is not moving, but I think it would still be a lot less accurate than VIO.
Maybe VIO and 3D Object Pose Estimation with AutoEncoder are better suited to your project, but I don't have enough details to be sure about that ;)