API/library for efficiently re-orienting 360-degree camera frames on TX1

I am trying to correct the orientation of a 360-degree camera frame using roll, pitch and yaw estimates coming from my C++ Extended Kalman Filter (EKF) code. I am interested in using libraries and packages such as OpenCV, NVIDIA VisionWorks or OpenVR. Due to project requirements, this needs to run on a Jetson TX1, and I am working on L4T R28.2.

To give more context: I have a camera that provides 360-degree images and has a set of IMUs integrated into it. There is an initial frame of reference, and the EKF calculates roll, pitch and yaw relative to this frame of reference. As the camera's orientation changes, I want to use the EKF outputs to correct the orientation of the 360-degree video frames coming from the camera.

  1. So far I haven’t seen any API or library providing this kind of function for 360-degree cameras. I am looking for suggestions suitable for the Jetson TX1.

  2. If this is not readily available, is building such a function from scratch using CUDA a good approach? (I am new to CUDA and GPGPU programming; suggestions are welcome.)

Hello skr_robo,

I would like to understand more details of your use case. For example: what is the input resolution of the 360-degree camera, and what output result do you expect? What is the sample rate of the IMUs, and are you able to sync the samples to each input frame?

Hello JerryChang,

The resolution is 1280 × 720 and the frame rate is 25 fps. The IMUs provide an estimate every 3 ms to 5 ms. I can sync the IMU readings with each input frame, although I haven’t written that part of the code yet. To give you more context, I have explained the problem further below:

The camera is mounted on a mobile platform; as the platform moves, the orientation (roll, pitch and yaw) of the camera changes. The initial orientation of the camera is used to fix a global coordinate system, and the change in the camera's orientation is estimated as roll, pitch and yaw with respect to this global coordinate system. As the orientation of the camera changes, the orientation of the scene in a frame changes with it. The idea is to align each frame back to the global coordinate system so that all scenes are in alignment.

As an example, imagine fixing a global coordinate system in which (x, y, z) indicates the position of each pixel of the 360-degree video: the pixels lie on the surface of a sphere, and the origin of the global coordinate system is at the center of this sphere. As the camera changes orientation, this sphere rotates, so a pixel corresponding to a particular object ‘X’ changes its coordinates after the rotation. Put another way, the local coordinate system has deviated from the global coordinate system by the roll, pitch and yaw. The objective is to build a rotation matrix from the roll, pitch and yaw values estimated by the EKF and use it to rotate the sphere back to its original position, bringing it back into alignment with the global coordinate system.

I am looking for a function that can do this on the GPU.
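To make the math concrete, here is a rough, untested CPU-side sketch of the remap I have in mind. It assumes the frames are in equirectangular projection, a single gray channel, nearest-neighbor sampling, and the Z-Y-X (yaw-pitch-roll) Euler convention; the convention and all names here are illustrative and would have to be matched to the actual EKF output.

```cpp
#include <cmath>
#include <cstdint>

// 3x3 rotation matrix, row-major.
struct Mat3 { float m[9]; };

// R = Rz(yaw) * Ry(pitch) * Rx(roll) -- Z-Y-X convention (an assumption;
// adjust to whatever convention the EKF uses).
Mat3 eulerToMatrix(float roll, float pitch, float yaw)
{
    const float cr = std::cos(roll),  sr = std::sin(roll);
    const float cp = std::cos(pitch), sp = std::sin(pitch);
    const float cy = std::cos(yaw),   sy = std::sin(yaw);
    return { { cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr,
               sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr,
               -sp,   cp*sr,            cp*cr } };
}

// For every pixel of the stabilized output frame: convert it to a unit
// viewing direction on the sphere (global frame), rotate that direction
// into the camera's local frame with R^T, and sample the captured
// equirectangular frame there.
void reorientEquirect(const std::uint8_t* src, std::uint8_t* dst,
                      int w, int h, const Mat3& R)
{
    const float kPi = 3.14159265358979f;
    for (int v = 0; v < h; ++v) {
        for (int u = 0; u < w; ++u) {
            float lon = (u + 0.5f) / w * 2.0f * kPi - kPi;  // [-pi, pi)
            float lat = 0.5f * kPi - (v + 0.5f) / h * kPi;  // [+pi/2, -pi/2]
            float d[3] = { std::cos(lat) * std::cos(lon),
                           std::cos(lat) * std::sin(lon),
                           std::sin(lat) };
            // d_local = R^T * d_global (transpose = inverse rotation).
            float x = R.m[0]*d[0] + R.m[3]*d[1] + R.m[6]*d[2];
            float y = R.m[1]*d[0] + R.m[4]*d[1] + R.m[7]*d[2];
            float z = R.m[2]*d[0] + R.m[5]*d[1] + R.m[8]*d[2];
            // Back to pixel coordinates in the captured frame.
            float srcLon = std::atan2(y, x);
            float srcLat = std::asin(std::fmax(-1.0f, std::fmin(1.0f, z)));
            int su = (int)((srcLon + kPi) / (2.0f * kPi) * w) % w;
            int sv = (int)((0.5f * kPi - srcLat) / kPi * h);
            if (sv < 0) sv = 0;
            if (sv > h - 1) sv = h - 1;
            dst[v * w + u] = src[sv * w + su];
        }
    }
}
```

If OpenCV is usable, I imagine the same per-pixel map could also be precomputed once per rotation and applied with cv::remap (or cv::cuda::remap on the GPU).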

Hello skr_robo,

You would need to implement this feature yourself on the GPU. For good results, you should also make sure the IMU sample data and the camera frame timestamps are synchronized.
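A bare-bones sketch of such a GPU implementation, with one thread per output pixel, nearest-neighbor sampling, and a gray equirectangular frame already resident in device memory, might look like the following; this is an untested outline, not production code, and the rotation convention must match your EKF.

```cuda
#include <cuda_runtime.h>
#include <cstdint>

struct Mat3 { float m[9]; };  // row-major rotation matrix

// One thread per output pixel: rotate the pixel's viewing direction into
// the camera's local frame with R^T and fetch the source pixel.
__global__ void reorientKernel(const std::uint8_t* __restrict__ src,
                               std::uint8_t* __restrict__ dst,
                               int w, int h, Mat3 R)
{
    const float kPi = 3.14159265f;
    int u = blockIdx.x * blockDim.x + threadIdx.x;
    int v = blockIdx.y * blockDim.y + threadIdx.y;
    if (u >= w || v >= h) return;

    // Output pixel -> unit viewing direction on the sphere (global frame).
    float lon = (u + 0.5f) / w * 2.0f * kPi - kPi;
    float lat = 0.5f * kPi - (v + 0.5f) / h * kPi;
    float dx = cosf(lat) * cosf(lon);
    float dy = cosf(lat) * sinf(lon);
    float dz = sinf(lat);

    // d_local = R^T * d_global.
    float x = R.m[0]*dx + R.m[3]*dy + R.m[6]*dz;
    float y = R.m[1]*dx + R.m[4]*dy + R.m[7]*dz;
    float z = R.m[2]*dx + R.m[5]*dy + R.m[8]*dz;

    // Back to source pixel coordinates (nearest neighbor).
    float srcLon = atan2f(y, x);
    float srcLat = asinf(fminf(fmaxf(z, -1.0f), 1.0f));
    int su = (int)((srcLon + kPi) / (2.0f * kPi) * w) % w;
    int sv = min(h - 1, max(0, (int)((0.5f * kPi - srcLat) / kPi * h)));
    dst[v * w + u] = src[sv * w + su];
}

// Host-side launch for one frame (src/dst are device pointers).
void reorient(const std::uint8_t* dSrc, std::uint8_t* dDst,
              int w, int h, Mat3 R)
{
    dim3 block(16, 16);
    dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
    reorientKernel<<<grid, block>>>(dSrc, dDst, w, h, R);
}
```

At 1280 × 720 and 25 fps this should be well within the TX1 GPU's capability; bilinear sampling and multi-channel fetches would be the obvious next refinements.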
Thanks