CUDA/Eigen discrepancies

First, it required running:

sudo cp /usr/local/cuda-10.2/targets/aarch64-linux/include/crt/math_functions.hpp /usr/local/cuda-10.2/include/math_functions.hpp
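On JetPack's CUDA 10.2 layout the header lives under `targets/aarch64-linux/include/crt`, while the build expects it at the old top-level path. A minimal sketch of that workaround as a guarded copy, so it is a no-op when the source header is missing or the destination already exists (the helper name `copy_if_missing` is my own, for illustration):

```shell
# copy_if_missing SRC DST: copy SRC to DST only when SRC exists
# and DST is not already present.
copy_if_missing() {
    src="$1"; dst="$2"
    if [ -f "$src" ] && [ ! -f "$dst" ]; then
        cp "$src" "$dst"
    fi
}

# Paths from the JetPack CUDA 10.2 layout used above (run with sudo on device):
# copy_if_missing /usr/local/cuda-10.2/targets/aarch64-linux/include/crt/math_functions.hpp \
#                 /usr/local/cuda-10.2/include/math_functions.hpp
```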

but then the build would still fail with errors such as:

     function "__half::operator unsigned int() const"
            function "__half::operator long long() const"
            function "__half::operator unsigned long long() const"
            function "__half::operator __nv_bool() const"

14 errors detected in the compilation of "/tmp/tmpxft_00000da1_00000000-6_robot_to_gpu.cpp1.ii".
CMake Error at gpu_voxels_urdf_robot_CUDA_TARGET_generated_robot_to_gpu.cu.o.RelWithDebInfo.cmake:280 (message):
  Error generating file
  /home/agx/gpu-voxels/build/packages/gpu_voxels/src/gpu_voxels/robot/urdf_robot/CMakeFiles/gpu_voxels_urdf_robot_CUDA_TARGET.dir//./gpu_voxels_urdf_robot_CUDA_TARGET_generated_robot_to_gpu.cu.o


packages/gpu_voxels/src/gpu_voxels/robot/urdf_robot/CMakeFiles/gpu_voxels_urdf_robot.dir/build.make:1515: recipe for target 'packages/gpu_voxels/src/gpu_voxels/robot/urdf_robot/CMakeFiles/gpu_voxels_urdf_robot_CUDA_TARGET.dir/gpu_voxels_urdf_robot_CUDA_TARGET_generated_robot_to_gpu.cu.o' failed
make[2]: *** [packages/gpu_voxels/src/gpu_voxels/robot/urdf_robot/CMakeFiles/gpu_voxels_urdf_robot_CUDA_TARGET.dir/gpu_voxels_urdf_robot_CUDA_TARGET_generated_robot_to_gpu.cu.o] Error 1
CMakeFiles/Makefile2:4138: recipe for target 'packages/gpu_voxels/src/gpu_voxels/robot/urdf_robot/CMakeFiles/gpu_voxels_urdf_robot.dir/all' failed
make[1]: *** [packages/gpu_voxels/src/gpu_voxels/robot/urdf_robot/CMakeFiles/gpu_voxels_urdf_robot.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....

So minor modifications to the Eigen installed from source did not work either:
sudo nano /usr/local/include/eigen3/Eigen/Core
How can this be built on Jetson? I was able to build it on a desktop dGPU device, but it won't get through on Jetson.

Hi,

We cannot reproduce the error mentioned above with gpu-voxels.
Would you mind sharing your detailed steps with us?

Following the instructions in gpu-voxels, we tried to build the repository with the following commands:

$ sudo cp /usr/local/cuda-10.2/targets/aarch64-linux/include/crt/math_functions.hpp /usr/local/cuda-10.2/include/math_functions.hpp
$ git clone https://github.com/fzi-forschungszentrum-informatik/gpu-voxels
$ cd gpu-voxels/
$ cd build/
$ cmake ..
$ make

But nothing is built and it returns immediately.
Thanks.

@AastaLLL
Thank you for following up!
Steps to reproduce the error

sudo apt install -y libtinyxml-dev libboost-dev libpcl-dev libglew-dev libglm-dev freeglut3-dev libcppunit-dev doxygen qt4-default
sudo apt-get purge cmake && sudo apt autoremove && wget https://github.com/Kitware/CMake/releases/download/v3.18.4/cmake-3.18.4.tar.gz && tar -zxvf cmake-3.18.4.tar.gz && cd cmake-3.18.4 &&  ./bootstrap && make && sudo make install
wget https://gitlab.com/libeigen/eigen/-/archive/3.3.7/eigen-3.3.7.tar.bz2
tar xvf eigen-3.3.7.tar.bz2
sudo apt remove --purge libeigen3-dev
sudo apt autoremove
cd eigen-3.3.7/
mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=/usr/local
sudo make install
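To confirm which Eigen the build will then pick up from `/usr/local`, the version macros in Eigen's `Macros.h` can be parsed. A small sketch (the helper name `eigen_version` is my own; the header path assumes the default install prefix used above):

```shell
# eigen_version HEADER: print "world.major.minor" parsed from Eigen's
# EIGEN_*_VERSION defines (found in Eigen/src/Core/util/Macros.h).
eigen_version() {
    awk '/#define EIGEN_WORLD_VERSION/ {w=$3}
         /#define EIGEN_MAJOR_VERSION/ {mj=$3}
         /#define EIGEN_MINOR_VERSION/ {mn=$3}
         END {printf "%s.%s.%s\n", w, mj, mn}' "$1"
}

# e.g. eigen_version /usr/local/include/eigen3/Eigen/src/Core/util/Macros.h
# should print 3.3.7 after the install above
```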
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
sudo apt update -y
sudo apt install ros-melodic-desktop-full -y
source /opt/ros/melodic/setup.bash
sudo apt install python-rosdep python-rosinstall python-rosinstall-generator python-wstool build-essential -y
sudo apt install python-rosdep -y
sudo rosdep init
rosdep update
git clone https://github.com/fzi-forschungszentrum-informatik/gpu-voxels
cd gpu-voxels/
cd build/
cmake ..
make

However, it might be more convenient to split the whole thing into 2-3 steps:

  1. Install dependencies
curl -s https://raw.githubusercontent.com/AndreV84/Jetson/master/gpu_voxel.test | sudo bash
  2. Install ROS
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
sudo apt update -y
sudo apt install ros-melodic-desktop-full -y
source /opt/ros/melodic/setup.bash
sudo apt install python-rosdep python-rosinstall python-rosinstall-generator python-wstool build-essential -y
sudo apt install python-rosdep -y
sudo rosdep init
rosdep update
  3. Attempt to build gpu_voxels, as you showed in your post
git clone https://github.com/fzi-forschungszentrum-informatik/gpu-voxels
cd gpu-voxels/
mkdir build
cd build/
cmake ..
make

Actually, it seems to have worked after the N-th attempt.
However, the complication for now is that although make -j8 works, there are still details to be figured out.
Does it work on your side?

Hi,

Thanks for the detailed steps.
We will update you on the status in our environment later.

Hi,

Confirmed that the procedure shared here also works in our environment.
Thanks for sharing.

Will this work in a similar manner?


Could it be used to visualize ROS / Intel RealSense D435 cameras?

@AastaLLL
Steps to reproduce the issue on Jetson:

git clone https://github.com/NVIDIA/gvdb-voxels
cd gvdb-voxels
mkdir build
cd build
cmake -DCMAKE_CUDA_COMPILER=/usr/local/cuda-10.2/bin/nvcc -DCMAKE_CUDA_ARCHITECTURES=72 ..
make -j8
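For reference, `CMAKE_CUDA_ARCHITECTURES=72` matches Xavier's Volta iGPU; other Jetson modules need a different value. A small sketch of that mapping (the helper name `sm_for_jetson` is my own; the values follow NVIDIA's published compute capabilities):

```shell
# sm_for_jetson MODULE: print the CUDA architecture value
# (compute capability x10) for a few Jetson modules.
sm_for_jetson() {
    case "$1" in
        nano)   echo 53 ;;  # Maxwell (TX1-class iGPU)
        tx2)    echo 62 ;;  # Pascal
        xavier) echo 72 ;;  # Volta
        orin)   echo 87 ;;  # Ampere
        *)      echo "unknown module: $1" >&2; return 1 ;;
    esac
}

# e.g. cmake -DCMAKE_CUDA_ARCHITECTURES=$(sm_for_jetson xavier) ..
```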

This voxel library also builds and works.


However, our purpose is to get an Intel RealSense D435 ROS topic visualized in either the former or the latter tool, or otherwise to find a way to visualize the camera point cloud.
The attached image above is from x86_64.
On Xavier AGX it also builds.

Hi,

Thanks for sharing. It’s really cool.
Does the RealSense camera on Xavier work with these two tools?

Thanks.

Thank you for following up!
We are working on getting the D435/D435i/D455 working with gpu_voxels.


It is partially supported through the realsense ROS node.
However, it might need some adjustments; the maintainers previously confirmed it working with the D435.

Regarding gvdb-voxels: per the author’s response, it does not support ROS / RGB-D cameras, only the GVDB format.
So unless some conversion from .ply, .pcd, etc. to GVDB happens, it won’t work.
@AastaLLL
Moreover, while gpu_voxels runs on Jetson, specifically the visualizer,
we might hit another complication there, probably due to the shared-memory design of ARM devices.
The visualizer reads from shared memory after another binary, e.g. distance_ros, sends data to it. If the design of shared-memory access differs, reading from shared memory in order to visualize might become a problem.
On x86_64 we finally got it to reflect a live stream from the RealSense through ROS with gpu_voxels.

Does it look like we couldn’t overcome this error?
It emerges when the visualizer tries to read from another process that delivers outputs to shared memory. On x86_64 we could read them from there, but not on Xavier.

~/gpu-voxels/export$ ./bin/gpu_voxels_visualizer 
<2020-10-14 12:20:20.291> Visualization(Info)::main: Starting the gpu_voxels Visualizer.
<2020-10-14 12:20:20.291> Visualization(Info) Visualizer::initializeVisualizer: Trying to open the Visualizer shared memory segment created by a process using VisProvider.
<2020-10-14 12:20:20.291> Visualization(Warning) XMLInterpreter::getUnitScale: No unit scale defined. Using 1 cm instead.
<2020-10-14 12:20:20.526> Visualization(Info) Visualizer::initGL: Using GPU Device 0: "Xavier" with compute capability 7.2
<2020-10-14 12:20:20.608> Visualization(Info) Visualizer::initGL: Using OpenGL Version: 4.6.0 NVIDIA 32.4.3
<2020-10-14 12:20:20.612> Visualization(Info)::main: Number of voxel maps that will be drawn: 1
<2020-10-14 12:20:20.612> Visualization(Error)::registerVoxelmapFromSharedMemory: Couldn't find mem_segment voxel_map_handler_dev_pointer_0
<2020-10-14 12:20:20.612> Visualization(Info)::main: Number of voxel lists that will be drawn: 5
<2020-10-14 12:20:20.612> Visualization(Info)::registerVoxellistFromSharedMemory: Providing a voxellist called "myPointcloud"
<2020-10-14 12:20:20.613> Visualization(Warning) XMLInterpreter::getDataContext: Occupancy_threshold of /myPointcloud is zero.
<2020-10-14 12:20:20.615> Visualization(Warning) XMLInterpreter::getDataContext: Occupancy_threshold of /voxellist_0 is zero.
<2020-10-14 12:20:20.616> Visualization(Warning) Visualizer::registerVoxelList: No context found for voxel list myPointcloud. Using the default context.
<2020-10-14 12:20:20.617> Visualization(Info)::registerVoxellistFromSharedMemory: Providing a voxellist called "mySweptVolume"
<2020-10-14 12:20:20.618> Visualization(Warning) XMLInterpreter::getDataContext: Occupancy_threshold of /mySweptVolume is zero.
<2020-10-14 12:20:20.619> Visualization(Warning) XMLInterpreter::getDataContext: Occupancy_threshold of /voxellist_1 is zero.
<2020-10-14 12:20:20.620> Visualization(Warning) Visualizer::registerVoxelList: No context found for voxel list mySweptVolume. Using the default context.
<2020-10-14 12:20:20.620> Visualization(Info)::registerVoxellistFromSharedMemory: Providing a voxellist called "myPointcloud"
<2020-10-14 12:20:20.621> Visualization(Warning) XMLInterpreter::getDataContext: Occupancy_threshold of /myPointcloud is zero.
<2020-10-14 12:20:20.622> Visualization(Warning) XMLInterpreter::getDataContext: Occupancy_threshold of /voxellist_2 is zero.
<2020-10-14 12:20:20.623> Visualization(Warning) Visualizer::registerVoxelList: No context found for voxel list myPointcloud. Using the default context.
<2020-10-14 12:20:20.624> Visualization(Info)::registerVoxellistFromSharedMemory: Providing a voxellist called "countingVoxelList"
<2020-10-14 12:20:20.624> Visualization(Warning) XMLInterpreter::getDataContext: Occupancy_threshold of /countingVoxelList is zero.
<2020-10-14 12:20:20.625> Visualization(Warning) XMLInterpreter::getDataContext: Occupancy_threshold of /voxellist_3 is zero.
<2020-10-14 12:20:20.626> Visualization(Warning) Visualizer::registerVoxelList: No context found for voxel list countingVoxelList. Using the default context.
<2020-10-14 12:20:20.627> Visualization(Info)::registerVoxellistFromSharedMemory: Providing a voxellist called "countingVoxelListFiltered"
<2020-10-14 12:20:20.627> Visualization(Warning) XMLInterpreter::getDataContext: Occupancy_threshold of /countingVoxelListFiltered is zero.
<2020-10-14 12:20:20.629> Visualization(Warning) XMLInterpreter::getDataContext: Occupancy_threshold of /voxellist_4 is zero.
<2020-10-14 12:20:20.630> Visualization(Warning) Visualizer::registerVoxelList: No context found for voxel list countingVoxelListFiltered. Using the default context.
<2020-10-14 12:20:20.631> Visualization(Info)::main: Number of octrees that will be drawn: 2
<2020-10-14 12:20:20.631> Visualization(Info)::registerOctreeFromSharedMemory: Providing an Octree called "octree_0".
<2020-10-14 12:20:20.631> Visualization(Warning) XMLInterpreter::getDataContext: Occupancy_threshold of /octree_0 is zero.
<2020-10-14 12:20:20.632> Visualization(Warning) XMLInterpreter::getDataContext: Occupancy_threshold of /octree_0 is zero.
<2020-10-14 12:20:20.633> Visualization(Warning) Visualizer::registerOctree: No context found for octree octree_0. Using the default context.
<2020-10-14 12:20:20.634> Visualization(Info) Visualizer::registerOctree: The initial super voxel size of the octree could be loaded and will be used. Dimension of super voxel: 64
<2020-10-14 12:20:20.634> Visualization(Info)::registerOctreeFromSharedMemory: Providing an Octree called "octree_1".
<2020-10-14 12:20:20.634> Visualization(Warning) XMLInterpreter::getDataContext: Occupancy_threshold of /octree_1 is zero.
<2020-10-14 12:20:20.635> Visualization(Warning) XMLInterpreter::getDataContext: Occupancy_threshold of /octree_1 is zero.
<2020-10-14 12:20:20.636> Visualization(Warning) Visualizer::registerOctree: No context found for octree octree_1. Using the default context.
<2020-10-14 12:20:20.636> Visualization(Info) Visualizer::registerOctree: The initial super voxel size of the octree could be loaded and will be used. Dimension of super voxel: 64
<2020-10-14 12:20:20.637> Visualization(Info)::main: Number of Primitive Arrays that will be drawn: 1
<2020-10-14 12:20:20.637> Visualization(Info)::registerPrimitiveArrayFromSharedMemory: Providing a Primitive Array called "measurementPoints".
<2020-10-14 12:20:20.637> Visualization(Warning) XMLInterpreter::getDataContext: Occupancy_threshold of /measurementPoints is zero.
<2020-10-14 12:20:20.638> Visualization(Warning) XMLInterpreter::getDataContext: Occupancy_threshold of /primitive_array_0 is zero.
<2020-10-14 12:20:20.639> Visualization(Warning) Visualizer::registerPrimitiveArray: No context found for Primitive Array measurementPoints. Using the default context.
<2020-10-14 12:20:20.775> SharedMemManager(Error) SharedMemoryManagerPrimitiveArrays::getPrimitivePositions: Primitive Arrays Handler could not be opened! Error was 801
<2020-10-14 12:20:20.775> Visualization(Error) Visualizer::drawPrimitivesFromSharedMem: It was not possible to load primitive data from shared memory.
terminate called after throwing an instance of 'thrust::system::system_error'
  what():  parallel_for failed: cudaErrorNotSupported: operation not supported
Aborted (core dumped)

We used the steps from https://www.gpu-voxels.org/release-1-3-0-merging-and-morphing/
so the visualizer would try to read from the shared memory written by the other process.

@AastaLLL:
From the gpu_voxels maintainers:
"
The Jetson devices are based on an ARM architecture and lack support for the type of shared-memory between processes used by the visualizer. Therefore the gpu_voxels_visualizer will not work on Jetson devices for the foreseeable future.

The alternative is to publish data, e.g. for visualization in RViz, as mentioned in #65. "

Thanks for sharing the updated information with us.

Yes, I am trying to rebuild it again with updated versions.
Once the schedule becomes a little more relaxed, I will try it with the most recent JetPack / MediaPipe,
also with direct CSI.

Can we build it on a host PC with CUDA 11.1? It fails with:

home/nvidia/gpu-voxels/packages/gpu_voxels/src/gpu_voxels/octree/PointCloud.cu:22:10: fatal error: thrust/system/cuda/detail/cub/device/device_radix_sort.cuh: No such file or directory
 #include <thrust/system/cuda/detail/cub/device/device_radix_sort.cuh>
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.

I can see the file is there:

/usr/local/cuda-11.1/targets/x86_64-linux/include/cub/device/device_radix_sort.cuh
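In CUDA 11 the cub headers moved out of thrust's internal tree and are shipped as top-level `<cub/...>` includes, which is why `device_radix_sort.cuh` exists under `include/cub/` but the legacy `thrust/system/cuda/detail/cub/` path does not. One possible (untested) first step is to rewrite the include; a sketch, with the helper name `fix_cub_include` my own, and note that further namespace changes may still be needed beyond the include path:

```shell
# fix_cub_include FILE: rewrite the legacy thrust-internal cub include
# to the top-level <cub/...> path that CUDA 11 ships.
fix_cub_include() {
    sed -i \
      's|thrust/system/cuda/detail/cub/device/device_radix_sort.cuh|cub/device/device_radix_sort.cuh|' \
      "$1"
}

# e.g. fix_cub_include packages/gpu_voxels/src/gpu_voxels/octree/PointCloud.cu
```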