Hi,
Thank you for your work on Isaac Gym. It’s impressive work.
I want to ask some questions about point clouds. I have noticed some APIs that look helpful for getting point clouds, but could you explain the steps in more detail? Are there any relevant examples?
In addition, how can the point cloud be rendered and viewed in the simulation environment after it is obtained?
We have an internal example of converting depth cam data to point clouds. Would this be useful? It is quite an old example and doesn’t leverage the new tensor API or full GPU acceleration, which could be a big benefit. I can still dig it up for you if you think it would be useful.
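For reference while that example is dug up, converting a depth image to a point cloud is usually standard pinhole back-projection. Below is a minimal NumPy sketch of that idea, not the internal example itself; the intrinsics `fx`, `fy`, `cx`, `cy` are assumed to come from your camera configuration, and the depth image is assumed to be in meters:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image into camera-frame 3-D points
    using the pinhole model; pixels with depth <= 0 are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Tiny usage example: a flat 2x2 depth image one meter away.
pts = depth_to_pointcloud(np.ones((2, 2)), fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(pts.shape)  # (4, 3)
```

A GPU/tensor-API version would do the same arithmetic on batched depth tensors instead of NumPy arrays.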
Hello, is there a way we can use the built-in point cloud and the spatial mapping module provided in the ZED 2 SDK? Can a custom codelet be written in C++ or Python, and are the message APIs in Isaac SDK sufficient to display to NVIDIA WebSight? We are trying to display the spatial imaging data from the ZED 2 in Isaac SDK using a custom C++ codelet.
Note that this is for Isaac Gym only, which is not yet integrated with Isaac Sim or Isaac SDK - that will happen some time next year. Isaac Gym is intended as a preview of APIs for end-to-end GPU reinforcement learning.
If your Gym examples worked before you altered your conda environment with the libz symlink needed for pptk, you may want to delete and recreate your conda environment. It sounds like something else went wrong.
For troubleshooting in general, make sure that you have a working Vulkan environment by running vulkaninfo - a broken Vulkan setup is the most common cause of unusual issues.
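As one way to script that check, here is a generic sketch (not an Isaac Gym utility); it only verifies that the `vulkaninfo` binary runs and reports at least one device, which assumes the vulkan-tools package is installed:

```python
import shutil
import subprocess

def check_vulkan():
    """Run vulkaninfo and report whether a Vulkan device is visible.

    Returns (ok, message). A broken GPU driver or missing ICD
    typically makes vulkaninfo fail or report no device.
    """
    exe = shutil.which("vulkaninfo")
    if exe is None:
        return False, "vulkaninfo not found - install vulkan-tools first"
    result = subprocess.run([exe], capture_output=True, text=True)
    if result.returncode != 0:
        return False, "vulkaninfo failed - GPU driver or ICD problem"
    if "deviceName" not in result.stdout:
        return False, "vulkaninfo ran but reported no device"
    return True, "Vulkan environment looks healthy"

if __name__ == "__main__":
    ok, msg = check_vulkan()
    print(msg)
```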
@gstate
Thank you.
Running vulkaninfo revealed a broken GPU driver, which turned out to be the cause.
Reinstalling CUDA seems to have resolved the issue.
The depth point-cloud example works now.
Thank you very much!