FEEDBACK Request from Isaac Sim Users

Thank you to all the members who are using the Isaac Sim product. I am part of the Isaac Sim team and would like to hear the following from all of you:

  1. What are the challenges of using Isaac Sim?
  2. What can be improved in Isaac Sim?
  • Many companies use ROS/ROS 2 in their robots, which requires an up-to-date URDF. The URDF Importer needs a lot of improvement: the capability to resolve package paths, and a way of defining sensors there so users don’t have two sources of truth for the robot (USD + URDF) but instead a single URDF that can be imported on launch every time they run the simulation.
  • ROS integration: users cannot use custom messages without going through a nightmare of building them in a different environment, copying them into Isaac’s installation folder, and praying for compatibility.
  • Water simulation / underwater robots cannot be simulated without building from the ground up on the PhysX API.
  • Memory leaks: that’s a fact, and I don’t think it requires further explanation.
  • ROS OmniGraph sensors: OmniGraph ticks either manually or during rendering. If you want to skip scripting and do it Isaac’s way with OmniGraph, you’ll find oddities such as IMUs updating at the rendering rate. OmniGraph needs a way to decouple that, and, specifically for sensors, a way of setting the update/publishing rate.
  • Actors don’t have physics attached to them, which makes them useless for obstacle-avoidance training. Yes, you can use an RTX Lidar to see them, but things like bumpers won’t collide with them.
  • If you create a camera sequence (to make a video) with something in the scene using physics, the physics runs at a different rate depending on the renderer you’re using (RTX/Path Tracing). That means if you set up the sequence smoothly using RTX and then record using Path Tracing, you won’t get the expected results. If rendering goes slower, physics should wait for it, up to a user-defined rate. Simulators like Gazebo/Ignition and Webots guarantee this and their users benefit from it; others like Unity do NOT provide that level of sync, and developers run away from such a “simulator” as soon as they face these kinds of issues.
  • Not being able to source ROS before launching the simulator forces developers into a ton of workarounds to get access to both our own libraries and Isaac Sim’s API from the same controller.
  • Breaking the API of libraries whose code the user doesn’t have access to (such as the one that generates occupancy maps). Please, whenever a really good tool/extension is released for use with the GUI, add a simple API so users can integrate it into their pipelines. No real project has a person bringing up the simulator, manually opening the USDs, and running the extensions. Developers want to automate things; I’ve had to dig into extension code to see how things are done and try to hijack it.
  • Replicator has many problems:
    • There’s no easy or transparent way of naming a node that Replicator will generate and connecting it with another. If the user needs a randomization to happen based on another item, it’s really hard to correlate randomizations.
    • Replicator is REALLY slow. Even if you degrade the quality of the output and write simple RGB images, Isaac Sim generates images at least 10x slower than other platforms.
    • Applying physics before capturing data with Replicator is always custom code, since triggers such as on_frame work really badly and take one capture per frame, making everything slower and heavier.
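To make the package-path request concrete, here is a minimal sketch of the kind of `package://` URI resolution an importer could perform. The function name and the explicit package map are illustrative, not an existing Isaac Sim API:

```python
from pathlib import PurePosixPath

def resolve_package_uri(uri, package_map):
    """Resolve a ROS-style package:// URI to a filesystem path.

    `package_map` maps package names to their root directories,
    e.g. {"my_robot_description": "/ws/src/my_robot_description"}.
    """
    prefix = "package://"
    if not uri.startswith(prefix):
        return uri  # already a plain path, pass it through
    rest = PurePosixPath(uri[len(prefix):])
    package = rest.parts[0]
    relative = PurePosixPath(*rest.parts[1:])
    try:
        root = package_map[package]
    except KeyError:
        raise ValueError(f"unknown package: {package}")
    return str(PurePosixPath(root) / relative)
```

An importer with a map like this could resolve every mesh reference in a URDF at launch instead of forcing users to rewrite paths by hand.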
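The decoupled sensor rate requested above could be as simple as an accumulator keyed on simulated time rather than render frames: publish only when enough sim time has elapsed, no matter how often the graph ticks. A hypothetical sketch, not an OmniGraph node:

```python
class RatePublisher:
    """Publish at a fixed rate regardless of how often the graph ticks.

    Call tick(sim_time) on every update; the publish callback fires only
    when at least 1/rate_hz of simulated time has elapsed since the last
    publish, so an IMU at 100 Hz stays at 100 Hz even if rendering runs
    at 24 fps or 240 fps.
    """

    def __init__(self, rate_hz, publish):
        self.period = 1.0 / rate_hz
        self.publish = publish
        self._next = 0.0  # next sim time at which to publish

    def tick(self, sim_time):
        if sim_time >= self._next:
            self.publish(sim_time)
            self._next = sim_time + self.period
```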
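The renderer/physics sync that Gazebo and Webots provide is typically a fixed-timestep accumulator: however long a frame took to render, physics advances in fixed increments, so RTX and path-traced runs produce identical trajectories. A generic sketch of the pattern (not Isaac Sim code):

```python
def step_frame(render_dt, physics_dt, accumulator, step_physics):
    """Advance physics in fixed steps to cover one (possibly slow) render frame.

    Whatever time the frame took, physics only ever advances in fixed
    `physics_dt` increments (e.g. 1/60 s), so simulation behavior is the
    same under a fast real-time renderer and a slow path tracer.
    """
    accumulator += render_dt
    while accumulator >= physics_dt:
        step_physics(physics_dt)
        accumulator -= physics_dt
    return accumulator  # leftover time carried into the next frame
```

The leftover returned each frame is what keeps slow and fast renders in lockstep: no physics time is ever dropped or double-counted.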

Hi Nvidia team

There may be a better way of doing this, but it would be nice to extract images using the Python API without using the viewport, as seen in this code: Viewport image capture example in code

Here is a link to the problem:

Thank you


Hey there! As you said, it might be better to create a different thread for that; in the meantime, check out the Camera API here.

If you need extra help, feel free to create a topic and ping me there.


The main challenge for me, as someone who doesn’t have experience with games or simulations, is that there’s a ton of assumed knowledge required to do anything through code. Isaac Sim seems to be a thin layer on top of PhysX, Kit, and USD, and without knowing all of those it’s hard to get anything done. Ideally the wrappers that hide that stuff would be “thickened” so that everything can be done through the wrappers, with no knowledge required of the layers beneath. All demos/examples/documentation should then use only the highest-level wrapper, so that users get started at the easiest level of abstraction.


As a non-engineer, my time and knowledge with Isaac Sim are limited, but I would suggest buttoning up some of the links in the docs, since I think most people go through them to learn how to use the program. I previously set up a Google Sheet to point out all the broken links/bad redirects; it may be worth coordinating with the doc team (and I am sure new users will have a smoother learning experience):

I understand the doc is a living and breathing thing, so there may be some links I missed between version updates, but I will try checking in once in a while, to the best of my ability, to stay on top of it.


Thanks for asking for feedback!
I use Omniverse Isaac Sim mainly because it allows massively parallel learning, building on top of Isaac Gym Preview, which is not easily possible in Gazebo or Webots.

It is great that you are using Python as the main development language.

Challenges using Isaac Gym

  1. The environment for developers creating multiple versions of their creations is really limited. I ended up creating my own method of frozen zips that I had to use to switch between linked versions of USD files and Python files. Also, there is no easy way of packaging and sharing projects. Take a look at my solution: GitHub - sujitvasanth/OmniIsaacGymEnvs_freezethaw: helper script to easily load/save different reinforcement learning setups for omniisaacgymenvs of matching USDs, articulations, PPO.yaml, task.yaml.

  2. Recording videos during training, even with the plugins, is awkward and doesn’t allow mp4/mkv video formats. I ended up using OBS to record the videos, but had to write workarounds for pauses during the RL tasks: GitHub - sujitvasanth/VideoFrozenFramesRemover: remove frozen portions from mp4 or mkv files. Surely NVIDIA should have a good solution for this, as you already have your own GPU-based encoders!

  3. You should have a much better way of switching between iterations of a project and comparing them on the fly. Again, I ended up writing my own solution, which visualizes videos, allows documenting and commenting, and compares versions of code so you can see how various iterations went: GitHub - sujitvasanth/CodeChronicle: Version, Archive, Document, Visualize, Compar your code

  4. It is not clear how to ensure code is end-to-end GPU-based when working with tensors; there should be some explanation of how to do this.

  5. There is very little support for upgrading code from Isaac Gym Preview to Isaac Sim; there should be detailed guidance on the changes necessary. Also, the URDF-to-USD conversion pays little attention to joint movement characteristics such as stiffness, forces, etc., which are essential to model simulation. Not everyone uses Kuka’s! Some of us build our own custom robots.

What could be improved?

  1. The documentation is still massively lacking compared to Isaac Gym Preview; it has been a long time now and the documentation is still slow to catch up.
  2. Documentation is still lacking on many of the basic features, such as working in detail with articulation and rigid-body views, positioning the camera, and ways to create dynamic controls during training.
  3. More detailed explanation of how to programmatically create and alter USDs of robots on the fly.
  4. The number of files used in the reinforcement learning tasks in Omniverse Isaac Gym is often unnecessary and overly complicates getting custom tasks up and running.
  5. The “helper” functions that automatically populate some of the tasks in Isaac Sim, such as automatically inserting a ground plane, actually only complicate things when the user later wants to add their own custom plane or gravity, or they generate an overly complicated view that wastes resources. Please don’t do this! Or at least document it well enough that the user can easily customize it.
  6. Sensors such as IMUs should be able to be tensorized; otherwise, how can we use them for RL, which will be their main useful function?
  7. GPU resources are not automatically optimized to maximize speed or resources for learning tasks.
  8. There should be more concrete libraries for sim-to-real, so that once a model has been developed it can easily be deployed in C or Python on a Jetson Nano, etc.

Best wishes, and thanks for a great product. After a lot of hard work on my project, Omniverse Isaac Gym works faster and can do more RL than Isaac Gym Preview 3. But I had to upgrade from my GTX GPU to an RTX 3090; perhaps that was the point!

There is no need for an RTX chipset to do any of the functionality of Isaac Sim, so why the insistence on an RTX GPU to run Omniverse? I honestly never needed ray tracing to visualize my RL results; why not support GTX GPUs like Isaac Gym Preview 3 did?

best wishes
Dr Sujit Vasanth



I appreciate that the team is looking for feedback.

Isaac Sim as a system should be far more integrated, in the sense that it should be drag-and-drop and move towards a no-code / Action Graph setup to increase accessibility, along with smoother integration with Gym for speedy NN development.

It is currently developed for an incredibly small user base: robotics professionals in the academic/computer science fields. We shouldn’t need a PhD in computer science to use this. It’s not developed for industry, where adoption would benefit the wider industrial/process/automation world and Nvidia would benefit by osmosis. Look at how SketchUp has taken 3D architectural design and put it in the hands of the layman. Really, Isaac is a centralised integration and simulation environment.

The premise that it is a robotics platform is IMO incorrect from the outset. This isn’t unique to Isaac Sim, as Omniverse has a bit of an identity crisis: what is it, and what is it for? One of the most impressive use cases was the “Moment Factory” presentation, which had nothing to do with robotics…

In real-world control systems work, the industries that can benefit are far wider than the current scope of Isaac Sim. It should focus on automation integration, as previously mentioned in the forums. Having live feedback from external sources such as PLCs / SCADA / DCSs would be a massive win for the platform. Pappachuck made an excellent point regarding this, suggesting an OpenUSD-style effort for Isaac Sim: “We need something like the Alliance for OpenUSD, but an Alliance for Isaac Sim.” Teaming up with major component manufacturers, maybe via RS Components or something, would be the future IMO. Sim-ready assets built from manufacturers’ CADs with basic functionality would be a win-win, much like the configs for RTX Lidar. That is the future right there, so more partnerships please!

A good place to start would be integrating Siemens, Allen-Bradley and Mitsubishi PLCs/DCSs. This is what real-world industries use, and it is the back end that the physical “robotics” world actually runs on when you get to the coal face. It’s basically C for industrial processes. Due to the nature of industry, the majority of these systems will not be upgraded: these machines have 30-year-plus life cycles, so planning only for new systems is pointless, because by the time those new systems come online IS will be outdated. If Nvidia wants to take Isaac Sim seriously, they need to realise that the vast majority of use cases will be integration with older technology. Basically, IS will be an AI plugin for older “robotic” systems, which is exactly what it should be doing!

Requiring Linux, again, is antiquated and only suits very special use cases, so features stuck behind a Linux tech wall (RTX Lidar and ROS, for example) are a major blow to adoption. Desktop OS market share is roughly:

  • Windows: Approximately 80%

  • macOS: Approximately 17%

  • Linux: Approximately 3%

The entire system should focus on ease. The reason I want to stress this is that the people who could really use this system to develop the much-promised world of the fourth industrial revolution are not programmers. They are engineers who do not have time to spend years learning computer science, Python, tensor math, APIs, and ever-changing version control, etc.

Think of it like a drill: you don’t need to know about battery chemistry or how brushless motors work. It’s just a tool; pick it up, press the button, and it works… IS needs to be a tool, not a mystical spell book. I’m not trying to make this sound simple (it’s not), but there needs to be some kind of change in the approach, or it’s always going to be some sort of tech-expo demo and not a proper tool for the AI revolution, which is happening in the digital world and not the physical. “We wanted flying cars; instead, we got 140 characters.” (Peter Thiel)

Can we get the following?

A measuring tape, like SketchUp has? Why don’t we have a tape measure? It’s the most basic tool for any engineering project, and we don’t have one…

We should be able to right click and get the full docs for any node rather than vague descriptions.

Can we make nodes expandable? So we bring in a node that does everything, and if we want to fiddle we can explode the node into its subsystems. The vast majority of the time we just want something simple that just works, without all the faff.


A basic data write node, with a write path, an update rate, and a dataframe/SQL output that we can use to train NNs. Nvidia should also look at developing a system that integrates a completed NN into a subsystem external to the sim model: a run package that just has the NN I/O bundled with the NN as a complete package, without developing external software; it just exports the full stack… It would probably be a lot easier than that sounds. It would make AI applications simple: just plug your real-world I/Os into the trained NN and it works the same as in the completed, high-accuracy sim training, if you see where I’m going?
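As a sketch of what that data-write node might look like: a buffer with a fixed update rate and a tabular (CSV) dump suitable for training. The class and its names are hypothetical, not an existing Isaac Sim or OmniGraph node:

```python
import csv
import io

class DataWriter:
    """Minimal sketch of a 'data write node': sample named fields at a
    fixed simulated-time rate and dump the result as CSV for training.
    """

    def __init__(self, fields, rate_hz):
        self.fields = fields
        self.period = 1.0 / rate_hz
        self._next = 0.0  # next sim time at which to record a sample
        self.rows = []

    def tick(self, sim_time, sample):
        """Call every graph tick; records only at the configured rate."""
        if sim_time >= self._next:
            self.rows.append({k: sample[k] for k in self.fields})
            self._next = sim_time + self.period

    def to_csv(self):
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=self.fields)
        writer.writeheader()
        writer.writerows(self.rows)
        return buf.getvalue()
```

Swapping `to_csv` for a SQL insert or a dataframe constructor is the obvious extension; the point is that the sampling-rate logic lives in one reusable place.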

Live graphs/plots/logging for real-time feedback would be cool.

I could go on here for hours, but I don’t want to murder this. Dan.sandberg is on point here: just make it simpler. It shouldn’t be this hard. How many people won’t return to IS or OV because it’s just too fiddly and code-intensive? Again, I can build a house in SketchUp with a five-minute video tutorial; it needs to move in that direction.

I’m not saying this is going to be easy, but Nvidia needs to understand that IS as a platform needs major resource investment so that it can live up to its full potential to create the fourth industrial revolution we were promised. This is the future of real-world AI applications via SDG / gen-AI / sim-ready systems. The ROI will be massive in the long run.

Thanks for getting through that. Really appreciate the team’s work. Cheers guys, I know this isn’t easy!

Some food for thought: look at how difficult it is for the expert devs using the platform on the live streams, and think how hard it is for anyone without a computing science degree and extensive knowledge of Python to complete anything in IS. That is the main issue…

Thanks again


PS can HR give Mati a holiday?


Isaac Sim crashes about 10 times a day for me: sometimes when importing, sometimes when changing attributes in the stage, and for various other reasons as well. Python code that wraps native code will crash Isaac Sim with a segfault if a null value is passed in (say, because get_object didn’t find the named prim path and the null prim was then passed to ArticulationAPI.Apply). The software feels very alpha-quality because of this.
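Until the native layer validates its inputs, a defensive lookup helper can at least turn that segfault into a Python exception. This is a generic sketch: `stage` stands for anything with a USD-style `GetPrimAtPath`, and the returned prim is assumed to expose `IsValid()` the way `pxr.Usd.Prim` does:

```python
def get_valid_prim(stage, path):
    """Fetch a prim and fail loudly instead of letting an invalid prim
    reach native code (where a null value segfaults the whole app).
    """
    prim = stage.GetPrimAtPath(path)
    if prim is None or not prim.IsValid():
        raise LookupError(f"no valid prim at {path!r}")
    return prim
```

Routing every prim lookup through a guard like this makes a missing path a catchable error in your script instead of a crash of the simulator.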


Thanks so much for asking for feedback! Really happy to see the Isaac Sim team reach out to the community for this. Isaac Sim is an amazing platform and has incredible potential – but does need a lot of work yet to make it stable and intuitive.

Thoughts for improvement

  • Learning content, training, books: anything to learn the system in a better way. Right now, many of us are learning by trial and error. The community really needs training support; with it, I think there would be much faster and broader adoption of Isaac Sim. Right now it’s really frustrating on days where I sit and re-watch the same handful of YouTube videos and re-read documentation hoping I will get something new out of it. For the tutorials that are created, I feel the authors are the same people who designed the feature, and they don’t realize how much knowledge they already have. Maybe ask someone from a different department to attempt to work through any new tutorial and see where they get stuck. Then improve that section until someone new can work straight through it.
  • Double down on Python Scripting Components. To me this is a much cleaner way of handling code tied to objects. Unity did a really nice job with this concept, and I would love to see Isaac Sim follow this model for programming assets and robot arms.
  • URDF doesn’t support closed-loop kinematic systems, which means robot arms such as delta robots cannot be imported.
  • While adding joints in Isaac Sim, it is very difficult to position them exactly if the mechanism’s arms are not squared up to the x, y, z axes. It would be wonderful to see new tools or a revised workflow for positioning joints.
  • Could Isaac Sim just be rolled into USD Composer instead of being an independent application?
  • Fix incongruent scaling of assets, as was done in USD Composer. In Composer, many assets auto-scale to the correct units.
  • Navigation speed (camera speed) is still an issue. Once you get to 0.01 you can’t go slower.
  • Conveyor tool: this was a great addition, but it really does need to be re-worked. First off, the tool essentially gives you a handful of pre-defined conveyor segments that you can drop in. In the real world, conveyor systems are not made up of these small pre-defined segments; for instance, you may have a 90-degree section that transitions into a long straight section, and you can’t make that with the current blocks. Here’s what I would suggest: instead of a library of pre-defined assets, there should be only a few segments which can be stretched in length or width. Conveyor geometry can be determined from equations; for example, with Intralox conveyors, the minimum radius must be 2.2x the width of the conveyor and there must be 1.5x the width of straight section going into or out of the curve. It would be wonderful to see a conveyor that you can stretch and have the related geometry adjust with it. Another thing: just get rid of the conveyor frame too. Users can import their own assets and have the conveyor simply overlay on that asset. That eliminates a lot of work for the Isaac Sim team and makes it more universal for users.
  • Detect-item-on-conveyor function: it would be nice to have a function that returns a list of objects in contact with a conveyor belt. This is functionality I think nearly everyone working with conveyors needs. Ashley had a few episodes in her community stream attempting something similar, detecting the number of boxes that fall onto a pallet, and it took her a couple of episodes to figure it out. I think this is such a common function in automation and robotics that a class or function should be written to handle it better, or the script should be attached directly to a conveyor component. Here’s a reference to one of Ashley’s episodes which I mentioned:
  • Improve Replicator: from my understanding, there once were widgets where Replicator could configure attributes to be randomized. I believe those widgets were lost when Replicator moved to USD Composer. Anyway, it would be great to bring those menus/widgets back.
  • 3D scatter for Replicator: adding more functionality (intelligence) to 3D scatter would be helpful. We often need to randomize the location and orientation of assets in a confined space without them overlapping. Sometimes the positions of those assets depend on a previously placed asset, so it would be interesting to see if objects could be placed serially using the remaining space.
  • Replicator writer: it would be nice for Replicator to have built-in functionality to output the most popular formats (e.g. COCO) without needing to create a custom writer. Maybe this exists now and I’m simply not aware of it.
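The stretch-to-fit conveyor idea is easy to sketch: derive the curve geometry from the belt width using the multipliers quoted above. The function name is made up, and the 2.2x/1.5x factors are taken from the post as stated, so treat them as illustrative and check the manufacturer’s engineering manual for the exact series:

```python
def curve_geometry(belt_width):
    """Derive curved-conveyor constraints from belt width, using the
    rules quoted above: minimum inside radius of 2.2x the belt width,
    and at least 1.5x the width of straight run into and out of the
    curve. All values are in the same unit as belt_width.
    """
    return {
        "min_radius": 2.2 * belt_width,
        "min_straight_in": 1.5 * belt_width,
        "min_straight_out": 1.5 * belt_width,
    }
```

A stretchable conveyor asset could call something like this whenever the user drags the width handle, so the curve and approach sections always stay within spec.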

Fully agree with the comments about ease of use. Ease of use is so incredibly important. Right now, everything is more complicated than it needs to be.


Just remembered an issue: making a movie is far too difficult. I think the extension for creating videos produces separate images that need to be stitched together manually into a video. I end up just using my phone camera to take a video of the screen, which is a lame way to do it. I guess Ubuntu lets you capture an area of the screen as a video, so there’s that.


Overall, Isaac Sim is an incredibly powerful tool, and it has many features that make it significantly faster to iterate with than other realtime engines (Unity / Unreal). I would say the ability to quickly iterate with Python, and to combine it with modern ML libraries (PyTorch), is its biggest value, and what moved me away from Unity for synthetic data generation and robot simulation.


  • Documentation is kind of all over the place now, especially when it comes to RL. What is Isaac Orbit vs Isaac Gym vs Omniverse Isaac Gym? Which one are we supposed to use? Documentation for the Jupyter workflow is outdated, and Jupyter live sessions are broken with 2022.2.1.
  • Figuring out whether something is documented as part of the Omniverse documentation or the Isaac Sim documentation can be tricky.
  • There is currently no way to do RL from image observations, as the rendering engine comes to a halt with more than 10 cameras. This defeats the main advantage of using Isaac in the first place: realistic raytraced rendering so robots can learn proper representations. Without the ability to collect image observations at scale, what is the point of all those fancy rendering tricks?
  • Some critical bugs remain unfixed for a long time. E.g. Isaac Sim 2022.2.1 is unusable on Linux with GPU driver 535, and some laptops only support this driver version. This issue has existed since the start of the year without any hotfix for such a critical problem.

It would be really cool if we could get a GPT-4 finetuned on the latest version of the documentation and the Isaac code base, now that the finetuning API is available. This would significantly improve the developer onboarding flow IMO.


I appreciate the NVIDIA team, who provide a fancy simulator.

However, I want to ask about some features in comparison with PyBullet.

  • VR device support in Linux
  • More deformable-object support: more FEM & particle-based objects with various properties, plastic deformation (mud-like), deformable-object contact forces, etc.
  • … and an easy-to-replace differentiable engine:
    • FEM differentiable.
    • Fluid/Liquid differentiable.



Support for the particle sampler and simulation of bulk material, not only cloth.
Simulation of particle systems on the GPU using a non-GUI approach is very slow and not usable.

  1. Fix the memory leaks. This is the most important. Consider that we may need to open and close the SimulationApp thousands of times for some of our use cases.
  2. Dynamic changes to the USD and URDF files, and reloading of the changes without restarting the simulation app.
  3. More example programs showing dynamic changes to USD or URDF files.
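A user-side stopgap for the hot-reload request is to poll the file’s modification time and rebuild the stage when it changes. A minimal, generic sketch; the callback would wrap whatever reload/re-import call your setup actually uses:

```python
import os

class ReloadWatcher:
    """Poll a USD/URDF file's mtime and invoke a reload callback when it
    changes, as a stand-in for a built-in hot-reload feature.
    """

    def __init__(self, path, reload_fn):
        self.path = path
        self.reload_fn = reload_fn
        self._mtime = None

    def poll(self):
        """Call periodically (e.g. once per frame); returns True if a
        change was detected and the reload callback was invoked."""
        mtime = os.path.getmtime(self.path)
        if self._mtime is None:
            self._mtime = mtime  # first poll just records the baseline
            return False
        if mtime != self._mtime:
            self._mtime = mtime
            self.reload_fn(self.path)
            return True
        return False
```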

This is an awesome idea!

I agree with many of your points. It would be great to have the whole system be more streamlined for ML.
