I have Isaac Sim running via Omniverse, and I just trained object detection using the provided docker. Oops - the RTX 3070 isn’t supported by the TensorRT 7.0 inside that docker (tlt-export fails). The forum claims I also need to upgrade CUDA to 11.2.
Any recommendations for running both Isaac Sim and the TLT toolkit on a desktop with an RTX 3070?
In other words, which versions will Isaac Sim really work with? The ones recommended by version_checker.py are too low for TLT.
For your use case, I would use driver 460.67 and properly install Docker for TLT. You should not then need to worry about the CUDA or TensorRT version, since those are handled by the docker container in the background. This webinar walks you through TLT and how to deploy on Jetson: Webinar: Create Gesture-Based Interactions with a Robot | NVIDIA Developer Blog
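If you want to sanity-check the driver yourself before pulling the container, here is a minimal sketch. The 460.27 floor for CUDA 11.2 is my assumption from NVIDIA's CUDA release notes, so verify it; the 460.67 driver recommended above clears it either way.

```python
# Minimal sketch: check that the installed NVIDIA driver is new enough for
# CUDA 11.2. The 460.27 minimum is an assumption (check NVIDIA's CUDA
# release notes); driver 460.67 satisfies it.

def version_tuple(v: str) -> tuple:
    """Turn a dotted version string like '460.67' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def driver_ok(driver: str, minimum: str = "460.27") -> bool:
    """True if the driver version is at or above the assumed minimum."""
    return version_tuple(driver) >= version_tuple(minimum)

# To get the real installed version, you could shell out to nvidia-smi
# (requires an NVIDIA driver to already be installed):
#   nvidia-smi --query-gpu=driver_version --format=csv,noheader
print(driver_ok("460.67"))  # True
```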
One thing to be cautious about when moving to a Jetson is the JetPack version, since it determines your TensorRT version - and on Jetson, unlike on a workstation, that matters. The thing to make sure of is that your tlt-converter version matches your JetPack version.
I guess I’m more disappointed with the state of the documentation and examples. Some don’t work, some are partially out of date (like the object detection docker for Isaac Sim, which dies on the RTX 3070 because its versions are too old), and then there are the conflicting versions ‘required’ for Sim/SDK versus TLT, plus whatever the forums have discovered.
I can accept this from a GitHub project, but NVIDIA has more resources, and more reason, to do a good job in this area than anyone!
By the way, I still see a looming challenge. JetPack 4.5 (and the others) uses TensorRT 7.1.3, and that version doesn’t support the desktop RTX 3070 card.
So I guess the key issue for me is resolving tlt-converter - can it handle the version mismatch? I’m not there yet; I may be peppering the forum soon!
Hello,
Don’t worry about the TRT version. Just get to an .etlt model, then make sure you have the proper tlt-converter (an exact version match to the JetPack), and the converter - not you - will worry about picking the right TRT.
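To make the "exact version match" concrete, here is a hypothetical lookup sketch. Only the JetPack 4.5 → TensorRT 7.1.3 pairing comes from this thread; extend the table from NVIDIA's tlt-converter download page before relying on it.

```python
# Hypothetical helper: which TensorRT version (and hence which tlt-converter
# build) goes with a given JetPack release. Only the 4.5 -> 7.1.3 entry is
# stated in this thread; fill in the rest from the TLT docs.

JETPACK_TO_TRT = {
    "4.5": "7.1.3",  # stated earlier in this thread
}

def trt_for_jetpack(jetpack: str) -> str:
    """Return the TensorRT version paired with a JetPack release."""
    try:
        return JETPACK_TO_TRT[jetpack]
    except KeyError:
        raise ValueError(f"Unknown JetPack {jetpack}; check the TLT docs")

# On the Jetson itself, the matching tlt-converter is then run roughly like
# this (flags per the TLT docs; dims and key depend on your model):
#   ./tlt-converter -k $NGC_KEY -d 3,544,960 -e model.trt model.etlt
print(trt_for_jetpack("4.5"))  # 7.1.3
```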
Okay - thanks! But on the website it certainly purports to be active and usable, and it’s more easily found than the one a previous poster has now, thankfully, pointed me to.
And I really want to do 3D object pose prediction. That seems to be an older Unity-based workflow, and TLT doesn’t have a model in that space. Ugh. I’m going to have to get something working there.
Thanks again for the response! And I really wish NVIDIA would clean up their training/educational resources for num-nums like me…
While clearly written for the Unity sim, this document has at least been updated for Omniverse - a little. In the text, around five paragraphs down, it says: “However, the training application is agnostic to the simulation platform used to generate the data and can be replaced with any other simulation engines like Isaac Sim based on NVIDIA’s Omniverse Kit.”
Should I just try this and flail? Again, noting that the embedded versions are probably too old for RTX 3070 GPUs. Or is there an update or workaround?
(Perhaps I should make this a top-level forum topic, eh?)
Sorry for the confusion.
Here is a summary; it might clear things up a bit for you.
You can use the Isaac SDK with either Isaac Sim Unity or Omniverse Isaac Sim.
Unity Isaac Sim is the older version. Although it is still supported in the recent Isaac SDK release, this is the last release that supports it; from the next release on, it will be Omniverse Isaac Sim only.
Please go here, download the Omniverse Launcher, and from there install and launch Isaac Sim. I am sure you will love it.
The RTX 3070 is an RTX GPU and is therefore perfect for Omniverse Isaac Sim.
Unfortunately, there is no docker version of Omniverse Isaac Sim for object detection.
As pointed out in the Isaac SDK doc you also mentioned, you can do object detection and pose estimation in Omniverse Isaac Sim. Please try it out and ask on the forum if you hit any issues.
This version requires the Isaac Sim Unity3D version, prior to the Omniverse version. E.g., the first command to execute (from the isaac_sim_unity3d directory) is:
./builds/factory_of_the_future.x86_64 --scene Factory01 --scenario 8
Any help available on the learning resources? Will they be updated? Or are we on our own here?
I was very successful in creating simulated data (the non-programmatic way) and then using TLT to train object detection. Happy with that - see image below!
Creating simulated data the programmatic way is the current task at hand. (a) The documentation is wrong in places, e.g., on debugging code in VS Code. (b) I still have to figure out how to save pose data - I’ll probably do that myself rather than use the built-in visualization tools, or some combination. How to do the ML pose estimation is still open: I’ll try what they have (I think there are three potential approaches on the NVIDIA website; there is no TLT solution), but I feel I may have to do my own ML, based on something like PoseCNN or similar.
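For the "save pose myself" part, something like this minimal sketch is what I have in mind - dump each object's 3D position and orientation quaternion to a JSON file alongside the rendered frame. The field names and file layout here are just a convention I made up, not any Isaac Sim API.

```python
# Minimal sketch of saving per-frame object poses to JSON. The record
# structure (field names, quaternion order) is a made-up convention,
# not an Isaac Sim format - adapt it to whatever your trainer expects.
import json

def save_pose(path, frame_id, objects):
    """objects: list of dicts with 'name', 'position' [x, y, z],
    and 'orientation' as a quaternion [w, x, y, z]."""
    record = {"frame": frame_id, "objects": objects}
    with open(path, "w") as f:
        json.dump(record, f, indent=2)

# Example: one box at a known pose in frame 1.
save_pose("frame_0001.json", 1, [
    {"name": "box",
     "position": [0.1, 0.2, 0.3],
     "orientation": [1.0, 0.0, 0.0, 0.0]},
])
```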
So have you used the synthetic data generation for your data collection and used TLT to train your model?
how do you test the trained model in the warehouse scene?
Also, I have an RTX 2070 GPU. Would that not work for TLT?
Yes to the first question. I think your second question is whether this will work in the real world. The assumption, of course, is yes - that’s why I’m doing this - but proving it will require more work.
RTX 2070? You’ve got me; it probably will work. TLT uses a docker setup with the right versions - make sure you get the most recent one, since the previous one died on RTX-class GPUs. You can always use Keras or the like instead of TLT; then you can train on any GPU, or even without a GPU.
Thank you for your response.
For my second question, I wanted to know whether you modified any scripts, or whether you wrote your own object detection code and used it with the Robot Engine Bridge to get the box around the detected object.