Can anybody here answer, or point me to answers for, the following questions about Nvidia Drive simulation? I mean the software, not the hardware.
I have googled, watched Nvidia videos, searched the dev portal for related documentation, and written to Nvidia via their online question forms, but with no luck.
I am looking for answers I can cite in writing (as part of my thesis, a survey of autonomous-vehicle simulations for machine learning).
My questions about the Nvidia Drive simulation software:
1.) What engine does it use? (e.g. Unreal Engine or something else)
2.) What driving environments are provided out of the box? (e.g. only highways, as seen in the videos, or something more, such as a city, one town, several towns, etc.)
3.) Are any actor assets (i.e. humans and other vehicles) provided out of the box, besides the ego vehicle?
4.) What sensors are supported out of the box? Especially from this list:
- Depth sensor/camera
- Thermal infrared sensor/camera
- Lidar and radar (from what I have found, these exist, but please correct me if I am wrong)
5.) What output training labels are provided for machine learning? Especially these:
- Semantic segmentation (visible in the videos, but please confirm whether it is available out of the box or requires some coding)
- 2D bounding boxes for all dynamic objects (same as for semantic segmentation, please confirm)
- 3D bounding boxes for all dynamic objects (not seen in the videos)
6.) Is it correct that no off-road driving environments are provided?
7.) Is the simulation open source and available for free? (It is referred to as an "open platform for partners", but it is not clear who can or cannot be a partner.)
8.) Can I (with my university account) download, install, and use the simulation software without the dedicated Drive Constellation hardware? (My understanding is that I cannot, but please kindly confirm.)
Thank you for your help.