Questions regarding the generated data

Hi.

I’ve been working on training a 3D object detection model, PointPillarNet, with the TAO Toolkit.
The model takes point cloud data and KITTI-formatted annotations as its training input.

Due to the lack of 3D data, I’ve so far only downloaded the KITTI dataset, which contains 2D images, point cloud data saved with the .bin extension, and KITTI-formatted annotation files.

I’ve also been trying out Omniverse Replicator to generate synthetic data containing people and vehicles that are randomly placed in a given scene.

My questions are roughly as follows:

  1. Is it possible to generate point cloud data using Omniverse Replicator?

  2. According to the Python API docs, it seems that the KITTI writer currently doesn’t support 3D bounding box annotations. Is there a workaround for this, or will it be implemented in a later version?

https://docs.omniverse.nvidia.com/py/replicator/1.10.10/source/extensions/omni.replicator.core/docs/API.html#

Hi @silentjcr

The point cloud annotator and all the details (with examples) can be found in the docs here:
Annotators Information — Omniverse Extensions latest documentation (nvidia.com)
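
For reference, here’s a minimal sketch of attaching the point cloud annotator, following the pattern in those docs. The camera pose, resolution, and variable names are placeholder assumptions on my part; check the annotator page for the exact output schema:

```python
import omni.replicator.core as rep

# Create a camera and a render product for the annotator to attach to
# (pose and resolution here are placeholder values).
camera = rep.create.camera(position=(0, 0, 10), look_at=(0, 0, 0))
render_product = rep.create.render_product(camera, resolution=(1024, 1024))

# Attach the point cloud annotator; note that by default it only
# captures prims that carry semantic labels.
pointcloud_anno = rep.AnnotatorRegistry.get_annotator("pointcloud")
pointcloud_anno.attach(render_product)

# Step the orchestrator once so a frame is rendered and annotated
rep.orchestrator.step()

# The returned dict holds the (N, 3) points plus per-point extras
data = pointcloud_anno.get_data()
points = data["data"]                   # world-space XYZ positions
rgb = data["info"]["pointRgb"]          # per-point color
normals = data["info"]["pointNormals"]  # per-point normals
```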

Regarding the KITTI writer, it does look like it’s not implemented yet. I’ve submitted a ticket, and when it’s available, I’ll circle back.
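
In the meantime, one possible workaround is to skip the built-in KITTI writer, attach the `bounding_box_3d` annotator inside a custom writer, and convert its output to KITTI label rows yourself. A rough sketch following the custom-writer pattern from the Replicator docs; the class name, output directory handling, and label conversion are all placeholders:

```python
import os
import omni.replicator.core as rep
from omni.replicator.core import Writer, AnnotatorRegistry, WriterRegistry

class Kitti3DBBoxWriter(Writer):
    """Hypothetical writer that dumps raw 3D bbox data, one file per frame."""

    def __init__(self, output_dir):
        self._output_dir = output_dir
        self._frame_id = 0
        os.makedirs(output_dir, exist_ok=True)
        # Request the 3D bounding box annotator for every attached render product
        self.annotators = [AnnotatorRegistry.get_annotator("bounding_box_3d")]

    def write(self, data):
        bboxes = data["bounding_box_3d"]
        # Convert the box extents/transforms to KITTI label fields here;
        # for now just persist the raw structure for inspection.
        path = os.path.join(self._output_dir, f"{self._frame_id:06d}.txt")
        with open(path, "w") as f:
            f.write(str(bboxes))
        self._frame_id += 1

WriterRegistry.register(Kitti3DBBoxWriter)

writer = rep.WriterRegistry.get("Kitti3DBBoxWriter")
writer.initialize(output_dir="/tmp/kitti_3d")
# writer.attach([render_product])  # attach to your render product(s)
```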

Hi @pcallender , thanks for the reply.
Actually, I found the doc you mentioned after making this topic and had already read through it before your reply. I tried to follow the two examples and wrote my own script to generate point cloud data, but I guess there’s still a way to go. I should also mention that I couldn’t directly use open3d as shown in the second example of the point cloud annotator. I had to use omni.kit.pipapi to install it in the Code 2022.3.3 release, yet this won’t work in the 2023.1.1 Beta, where I can’t use open3d at all since it can’t be found or even installed…
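
For anyone hitting the same issue, this is roughly the omni.kit.pipapi approach that worked for me on Code 2022.3.3 (the array shapes and output path are just illustrative; on the 2023.1.1 Beta the install step itself fails, presumably a wheel-compatibility issue with the bundled Python):

```python
import omni.kit.pipapi

# Install open3d into Kit's embedded Python at runtime.
# This worked on Code 2022.3.3 but fails to resolve on the 2023.1.1 Beta.
omni.kit.pipapi.install("open3d")

import numpy as np
import open3d as o3d

# pts would normally come from the pointcloud annotator's "data" field;
# random points stand in here for illustration.
pts = np.random.rand(1024, 3)
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts)
o3d.io.write_point_cloud("pointcloud.ply", pcd)
```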