Abnormal generated point cloud results

I’ve been using the TAO Toolkit to train a PointPillarNet for 3D object detection; it takes point cloud data (saved with the .bin extension) and label data as input for training. So far I have only downloaded the KITTI dataset mentioned in the following link:

To add more data, I have also been using Omniverse Replicator.

I followed the point cloud annotator example in the documentation and wrote a script to try it out:
https://docs.omniverse.nvidia.com/extensions/latest/ext_replicator/annotators_details.html?highlight=open3d#point-cloud

I loaded a 3D asset of a car, and what I intend to do is save the point cloud data of the car from different orientations. That is, if I translate or rotate the car asset n times, n .ply files should be saved.


I did manage to save multiple point cloud results, but when I visualized them with Open3D they looked very abnormal.
Here are some examples:



Some parts of the car that face the camera look “SCATTERED”, and the color of the first generated point cloud always appears different from the rest.
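As a side note, the `pointcloud` annotator returns `pointRgb` as a flat RGBA byte buffer, and Open3D expects colors as floats in [0, 1], so the unpacking done in the script below matters for how the colors come out. Here is a minimal NumPy-only sanity check of that unpacking, using a hypothetical two-point buffer in place of the annotator output:

```python
import numpy as np

# Hypothetical stand-in for pc_data["info"]["pointRgb"]:
# a flat uint8 buffer of RGBA values, one red point and one green point.
raw = np.array([255, 0, 0, 255,
                0, 255, 0, 255], dtype=np.uint8)

# Reshape to (N, 4) and drop the alpha channel, as in the script.
rgb = raw.reshape(-1, 4)[:, :3]

# Open3D's Vector3dVector expects float colors in [0, 1].
colors = rgb.astype(np.float64) / 255.0
print(colors)  # first row is [1. 0. 0.], second is [0. 1. 0.]
```

If the colors in the saved .ply look wrong, checking this intermediate array is a quick way to rule out the unpacking step.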

The script I wrote is as follows:

import omni.replicator.core as rep
import asyncio
import os
import numpy as np
import omni.kit.pipapi
omni.kit.pipapi.install("open3d")
import open3d as o3d

async def test_pointcloud():
    CAR = '/home/averai/Jefferson/NVIDIA_Omniverse/car/Pony_Cartoon.usdz'
    car = rep.create.from_usd(CAR, semantics=[('class', 'car')])

    camera = rep.create.camera()
    render_product = rep.create.render_product(camera, (1024, 512))

    pointcloud_anno = rep.annotators.get("pointcloud")
    pointcloud_anno.attach(render_product)

    with camera:
        camera_positions = [(500, 450, 600)]
        rep.modify.pose(position=rep.distribution.sequence(camera_positions), look_at=car)
        
    ply_out_dir = '/home/ai/NVIDIA_Omniverse/point_cloud'
    os.makedirs(ply_out_dir, exist_ok=True)
    
    num_frames = 15
    for i in range(num_frames):
        await rep.orchestrator.step_async()
        points = []
        points_rgb = []
        #rep.randomizer.car()

        with car:
            rep.modify.pose(
                position=rep.distribution.uniform((-800, 0, -800), (-500, 0, 500)),
                rotation=rep.distribution.uniform((0, -150, 0), (0, 150, 0)))

        with camera:
            rep.modify.pose(position=rep.distribution.sequence(camera_positions), look_at=car)
            #rep.modify.pose(
             #   position=rep.distribution.uniform((-500, 0, -800), (500, 0, 2000)), look_at=car)

        pc_data = pointcloud_anno.get_data()

        points.append(pc_data["data"])
        # pointRgb is a flat RGBA byte buffer; keep only the RGB channels
        points_rgb.append(pc_data["info"]["pointRgb"].reshape(-1, 4)[:, :3])

        pc_xyz = np.concatenate(points)
        pc_rgb = np.concatenate(points_rgb)

        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(pc_xyz)
        pcd.colors = o3d.utility.Vector3dVector(pc_rgb / 255.0)  # Open3D expects floats in [0, 1]
        ply_file = "pointcloud_new" + str(i) + ".ply"
        o3d.io.write_point_cloud(os.path.join(ply_out_dir, ply_file), pcd)

asyncio.ensure_future(test_pointcloud())

By the way, in the code snippet I included omni.kit.pipapi.install("open3d"), which isn’t mentioned in the example in the documentation. I had to add this line so that open3d could be found and imported.
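For context, once the .ply files look correct, they would still need to be converted to the .bin layout that TAO's PointPillars training consumes: KITTI velodyne files are a flat float32 buffer of (x, y, z, intensity) rows. Here is a hedged sketch of that conversion, using a synthetic xyz array as a stand-in for points loaded from a .ply (loading would normally go through o3d.io.read_point_cloud), and a placeholder constant intensity since the Replicator annotator doesn't produce one:

```python
import numpy as np

# Stand-in for xyz points loaded from a .ply file (shape (N, 3), float32).
xyz = np.zeros((100, 3), dtype=np.float32)
xyz[:, 0] = np.arange(100)  # synthetic x coordinates

# KITTI .bin expects a 4th channel (intensity); use a placeholder of 1.0.
intensity = np.ones((xyz.shape[0], 1), dtype=np.float32)

kitti = np.hstack([xyz, intensity]).astype(np.float32)
kitti.tofile("pointcloud_new0.bin")  # raw float32 buffer, no header

# Round-trip check: .bin files are read back as a flat float32 array.
back = np.fromfile("pointcloud_new0.bin", dtype=np.float32).reshape(-1, 4)
```

Note that KITTI point clouds live in the lidar coordinate frame, so depending on your camera setup the Replicator points may also need an axis transform before training.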

Hi @silentjcr, would you be able to share the USD file with me so that I can take a look?
And is the yellow point cloud the first one you generated?