I am trying to build a YCB-format dataset for one of my projects using the Offline_Pose_Generator.py standalone example in Omniverse.
I face multiple problems:
When I run the given code directly, I get images of the YCB cracker box as per the example, but the label/semantic information is not available in label.png. Running the same code on my own task, I don't get the object's bounding box or semantic label even though the object is visible in the stage tree in Isaac Sim; the RGB and depth images are obtained correctly. (A sketch of how I label the prims follows the attached image below.)
The rendering of the models in USD is not high definition, even though the same .obj file renders well in Blender. Can you share what the problem could be?
Images generated with one camera have only half the frame visible in the RGB output, but the depth image shows all the information. The label image is still completely black (images attached). I have a PC with multiple RTX 4090 GPUs; running the same code with one GPU generates a better RGB image (still with no semantic info).
[Attached image: Label.png]
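For context, here is roughly how I label the prims so the semantic annotators can find them. This is a minimal sketch using Isaac Sim's add_update_semantics helper; the prim path and label are placeholders for my scene, not the example's actual values:

from omni.isaac.core.utils.prims import get_prim_at_path
from omni.isaac.core.utils.semantics import add_update_semantics

# Placeholder prim path; in my scene this is the YCB object prim
prim = get_prim_at_path("/World/cracker_box")
# Attach a "class" semantic label so the segmentation/bounding box annotators pick it up
add_update_semantics(prim, semantic_label="cracker_box", type_label="class")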
I was able to solve the semantic-info problem for colorized semantics by comparing YCBVideoWriter to BasicWriter and modifying it; however, the semantic info is still not visible in the non-colorized format.
I made some changes in the YCB writer and the offline pose generator for my use case:
Created modifiedYCBWriter, adding several semantic-info variables and changes to the semantic functions. I combined the semantic segmentation handling from BasicWriter and added some lines at:
(LINE 129) if semantic_segmentation: added a colorize-segmentation parameter which, if set to True, produces colorized semantic data, but if set to False gives just black images with no greyscale semantics, which could be a problem for the YCB dataset format (see the sketch after this list).
(LINE 270) def _write_semantic_segmentation()
(LINE 70+) Added some variables similar to BasicWriter's to solve the semantic data loss in the YCB writer.
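For reference, my non-colorized branch ended up looking roughly like the sketch below, paraphrased from BasicWriter rather than copied from my exact file. One thing I noticed while debugging: raw semantic IDs are small integers (0, 1, 2, ...), so a correctly written non-colorized PNG can look completely black in an image viewer even when it contains valid label data.

import numpy as np

def _write_semantic_segmentation(self, data, render_product_path, annotator):
    seg_data = data[annotator]["data"]
    height, width = seg_data.shape[:2]
    file_path = f"{render_product_path}semantic_segmentation_{self._frame_id}.png"
    if self.colorize_semantic_segmentation:
        # Colorized output arrives as packed RGBA; unpack to uint8 channels
        seg_data = seg_data.view(np.uint8).reshape(height, width, -1)
    else:
        # Raw output is per-pixel uint32 semantic IDs; values like 1 or 2
        # render as near-black pixels, but the data is still there
        seg_data = seg_data.view(np.uint32).reshape(height, width)
    self._backend.write_image(file_path, seg_data)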
In the offline pose generator script, I attached the writer in this format:
def _setup_writer(self):
    self.writer = rep.WriterRegistry.get("modifiedYCBwriter")
    self.writer.initialize(
        output_dir=self._output_folder,
        num_frames=self.train_size,
        semantic_types=["class"],  # data loss occurred when this was passed as "class" instead of ["class"]
        rgb=True,
        bounding_box_2d_tight=True,
        semantic_segmentation=True,
        distance_to_image_plane=True,
        pose=True,
        colorize_semantic_segmentation=True,
        class_name_to_index_map=config_data["CLASS_NAME_TO_INDEX"],
        factor_depth=10000,
        intrinsic_matrix=np.array(
            [
                [config_data["F_X"], 0, config_data["C_X"]],
                [0, config_data["F_Y"], config_data["C_Y"]],
                [0, 0, 1],
            ]
        ),
    )
    self.writer.attach([self.render_product1, self.render_product2, self.render_product3])
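For completeness, the modified writer is registered with the WriterRegistry before the get() call above; a minimal sketch, with the module and class names as placeholders for whatever the modified file actually uses:

import omni.replicator.core as rep
from modified_ycb_writer import ModifiedYCBWriter  # hypothetical module/class name

# Register once at startup so rep.WriterRegistry.get("modifiedYCBwriter") can find it
rep.WriterRegistry.register(ModifiedYCBWriter)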
The high-definition 3D models still did not match Blender's visual rendering, but I combined the models into a single mesh for the save_vertices function, which does give the model slightly better visuals; however, the RGB images show them grainy, with grey dots all over the mesh (render-settings sketch below).
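The grain and grey dots look to me like path-tracing noise from too few samples per pixel, so I have been experimenting with raising the RTX sampling settings. A sketch of what I am trying; the setting values are guesses on my part, not known-good numbers:

import carb.settings

settings = carb.settings.get_settings()
# Raise the samples per pixel for the path-traced render mode (values are guesses)
settings.set("/rtx/pathtracing/spp", 64)
settings.set("/rtx/pathtracing/totalSpp", 64)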
When running multiple cameras, a sample box/cube/sphere scene gives good RGB images for all camera outputs on Windows using the Isaac Sim script editor, but not on Ubuntu: I still get one or two camera RGB images as a checkered pattern, while the depth images show the information correctly. This is most likely due to the dual GPUs; running the same code on one GPU did not cause this issue, but after installing a second identical GPU the RGB images are checkered (single-GPU workaround sketch below).
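As a temporary workaround I have been pinning the standalone app to a single GPU at launch; a minimal sketch, assuming SimulationApp's multi_gpu and active_gpu config keys apply here:

from omni.isaac.kit import SimulationApp

# Disable multi-GPU rendering and pin to GPU 0 (config keys assumed from SimulationApp)
simulation_app = SimulationApp({"headless": True, "multi_gpu": False, "active_gpu": 0})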
Please let me know if anything else is required.
Thank you