Collecting synthesized images to .jpg files

Hello,

In the object detection training pipeline, is it possible to extract the collected synthesized images from the dataset as .jpg files, for later inspection?

I found the statement below, here: https://docs.nvidia.com/isaac/isaac/packages/ml/ml.html#tensors
In the Isaac SDK, Tensor data is stored and passed as messages of TensorProto, which is the counterpart of numpy ndarray data used in Tensorflow. Conversion is needed for ML to accommodate other data formats like image. Refer to IsaacSDK/packages/ml/ColorCameraEncoderCpu.cpp for an example.

However, I could not find ColorCameraEncoderCpu.cpp in the corresponding location. Is this because of a different SDK version? I am using Isaac SDK 2019.2 with Unreal Engine.

Thanks!

BR,
Rosy

Hi Rosy, We have since switched over to Unity 3D. Yes, it is possible to save the synthetic images to image files for training a network later.

Could you please consider using 2019.3 --> https://developer.nvidia.com/isaac/downloads

Also, we are working on a container to help you with ML training. Kindly stay tuned…

Hi shrinv,

Thanks for the information. I’ve downloaded the 2019.3 version, but it does not seem to contain ColorCameraEncoderCpu.cpp either. Could you check that? Or perhaps you can provide a sample file that does this?

Thanks!

BR,
Rosy

Hi Rosy,
Which example are you following? If you are using the Object Detection training with DetectNet : https://docs.nvidia.com/isaac/isaac/packages/detect_net/doc/detect_net.html
generate_kitti_dataset already outputs the training images as .png files.

The application for generating the kitti_dataset is under packages/ml/apps/generate_kitti_dataset/generate_kitti_dataset.app.json
If you don’t necessarily want to generate the dataset in the KITTI format and only want to save the images, you can also build your own sample that saves the images coming from the simulation.
You can take the generate_kitti_dataset as an example, but instead of having

{
  "source": "simulation.interface/output/color",
  "target": "crop_and_downsample/CropAndDownsample/input_image"
},
{
  "source": "crop_and_downsample/CropAndDownsample/output_image",
  "target": "generate_kitti_dataset/generate_kitti_dataset/image"
},

you can change the target of the color output edge (or of the cropped output edge) to the input of your own codelet.
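For example, the rerouted edge could look like the following (the node and codelet names "my_image_saver/MyImageSaver" and the input channel "color_image" are hypothetical placeholders — substitute the names of your own codelet):

{
  "source": "simulation.interface/output/color",
  "target": "my_image_saver/MyImageSaver/color_image"
},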

We have a lot of utility methods for dealing with images under engine/gems/images. For example, io.hpp/io.cpp provide SavePng and SaveJpeg methods.

In the reply to the thread "Getting image data from DepthImageProto" you can also see a small snippet showing how to take a ColorCameraProto and save it as a .png file.

Let me know if you need further help. Thanks

Regarding ColorCameraEncoderCpu, it is not open-source at the moment, that’s why you cannot find the file. You can check its basic functionality in the API documentation: https://docs.nvidia.com/isaac/isaac/doc/component_api.html?highlight=colorcameraencodercpu#isaac-ml-colorcameraencodercpu
It takes a ColorCameraProto (RGB image) and outputs a TensorProto.
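As a side note: once you have the tensor contents on the Python side as a numpy ndarray (the TensorProto counterpart mentioned in the docs), writing it out as a .jpg is straightforward. A minimal sketch, assuming the array is HxWx3 uint8 RGB data and that numpy and Pillow are installed (the array contents and file name here are illustrative only):

```python
import numpy as np
from PIL import Image

# Hypothetical example tensor: an RGB image of shape (height, width, 3),
# dtype uint8 -- e.g. data converted from a color camera TensorProto.
tensor = np.zeros((120, 160, 3), dtype=np.uint8)
tensor[:, :, 0] = 255  # fill the red channel, just for illustration

# Pillow expects uint8 HxWxC data for RGB images
Image.fromarray(tensor, mode="RGB").save("frame_0000.jpg", quality=95)
```

If your tensor is stored channels-first (3xHxW) or as floats in [0, 1], transpose and rescale it to uint8 HxWxC before calling Image.fromarray.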

Hello Teresa,

Thank you for your reply.

Does Isaac SDK 2019.3 also support Unreal Engine 4? Or is it technically possible to extract the data to images using Isaac SDK 2019.2?

My goal is to test domain randomization and how it may transfer to a real environment. For now our scene has been built in Unreal Engine 4, and it would be a lot of work for us to migrate to Unity (and I know you will migrate to Omniverse soon…).

Thank you!

BR,
Rosy

No, Isaac SDK 2019.3 only supports Unity.
I haven’t tried the workflow myself, but technically you should be able to do it with UE4 + Isaac SDK 2019.2. UE4 also outputs a ColorCameraProto. You can have a look at the example for yolo_training with UE4 under //apps/samples/yolo.

My problem is solved as you suggested. Thanks Teresa!