Implementing a custom camera model/projection?

Hi,

We would like to simulate a physical camera using a specific camera model (e.g. Mei model as in the cv.omnidir module, including distortion) for generating GT data. We’ve tried that in Unity, but although we could simulate the camera using a cubemap texture + custom shader, we have not succeeded in making it work with segmentation annotations and depth.
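For readers unfamiliar with the Mei model mentioned above: it is a unified omnidirectional model that first projects a point onto the unit sphere, then applies a perspective projection from a point shifted by a mirror parameter ξ, followed by distortion and the intrinsics. A minimal sketch (the function name, intrinsics, and distortion values are illustrative, not from cv.omnidir):

```python
import numpy as np

def project_mei(P, xi, K, dist=(0.0, 0.0, 0.0, 0.0)):
    """Project a 3D camera-space point with the Mei unified camera model.

    P    : 3D point in camera coordinates
    xi   : mirror parameter (xi = 0 reduces to a plain pinhole)
    K    : 3x3 intrinsic matrix
    dist : (k1, k2, p1, p2) radial/tangential distortion coefficients
    """
    # 1. Project the point onto the unit sphere.
    Ps = np.asarray(P, dtype=float)
    Ps = Ps / np.linalg.norm(Ps)
    # 2. Perspective projection from a center shifted by xi along the axis.
    x = Ps[0] / (Ps[2] + xi)
    y = Ps[1] / (Ps[2] + xi)
    # 3. Radial/tangential distortion (same form as the pinhole model).
    k1, k2, p1, p2 = dist
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    # 4. Apply the intrinsics (skew ignored for simplicity).
    u = K[0, 0] * xd + K[0, 2]
    v = K[1, 1] * yd + K[1, 2]
    return float(u), float(v)

K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
# A point on the optical axis must land at the principal point.
print(project_mei([0.0, 0.0, 1.0], xi=0.8, K=K))  # -> (320.0, 240.0)
```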

I’ve seen there are quite a few fisheye projection models available in Omniverse, which seems promising:
https://docs.omniverse.nvidia.com/app_audio2face/prod_materials-and-rendering/cameras.html#fisheye-properties

I’m wondering however if it is possible at all to implement a custom projection model and if so, roughly how one would go about doing that.

Thanks!

Hi @david.provencher! Welcome to the forums. Extending the camera schema in USD would be doable, but the question is whether the renderers would know how to use it. I’ve reached out to the RTX team to see what they think.

Hi David. RTX team says that the “fisheyePolynomial” ProjectionType should get you close to what you want. Here’s a blog post talking about camera models used for DRIVE Sim: https://developer.nvidia.com/blog/validating-drive-sim-camera-models/. We’re planning on making this aspect easier to work with.
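To give a sense of what the “fisheyePolynomial” projection does, the f-theta family maps the angle θ from the optical axis to a radial pixel distance via a polynomial. A minimal sketch of that mapping (the function names and coefficient values are illustrative, not the actual Omniverse attribute names or parameters):

```python
import math

def ftheta_radius(theta, coeffs):
    """Map the angle from the optical axis (radians) to an image-plane
    radius in pixels: r = A + B*theta + C*theta^2 + D*theta^3 + E*theta^4."""
    return sum(c * theta ** i for i, c in enumerate(coeffs))

def project_ftheta(P, coeffs, cx, cy):
    """Project a 3D camera-space point with an f-theta polynomial model."""
    x, y, z = P
    theta = math.atan2(math.hypot(x, y), z)   # angle from the optical axis
    r = ftheta_radius(theta, coeffs)          # radial distance in pixels
    phi = math.atan2(y, x)                    # azimuth in the image plane
    return cx + r * math.cos(phi), cy + r * math.sin(phi)

# Hypothetical coefficients: r ≈ 200*theta, i.e. a near-ideal equidistant lens.
coeffs = (0.0, 200.0, 0.0, 0.0, 0.0)
print(project_ftheta((0.0, 0.0, 1.0), coeffs, 960.0, 540.0))  # -> (960.0, 540.0)
```

Calibrating to this model then amounts to fitting the polynomial coefficients (and the principal point) to your physical lens data.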

Thanks for the reply!
I’ve only skimmed the post so far, but I’ll look into the polynomial model, how to calibrate our cameras with it, and whether it suits our needs. If I understand correctly, what I was originally asking about (implementing our own model) is not currently doable, correct?

Correct. We would need to open it up for that type of customization, but definitely check out the team’s suggestion as that may suit your needs for now.

