I want to set up a stereo camera so I can simulate disparity. I think an older version had a disparity sensor, but I can’t seem to find it now.
What I have done so far is set up two identical cameras, a left and a right. I have offset the right camera and cross-linked the two by their prim names, roughly as sketched below.
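The camera setup part of what I have looks more or less like this. It is only a minimal sketch using the raw USD Python API; the stage path, prim paths, baseline, and intrinsics are placeholder values of mine, and I have left out the cross-linking step since that is the part I am unsure about.

```python
# Rough sketch of my stereo rig setup (paths and numbers are placeholders).
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.Open("my_scene.usd")  # assumption: editing an existing stage
baseline = 0.12                          # right-camera offset in metres (my value)

left = UsdGeom.Camera.Define(stage, "/World/StereoRig/CameraLeft")
right = UsdGeom.Camera.Define(stage, "/World/StereoRig/CameraRight")

# Identical intrinsics on both cameras.
for cam in (left, right):
    cam.GetFocalLengthAttr().Set(24.0)
    cam.GetHorizontalApertureAttr().Set(20.955)

# Offset the right camera along +X by the baseline.
UsdGeom.XformCommonAPI(right.GetPrim()).SetTranslate(Gf.Vec3d(baseline, 0.0, 0.0))

# Mark the stereo roles (standard UsdGeomCamera stereoRole attribute).
left.GetStereoRoleAttr().Set(UsdGeom.Tokens.left)
right.GetStereoRoleAttr().Set(UsdGeom.Tokens.right)

stage.Save()
```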
Is this all I need to do to get a disparity sensor?
And if so, how do I display the results so I can check the output prior to creating the simulated data?
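To be concrete, what I am hoping to end up with is just a quick visual sanity check along these lines, assuming I can get the disparity out as a numpy array (the function name and colour map are my own placeholders):

```python
# Quick visual sanity check of a disparity map (H x W array, values in pixels).
import numpy as np
import matplotlib.pyplot as plt

def show_disparity(disparity: np.ndarray) -> None:
    """Display a disparity map with a colorbar so outliers are easy to spot."""
    plt.imshow(disparity, cmap="plasma")
    plt.colorbar(label="disparity [px]")
    plt.title("Stereo disparity")
    plt.show()
```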
I was hoping that the disparity ground truth could be generated much the same way we can generate semantic segmentation masks, rather than being an estimate. Are there plans to add ground-truth disparity outputs?
If setting the cameras as ‘stereo’ doesn’t provide a disparity output, can you explain the purpose of cross-linking them? What is the difference between cross-linking and having two unlinked cameras if I need to calculate the disparity separately anyway? Especially since I can already get the distance to the camera as a separate output.
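For context, if I do have to compute disparity myself from a depth output, this is roughly what I had in mind: the standard d = f * B / Z conversion, assuming I can get a per-pixel depth map measured along the optical axis (not the Euclidean distance to the camera) and that both cameras share the same intrinsics. The names and numbers below are my own placeholders; I would still much prefer a native ground-truth disparity output.

```python
# Derive ground-truth disparity from a rectified, axis-aligned depth map.
import numpy as np

def depth_to_disparity(depth_m: np.ndarray,
                       focal_length_px: float,
                       baseline_m: float) -> np.ndarray:
    """Convert a depth map [m] to disparity [px] via d = f * B / Z."""
    disparity = np.zeros_like(depth_m, dtype=np.float32)
    valid = depth_m > 0  # avoid division by zero on invalid/background pixels
    disparity[valid] = focal_length_px * baseline_m / depth_m[valid]
    return disparity

# Example with placeholder numbers: f = 600 px, B = 0.12 m, Z = 3 m
# gives d = 600 * 0.12 / 3 = 24 px.
```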