This is getting very confusing indeed. There is no squeezing step, and there is no “fall out of the camera sensor range rather than squeeze anything”. I am not sure how to explain this better, but I’ll try.
Let’s go back to basics: in a pinhole camera model only two pieces of information count: the field of view and the frame aspect ratio. The focal length and the horizontal film aperture are used to compute the horizontal field of view (also known as the angle of view). That field of view is exactly the same for a pinhole camera and for a fisheye camera, so the extent of the scene each projection model displays horizontally is exactly the same. The difference between the two projections is in how that extent is mapped onto the image, and the effect is most drastic around the image center. Pinhole maps a ray at angle theta to a radius proportional to tan(theta); equidistant maps it to a radius proportional to theta. The extent is the same; the distortion is different. Both models show a type of distortion if compared to human vision, I hope there are no questions about this.
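To make the two mappings concrete, here is a minimal sketch of the math above. The 36 mm aperture used at the end is just an illustrative assumption (a common full-frame width), not something from this thread:

```python
import math

def horizontal_fov(focal_mm, aperture_mm):
    """Horizontal angle of view of a pinhole camera, in radians."""
    return 2.0 * math.atan(aperture_mm / (2.0 * focal_mm))

def radius_pinhole(theta, focal_mm):
    """Image-plane radius of a ray at angle theta: r = f * tan(theta)."""
    return focal_mm * math.tan(theta)

def radius_equidistant(theta, focal_mm):
    """Image-plane radius of a ray at angle theta: r = f * theta."""
    return focal_mm * theta

# Example (assumed numbers): a 30 mm focal length on a 36 mm-wide aperture.
fov = horizontal_fov(30.0, 36.0)
print(f"horizontal FOV: {math.degrees(fov):.1f} deg")
```

Both cameras with the same focal length and aperture cover the same angular extent; only the radial mapping `radius_pinhole` vs. `radius_equidistant` differs.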
So here is a picture showing the same scene with different values of focal length, from 30mm down to 0.5mm. The left column shows pinhole cameras, the right column shows the exact same focal lengths on equidistant projections. Placed in front of the camera is a red sphere. The camera position and everything in the scene is fixed, only the focal length changes.
If we compare to human vision, the equidistant fisheye lens looks less distorted and more natural than pinhole. Agreed?
With relatively long lenses the difference is minor, to the same extent that tan(theta) and theta are similar for small values of theta, correct?
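A quick numeric check of that small-angle claim (the specific half-angles below are just sample values I picked):

```python
import math

def tan_vs_theta(deg):
    """Relative difference between tan(theta) and theta for a half-angle in degrees."""
    theta = math.radians(deg)
    return (math.tan(theta) - theta) / theta

# Small half-angles: tan(theta) and theta are nearly identical.
# Large half-angles: tan(theta) runs away from theta.
for deg in (5, 15, 30, 60, 85):
    print(f"{deg:>2} deg half-angle: relative difference = {tan_vs_theta(deg):.3f}")
```

At a 5-degree half-angle the two mappings differ by a fraction of a percent; near 85 degrees the pinhole radius is several times the equidistant one, which is exactly why long lenses look alike in both columns.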
When the angle of view increases, the visual distortion produced by the pinhole camera becomes more and more apparent and unnatural. Look at the last row, which is an extreme wide-angle lens: the equidistant projection still looks natural, and the sphere still looks like it is relatively near the camera. Conversely, the pinhole image looks completely artificial and massively distorted: the sphere in front of the camera is barely visible, and it is impossible to tell how far from the camera it is by looking at the picture, “as if the center of the image is squeezed into a minuscule spot”, while what is at the edge of the frame is massively stretched. Do you agree that this is the visual effect produced by a pinhole camera?
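That “center squeezed into a minuscule spot” effect can be quantified. The sketch below (with an assumed central region of ±5 degrees) computes what fraction of the half image width the center of the scene occupies under each projection:

```python
import math

def center_fraction_pinhole(total_fov_deg, center_half_deg=5.0):
    """Fraction of the half image width covered by the central +/- center_half_deg,
    under the rectilinear mapping r = f * tan(theta)."""
    return math.tan(math.radians(center_half_deg)) / math.tan(math.radians(total_fov_deg / 2.0))

def center_fraction_equidistant(total_fov_deg, center_half_deg=5.0):
    """Same fraction under the equidistant mapping r = f * theta."""
    return center_half_deg / (total_fov_deg / 2.0)

for fov in (60, 120, 170, 178):
    print(f"FOV {fov:>3} deg: pinhole {center_fraction_pinhole(fov):.4f}, "
          f"equidistant {center_fraction_equidistant(fov):.4f}")
```

As the FOV approaches 180 degrees, tan(fov/2) diverges, so the pinhole fraction collapses toward zero: the sphere at the center shrinks to almost nothing while the edges stretch. The equidistant fraction shrinks only linearly, which is why that column keeps looking natural.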
You can try this in any 3D editor where you have control over the camera field of view: as you approach 180 degrees of field of view, this is exactly what you get.
Now, let me reiterate. Unfortunately, I don’t know why OpenCV seems to crop the result. We will try to find somebody here who knows OpenCV and may be able to answer your specific question. But here is an observation: OpenCV takes the equidistant image produced by Omniverse and, by undistorting it, produces an image in which objects defined by straight 3D lines appear with straight edges. This makes me think that the undistortion is done correctly and that the algorithm simply cropped the image. If there were a bug in the Omniverse equidistant projection, the OpenCV undistortion would produce an image where 3D lines would still be curvy or wobbly in some different way.
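The “straight lines come out straight” argument can be sketched numerically. This is a minimal model with unit focal length, ignoring the principal point and pixel scaling, and it is not OpenCV’s actual API, just the underlying math: project points on a straight 3D line with the equidistant model, undistort by rescaling each radius from theta back to tan(theta), and check that the results are collinear (as a pinhole projection of a 3D line must be):

```python
import math

def project_equidistant(x, y, z):
    """Equidistant fisheye projection with unit focal length: radius = theta."""
    r = math.hypot(x, y)
    if r == 0.0:
        return 0.0, 0.0
    theta = math.atan2(r, z)          # angle between the ray and the optical axis
    return theta * x / r, theta * y / r

def undistort_to_pinhole(u, v):
    """Map equidistant image coordinates back to rectilinear (pinhole) ones."""
    r = math.hypot(u, v)
    if r == 0.0:
        return 0.0, 0.0
    scale = math.tan(r) / r           # here r equals theta, so rescale radially
    return u * scale, v * scale

# Points on a straight 3D line (constant depth, linear in t). If the
# equidistant projection is correct, undistorting it must give collinear points.
pts = [undistort_to_pinhole(*project_equidistant(1.0 + 0.5 * t, 2.0 - t, 4.0))
       for t in range(5)]
```

If Omniverse produced anything other than a true equidistant image, this inverse mapping would not straighten the line; the fact that OpenCV’s output has straight edges suggests only the framing (cropping) is the issue.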