I am working on a Jetson TX2 development board, capturing images from the built-in camera with the Argus API. Depending on the user's configuration, I capture several resolutions simultaneously using multiple output streams. I configure the sensor mode to match the largest requested resolution, but at least one of the output streams will usually still be scaled regardless.
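For context, the stream setup looks roughly like this. It is a trimmed sketch against the JetPack 4.x Argus headers (older releases use IOutputStreamSettings without the stream-type argument), with error handling omitted; the resolutions and pixel format here are just examples from one configuration:

```cpp
// Trimmed sketch: two EGL output streams at different resolutions on one
// request, so the ISP produces both sizes from a single sensor capture.
// Assumes JetPack 4.x Argus headers; error handling omitted for brevity.
#include <Argus/Argus.h>
#include <vector>

using namespace Argus;

int main()
{
    UniqueObj<CameraProvider> provider(CameraProvider::create());
    ICameraProvider *iProvider = interface_cast<ICameraProvider>(provider);

    std::vector<CameraDevice*> devices;
    iProvider->getCameraDevices(&devices);

    UniqueObj<CaptureSession> session(
        iProvider->createCaptureSession(devices[0]));
    ICaptureSession *iSession = interface_cast<ICaptureSession>(session);

    // Full-resolution stream (matches the largest requested resolution).
    UniqueObj<OutputStreamSettings> fullSettings(
        iSession->createOutputStreamSettings(STREAM_TYPE_EGL));
    IEGLOutputStreamSettings *iFull =
        interface_cast<IEGLOutputStreamSettings>(fullSettings);
    iFull->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
    iFull->setResolution(Size2D<uint32_t>(2592, 1458));
    UniqueObj<OutputStream> fullStream(
        iSession->createOutputStream(fullSettings.get()));

    // Second stream at a smaller resolution; this is the one the ISP scales.
    UniqueObj<OutputStreamSettings> scaledSettings(
        iSession->createOutputStreamSettings(STREAM_TYPE_EGL));
    IEGLOutputStreamSettings *iScaled =
        interface_cast<IEGLOutputStreamSettings>(scaledSettings);
    iScaled->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
    iScaled->setResolution(Size2D<uint32_t>(1920, 1080));
    UniqueObj<OutputStream> scaledStream(
        iSession->createOutputStream(scaledSettings.get()));

    // One request feeding both streams.
    UniqueObj<Request> request(iSession->createRequest());
    IRequest *iRequest = interface_cast<IRequest>(request);
    iRequest->enableOutputStream(fullStream.get());
    iRequest->enableOutputStream(scaledStream.get());
    iSession->repeat(request.get());

    // ... consume frames from both streams, then stop ...
    iSession->stopRepeat();
    return 0;
}
```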
I am wondering if there is any way I can specify the scaling algorithm to be used by the ISP, or if NVIDIA could point me to any relevant documentation on what is implemented there. I wasn’t able to find anything in the JetPack documentation I have, but I might be looking in the wrong place.
Thanks for the information. I am already using those APIs to set the desired parameters. However, what I was wondering is whether there is any information on the actual scaling algorithm the ISP uses when it downscales the video.
For instance, I set the sensor mode to 2592x1458 and the output stream resolution to 1920x1080. This works fine: the ISP scales down to produce the 1080p output using some algorithm. Are there any details available on the filtering that algorithm uses, or on which algorithms can be selected?
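Concretely, the mode selection looks something like the following sketch (error checks omitted; `devices` and `iRequest` are the same objects as in my earlier snippet, and `getAllSensorModes` is the newer API name, so this assumes a recent JetPack):

```cpp
// Sketch of the sensor mode selection. Picks the 2592x1458 mode explicitly
// instead of letting Argus choose one; error checks omitted for brevity.
ICameraProperties *iProps = interface_cast<ICameraProperties>(devices[0]);
std::vector<SensorMode*> modes;
iProps->getAllSensorModes(&modes);

SensorMode *selected = NULL;
for (size_t i = 0; i < modes.size(); i++) {
    ISensorMode *iMode = interface_cast<ISensorMode>(modes[i]);
    if (iMode->getResolution().width() == 2592 &&
        iMode->getResolution().height() == 1458) {
        selected = modes[i];
        break;
    }
}

ISourceSettings *iSource =
    interface_cast<ISourceSettings>(iRequest->getSourceSettings());
iSource->setSensorMode(selected);
// With the 1920x1080 stream attached, the ISP performs the downscale from
// 2592x1458 using whatever filter it implements internally; that filter is
// what I am asking about.
```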
Thanks for confirming that it is non-public. I wouldn't have expected resampling/scaling algorithms to be a trade secret, given that there are only so many that produce a given result. TI, for instance, documents the scaling algorithms available on several of the SoCs I have worked with.
The reason I ask is that we are putting together technical documentation for a product we are designing. Being able to specify how image sensor pixels map onto output frame pixels at the various resolutions would help us show that a feature we are implementing does what we claim. At the moment we cannot be certain, since we have no documentation of the scaling.