ISP feature description


I want to use the ISP features in an application. To fully utilize the hardware module, I would like to know which features are supported by the ISP and by the Argus API (if there are any differences). For a concrete application it is also useful to know what strengths and weaknesses the implemented algorithms have and in which ranges I can apply parameters.
I don't really want to know how these algorithms are implemented; it would just be nice to know what to expect.

Unfortunately the API documentation (Tegra Multimedia API, Release 32.2) doesn't provide the information I need.

Is there any more detailed information about the implemented operations than "various image pre-processing"?

Thank you for any reply.

Best Regards


The ISP engine only works with Bayer sensors. You may follow the sensor driver programming guide to bring up your Bayer sensor, or check the camera modules from our partners.

Hi Archoran,
e-con Systems is one of NVIDIA's camera partners and provides camera solutions for the Jetson TX2/TX1/Nano and AGX Xavier boards. You can have a look at all the camera modules we provide for the aforementioned boards.
We also provide complete documentation covering the products, applications, and supported features, along with sample applications that can be readily ported to your use case.

For more details, kindly check the website provided in my signature. You can also contact us for more information.

Hi Dane,

The sensor I use is the OV5693 provided with the Jetson Development Kit. I think the ISP features are correctly implemented there, since I did not change the sensor driver. (I can't say that with certainty, since there is documentation neither for this driver nor for the ISP.)

What I want to know is what you can do on the user side. The ISP and camera interface provide features that an application can use through the right interfaces. But I can only use those features if I know what they are and how I have to configure the API(s) to use them. The documentation of the Argus / Multimedia API did not answer my question.

Is there any information you can provide about the features of the ISP?
For example: In what range can I expect the bin sizes of a sharpness map? How much do the denoise modes slow down my imaging pipeline? Are there any statements about the latency of the ISP pipeline? Or about the maximum number of extensions you can activate without overloading the module?

Thanks in advance,

hello Archoran,

all features related to the ISP are protected content.
We cannot expose which internal algorithms are used in a public discussion thread.

Please contact a Jetson Preferred Partner for further steps.

Hi JerryChang,

Like I said before, I do not want to know which internal algorithms are used. I only want to use them, and for that I need to know whether, for example, I have to stay within specified ranges.

I understand that you do not want to provide information about the algorithms, because they give a competitive advantage… But since there is no specification of how to use the hardware units correctly (the ISP, but also the VIC), I have to find a way around them…

Thank you for your answer.


The Libargus Camera API is listed and explained in
For using the VIC engine, please refer to NvBufferTransform() and NvBufferComposite()

Also, after flashing the system through SDK Manager, you will see samples in
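To make the VIC path mentioned above concrete, here is a minimal sketch of driving the VIC through NvBufferTransform(). This assumes nvbuf_utils.h from the Multimedia API and two existing dmabuf fds (e.g. from NvBufferCreate() or a camera consumer); the helper name scale_with_vic is my own, and error handling is shortened — it only runs on a Jetson target.

```cpp
// Sketch: scaling/converting one dmabuf into another on the VIC engine.
// Assumption: src_fd and dst_fd are valid dmabuf fds whose NvBuffers
// were created with the desired source/destination resolutions.
#include "nvbuf_utils.h"

int scale_with_vic(int src_fd, int dst_fd)
{
    NvBufferTransformParams params = {0};
    params.transform_flag = NVBUFFER_TRANSFORM_FILTER;  // honor the filter field
    params.transform_filter = NvBufferTransform_Filter_Smart;

    // NvBufferTransform() runs on the VIC, not the GPU; it scales and
    // converts between the formats of the two buffers. Returns 0 on success.
    return NvBufferTransform(src_fd, dst_fd, &params);
}
```

The destination resolution and color format are taken from the destination buffer itself, so scaling is expressed simply by creating dst_fd with a different size.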

Thanks, Dane, for your answer.

So the ISP functions can be used via the request settings of Argus?
What does the bypass function setPostProcessingEnable(bool) do?
And what exactly does it enable or disable?

Is the VIC only accessible via the NvBuffer API?
Does NvBuffer have an interface for EGLStream post-processing?

I already checked the sample files and built my own interface around the Argus implementation to match the other APIs I have.

hello Archoran,

please check the brief camera sensor pipeline below.
Sensor(raw) -> CSI-> VI-> ISP(yuv)-> [Post-Processing]-> Encoder/Decoder-> Display

The ISP engine handles lens shading, optical black, demosaicing, pixel correction, tone mapping, etc.
It also handles the conversion from Bayer sensor data to the output YUV format.

The setPostProcessingEnable() function configures the feature controls before the capture frame is sent to the next stage.
For example, you could check the Argus samples (argus/samples/denoise/main.cpp): set up a low-light environment, then enable/disable the flag and compare the difference in the preview stream.
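As a sketch of how such ISP controls are set per capture request, following the pattern in the denoise sample: the request exposes optional interfaces via interface_cast, which returns NULL when a control is unsupported. The helper name enableDenoise is my own; this only runs against the Argus library on a Jetson target.

```cpp
// Sketch: enabling ISP denoise on an Argus capture request,
// modeled on argus/samples/denoise/main.cpp.
#include <Argus/Argus.h>
using namespace Argus;

void enableDenoise(Request *request)
{
    // interface_cast returns NULL if the interface is not supported,
    // so the check is required before using it.
    IDenoiseSettings *denoise = interface_cast<IDenoiseSettings>(request);
    if (denoise)
    {
        denoise->setDenoiseMode(DENOISE_MODE_HIGH_QUALITY);
        denoise->setDenoiseStrength(1.0f);  // per the headers, range is [-1.0, 1.0]
    }
}
```

The same request-interface pattern applies to the other ISP controls (edge enhancement, auto-control settings, and so on).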


Yes, if you use tegra_multimedia_api.
If you use GStreamer, it is implemented as the nvvidconv plugin.
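For illustration, a typical pipeline using nvvidconv might look like the following. This is a sketch assuming an Argus-capable Bayer camera (nvarguscamerasrc drives the ISP) and a display sink; it only runs on a Jetson board.

```shell
# Sketch: ISP capture via nvarguscamerasrc, then VIC-backed
# scaling/conversion via nvvidconv, rendered to the display.
gst-launch-1.0 nvarguscamerasrc ! \
  'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM), width=640, height=480' ! \
  nvoverlaysink
```

The downscale from 1080p to 640x480 between the two caps filters is what exercises the VIC.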

I am not sure what EGLStream post-processing refers to. Please share more about it.
By the way, you can get EGLImage through below function calls:

// Create EGLImage from dmabuf fd
ctx->egl_image = NvEGLImageFromFd(ctx->egl_display, fd);
if (ctx->egl_image == NULL)
    ERROR_RETURN("Failed to map dmabuf fd (0x%X) to EGLImage", fd);

// Running algo process with EGLImage via GPU multi cores

// Destroy EGLImage
NvDestroyEGLImage(ctx->egl_display, ctx->egl_image);
ctx->egl_image = NULL;
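The "algo process via GPU" step in the snippet above can be sketched as follows, following the pattern used in the Multimedia API's NvCudaProc.cpp: the EGLImage is registered with the CUDA driver API and mapped to a CUeglFrame whose plane pointers a kernel can read. The helper name process_on_gpu is my own, error handling is shortened, and this only runs on a Jetson target with CUDA.

```cpp
// Sketch: mapping an EGLImage into CUDA for GPU processing.
#include <cuda.h>
#include <cudaEGL.h>

void process_on_gpu(EGLImageKHR egl_image)
{
    CUgraphicsResource resource = NULL;
    CUeglFrame frame;

    // Register the EGLImage and fetch the mapped frame description.
    cuGraphicsEGLRegisterImage(&resource, egl_image,
                               CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
    cuGraphicsResourceGetMappedEglFrame(&frame, resource, 0, 0);

    // frame.frame.pPitch[0] etc. now point at the image planes;
    // launch your CUDA kernel on them here.

    cuCtxSynchronize();
    cuGraphicsUnregisterResource(resource);
}
```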


Thank you for the information! I will check out the sample.


Oh, that is fascinating information! Actually, I want to process the image on the GPU, so I need an interface for the buffer to handle the output as an EGL component. Is there a way the buffer can act as an EGLStream producer?

Thanks in advance!

You may refer to the following Argus samples
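As a rough sketch of what those samples do: in the Argus releases shipped with L4T 32.x, an OutputStream of type STREAM_TYPE_EGL acts directly as an EGLStream producer, so a CUDA (or GL) consumer can connect to the other end. The helper name createEglStream is my own, interface names may differ slightly between Argus releases, and error handling is shortened.

```cpp
// Sketch: creating an Argus OutputStream that is an EGLStream producer,
// modeled on the CUDA-consumer Argus samples.
#include <Argus/Argus.h>
using namespace Argus;

OutputStream *createEglStream(ICaptureSession *iSession, EGLDisplay display)
{
    UniqueObj<OutputStreamSettings> settings(
        iSession->createOutputStreamSettings(STREAM_TYPE_EGL));
    IEGLOutputStreamSettings *iSettings =
        interface_cast<IEGLOutputStreamSettings>(settings);
    if (!iSettings)
        return NULL;

    iSettings->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
    iSettings->setResolution(Size2D<uint32_t>(1920, 1080));
    iSettings->setEGLDisplay(display);  // tie the stream to this display

    // The returned stream is the EGLStream producer; attach a consumer
    // (e.g. CUDA via cuEGLStreamConsumerConnect) on the other end.
    return iSession->createOutputStream(settings.get());
}
```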