I am looking for profiling information on Orin that describes its parallel processing capability during model inference.
How does Orin handle, say, 'n' sensors whose perception stacks (raw message processing + ML model prediction) need to run in parallel? Does a designer have to attach specific processing segments to GPUs?
Please check nsys (Nsight Systems) for profiling.
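A typical invocation looks like the sketch below; `my_app` is a placeholder for your application, and you can extend the `--trace` list with other domains your workload uses:

```shell
# Profile the application, tracing CUDA API/kernel activity and NVTX ranges,
# and write the result to report.nsys-rep for viewing in the Nsight Systems GUI.
nsys profile --trace=cuda,nvtx -o report ./my_app
```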
We have several processing units like VIC, NVENC, PVA, DLA, GPU, etc. You may use NvMedia/DW APIs to run image processing operations on VIC/NVENC/PVA, and use TensorRT for deep learning operations (it uses the GPU/DLA). You may also use the GPU for other computations such as image processing. You can design your workflow to make use of these HW blocks and perform operations in parallel.
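A minimal sketch of this structure, with one pipeline per sensor running concurrently: the stage functions here are hypothetical stand-ins, where in a real application `preprocess` would call NvMedia/DW APIs targeting VIC/NVENC/PVA and `infer` would execute a TensorRT engine on the GPU or DLA.

```python
from concurrent.futures import ThreadPoolExecutor

def preprocess(raw):
    # Placeholder for an image-processing stage (e.g. offloaded to VIC or PVA).
    return [x * 2 for x in raw]

def infer(tensor):
    # Placeholder for a TensorRT inference stage (GPU or DLA).
    return sum(tensor)

def sensor_pipeline(raw):
    # Full perception stack for one sensor: raw processing + model prediction.
    return infer(preprocess(raw))

def run_all(sensor_frames):
    # One worker per sensor; because each stage can target a different HW
    # block, stages from different sensors can overlap in time.
    with ThreadPoolExecutor(max_workers=len(sensor_frames)) as pool:
        return list(pool.map(sensor_pipeline, sensor_frames))

print(run_all([[1, 2], [3, 4], [5, 6]]))  # [6, 14, 22]
```

The point of the sketch is the shape of the workflow, not the math: the designer partitions each pipeline into stages and assigns each stage to a HW block, rather than pinning whole sensors to GPUs.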