Hi,
VisionWorks doesn’t read from the camera directly. Instead, it uses a low-level API (e.g. V4L2 or GStreamer) and then wraps the captured buffer into a vx_image.
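As a rough sketch of the wrapping step, you can hand an externally captured frame to VisionWorks with the standard OpenVX vxCreateImageFromHandle call. The buffer layout below (packed RGB, no row padding) and the function name wrap_frame are illustrative assumptions, not taken from this thread:

```c
/* Sketch only: wrap a frame captured elsewhere (V4L2, GStreamer appsink, ...)
 * into a vx_image without copying. Layout values are assumptions. */
#include <VX/vx.h>

vx_image wrap_frame(vx_context context, void *frame_data,
                    vx_uint32 width, vx_uint32 height)
{
    /* Describe the memory layout of the captured buffer;
       here we assume a packed 24-bit RGB frame with no row padding. */
    vx_imagepatch_addressing_t addr = {
        .dim_x    = width,
        .dim_y    = height,
        .stride_x = 3,                        /* bytes per RGB pixel */
        .stride_y = (vx_int32)(width * 3),    /* bytes per row */
        .scale_x  = VX_SCALE_UNITY,
        .scale_y  = VX_SCALE_UNITY,
        .step_x   = 1,
        .step_y   = 1,
    };
    void *ptrs[] = { frame_data };

    /* Import the existing host buffer as a vx_image (zero-copy). */
    return vxCreateImageFromHandle(context, VX_DF_IMAGE_RGB,
                                   &addr, ptrs, VX_MEMORY_TYPE_HOST);
}
```

The buffer must stay valid for the lifetime of the vx_image (or until you swap it out with vxSwapImageHandle).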
Regarding Argus vs. VisionWorks — or, more precisely, Argus vs. GStreamer — these are two ways to get the camera source on Tegra, and the low-level implementation is the same (both use nvcamera).
For the data path, please check this comment:
Finding the bottleneck in video stitching application - Jetson TX1 - NVIDIA Developer Forums
Thanks.