I’m building an EdgeDetector application with the Python API, and I have decomposed it into three components: Camera, EdgeDetector, and Viewer. Camera takes in the input image, EdgeDetector runs the detection logic on it, and Viewer displays the result. The components communicate via ImageProto messages over Tx/Rx channels. Right now I can see the end result in the Viewer codelet using OpenCV, but I can’t figure out how to get that result from the Viewer into Sight. Can someone help me out here @nvidia?
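
For context, here is roughly how I’m wiring the three codelets, assuming the Isaac SDK pyalice Python API. This is a simplified sketch: the `my_codelets` module and all node, component, and channel names below are placeholders rather than my exact code, and the call signatures may differ slightly between SDK versions.

```python
# Simplified sketch of my current wiring (assuming the Isaac SDK pyalice Python API).
# Camera, EdgeDetector and Viewer stand in for my own Python codelets; the module
# name and all node/channel names are placeholders, not my exact code.
from engine.pyalice import Application

from my_codelets import Camera, EdgeDetector, Viewer  # hypothetical module holding my codelets

app = Application(name="edge_detector_app")

# One node per component; each Python codelet is added as a component on its node.
app.add("camera").add(Camera)
app.add("detector").add(EdgeDetector)
app.add("viewer").add(Viewer)

# ImageProto messages flow camera -> detector -> viewer over Tx/Rx channels.
app.connect("camera/Camera", "image", "detector/EdgeDetector", "image")
app.connect("detector/EdgeDetector", "edges", "viewer/Viewer", "edges")

# Inside Viewer.tick() I currently decode the ImageProto and display it with
# cv2.imshow(); this is the result I would like to see in Sight instead.
app.run()
```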

