Cannot run the application locally

I was working on Edge Detection using a Stereo Camera. I used the Python API to write codelets, and while doing so I somehow managed to get the whole application running. Now when I run it, the system’s camera window pops up and detects edges on the live stream, but localhost:3000 shows that the channel is empty. Since I’m new to Isaac, I’m not able to figure out what’s going wrong.

Application File



Build File

JSON File



I also think I am running it the wrong way, by skipping the tick function and doing edge detection directly in start(). I know I should be using tick_on_message, but I can’t understand how it works. Can someone please help me with this? @nvidia

Your application is actually relying on OpenCV to handle camera capture, processing (the flip), and display. This mostly bypasses the Isaac SDK engine and the other components, so Sight has no visibility into what is flowing through the application. The Isaac scheduler brings up your EdgeDetector codelet, which, from its perspective, simply never finishes starting up.

You’ll want to decompose this into at least three components, each in its own node, connected by edges. The first is the “camera” component, which handles capture and emits an ImageProto that is delivered through an edge to the “edge_detector” component. The edge_detector, having been configured with tickOnMessage() in start(), runs its tick() method, which makes the OpenCV calls and then emits an image. That image is sent through an edge to the “viewer” component, which is what Isaac Sight has access to.
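As a sketch, the application graph for that three-node layout could look like the JSON below. The node, component, and channel names (camera, edge_detector, Image, edges, etc.) are placeholders for illustration, not names from your app, and the exact component type strings (e.g. for Python codelets and the viewer) should be checked against your SDK release:

```json
{
  "name": "edge_detection",
  "modules": ["viewers"],
  "graph": {
    "nodes": [
      {
        "name": "camera",
        "components": [
          { "name": "MessageLedger", "type": "isaac.alice.MessageLedger" },
          { "name": "Camera", "type": "isaac.alice.PyCodelet" }
        ]
      },
      {
        "name": "edge_detector",
        "components": [
          { "name": "MessageLedger", "type": "isaac.alice.MessageLedger" },
          { "name": "EdgeDetector", "type": "isaac.alice.PyCodelet" }
        ]
      },
      {
        "name": "viewer",
        "components": [
          { "name": "MessageLedger", "type": "isaac.alice.MessageLedger" },
          { "name": "ImageViewer", "type": "isaac.viewers.ImageViewer" }
        ]
      }
    ],
    "edges": [
      { "source": "camera/Camera/image", "target": "edge_detector/EdgeDetector/image" },
      { "source": "edge_detector/EdgeDetector/edges", "target": "viewer/ImageViewer/image" }
    ]
  }
}
```

Each edge's source and target follow the node/component/channel pattern, which is how messages flow between the three nodes.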

In 2020.2, you can take a look at //sdk/apps/tutorials/opencv_edge_detection if you haven’t, for the C++ version of what you’re trying to do (the Python version will not be very different).


@hemals I did try the C++ version of Edge Detection, but got confused by ImageProto. Anyway, I will try doing it the way you described and will get back here with results. Thank you!

@hemals Is it necessary that I decompose everything in the JSON file? When I tried doing that I got some errors, so I connected the components in the main Python file and it worked.

No, it isn’t necessary to decompose into smaller components, but highly encouraged. Isaac SDK helps you decompose your application into cohesive, isolated components that only communicate with each other through messages passed over edges. The JSON file describes the application graph of nodes (processes), their constituent components, and the connections (edges) between them.

Could you explain what you mean by “connected the components in the main Python file”? If you’re thinking of a single Isaac SDK component with a start() method that does little and a tick() method that performs the block under your while loop, that should work fine too.
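The “single component with a thin start() and the loop body in tick()” shape can be sketched in plain Python. Since the Isaac SDK isn’t importable here, a minimal stand-in replaces the SDK’s Codelet base class and a fake frame source replaces the camera; only the start()/tick() split is the point:

```python
# Sketch only: `Codelet` here is a stand-in for the Isaac SDK base class
# (e.g. engine.pyalice.Codelet in 2020.x releases), and the frame source
# is fake. The real scheduler calls tick() for you at the requested rate.

class Codelet:  # stand-in for the SDK base class
    def tick_periodically(self, interval):
        self._interval = interval  # the real scheduler uses this

class EdgeDetector(Codelet):
    def start(self):
        # one-time setup only: open the capture, choose a tick rate
        self.frames = iter(range(3))   # pretend camera (hypothetical)
        self.tick_periodically(0.1)    # tick at ~10 Hz

    def tick(self):
        # the body of the old `while` loop goes here: grab ONE frame,
        # process it, publish/show the result, then return
        frame = next(self.frames, None)
        if frame is None:
            return
        self.last = f"edges({frame})"  # placeholder for the OpenCV work

d = EdgeDetector()
d.start()
d.tick()
print(d.last)  # -> edges(0)
```

The key design point is that tick() never blocks or loops: each invocation handles exactly one frame, and the scheduler provides the repetition.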

Okay, so I understood the first part. As for the “connected the components in the main Python file” part, I guess I misinterpreted: I was connecting message channels using connect() in the main py file. I have a basic idea of how the flow works now, so thank you @hemals

You’ll want to decompose this into at least three components, each in their own node, connected by edges between them.

so @hemals, with this do you mean that I should create three separate codelets, i.e. Camera, EdgeDetector & Viewer, in the main py file?

It would be better to separate each PyCodelet into separate Python files, but conceptually, yes.

@hemals I’m worried that I might need more help. I’m now trying to decompose everything, and I have a codelet called Camera, which captures images. I’m running tick() periodically, and inside tick() I’m capturing and reading the image. But how should I pass it to the EdgeDetector codelet? I have created proto_tx & proto_rx. Here’s a screenshot
Selection_024

If this is the Camera codelet, then you wouldn’t need an rx for InputImage, only the tx for Image. In start(), you would set up “cap” instead of in tick(), and in tick(), you can call self.cap.read(), populate an ImageProto message, and publish it on self.tx.
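The shape of that Camera codelet can be sketched as below. The Isaac SDK classes aren’t importable here, so FakeCapture stands in for cv2.VideoCapture and Tx stands in for the channel returned by the SDK’s tx helper; in a real codelet you would populate an ImageProto rather than publishing the raw frame:

```python
# Sketch only: FakeCapture and Tx are stand-ins for cv2.VideoCapture(0)
# and the Isaac SDK tx channel. The point is the split: the device is
# opened once in start(), and tick() reads and publishes one frame.

class FakeCapture:  # stands in for cv2.VideoCapture(0)
    def __init__(self):
        self.n = 0
    def read(self):
        self.n += 1
        return True, f"frame{self.n}"  # (ret, frame), like OpenCV

class Tx:  # stands in for the SDK tx channel
    def __init__(self):
        self.published = []
    def publish(self, msg):
        self.published.append(msg)

class Camera:
    def start(self):
        self.cap = FakeCapture()  # open the device once, not every tick
        self.tx = Tx()

    def tick(self):
        ret, frame = self.cap.read()
        if not ret:
            return
        # in a real codelet: init an ImageProto, copy `frame` into its
        # buffer, then publish on self.tx
        self.tx.publish(frame)

cam = Camera()
cam.start()
cam.tick()
cam.tick()
print(cam.tx.published)  # -> ['frame1', 'frame2']
```

Opening the capture in start() matters: reopening the device on every tick is slow and can fail intermittently.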

@hemals I understood the publishing part using self.tx, but what does it mean to populate an ImageProto message?
Selection_025

You need to copy the data from the native cv frame into the ImageProto and publish it.
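“Populating” the ImageProto essentially means recording the image shape and element type in the proto fields and copying the raw pixel bytes into the message buffer. The numpy sketch below shows that copy-and-rebuild step with a plain dict in place of the proto; the field names (rows, cols, channels, elementType, dataBufferIndex) mirror ImageProto but should be verified against your SDK version:

```python
# Sketch of what "populate an ImageProto" amounts to: metadata in the
# proto fields, pixel bytes in a message buffer. A dict stands in for
# the capnp proto here so the example is self-contained.
import numpy as np

frame = np.random.randint(0, 256, size=(4, 6, 3), dtype=np.uint8)  # fake BGR frame

# -- transmit side: fill in metadata, ship the flat bytes --
proto = {
    "rows": frame.shape[0],
    "cols": frame.shape[1],
    "channels": frame.shape[2],
    "elementType": "uint8",
    "dataBufferIndex": 0,
}
buffers = [frame.tobytes()]  # the pixel data travels as a raw buffer

# -- receive side: rebuild the array from metadata + buffer --
received = np.frombuffer(buffers[proto["dataBufferIndex"]], dtype=np.uint8)
received = received.reshape(proto["rows"], proto["cols"], proto["channels"])

assert np.array_equal(frame, received)  # lossless roundtrip
```

The receiver can only rebuild the image because the metadata travels with the bytes, which is why filling in those proto fields is not optional.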

@hemals Here the self.tx of the Camera codelet is getting published, but when I run the application, it keeps looping in Camera’s tick() method (tick_periodically), while EdgeDetector’s tick() method never responds.
Selection_026



And if that’s correct, how should I pass the image (self.tx in the Camera codelet) into EdgeDetector’s tick() method, where it detects edges?

I understood why EdgeDetector’s tick() method is not working: self.rx.message was None. I don’t know how to fix this. I’m transmitting the ImageProto from Camera’s tick() method.

@hemals I have now decomposed and connected all the components, and added the RX & TX parts. I’m able to transmit the message, but I can’t receive it in the other codelet.




The image was getting captured, and I got no errors before/after publishing (in the Camera codelet’s tick() method). I used the same tag (InputImage) for both transmitting and receiving the message, and I used tick_on_message(self.rx). It seems like EdgeDetector’s tick() method is getting skipped. I need some help. Am I doing it the wrong way? Is there any workaround for this issue?
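One thing worth checking in a situation like this: tick_on_message(self.rx) only fires when a message actually arrives on that rx channel, and a message only arrives if the application graph contains an edge whose source and target spell out node/component/channel-tag exactly. Using the same tag name on both sides is not enough by itself. A hypothetical edge (your node and component names will differ):

```json
{
  "edges": [
    {
      "source": "camera/Camera/InputImage",
      "target": "edge_detector/EdgeDetector/InputImage"
    }
  ]
}
```

If that edge is missing or any of the three path segments is misspelled, the receiver’s tick() is never scheduled, which matches the “tick() is getting skipped” symptom.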

@hemals I have now done everything right except the last part: I’m not sure how to connect my output to Isaac Sight. In the viewer node, I have a separate codelet called Viewer and an ImageViewer component. I have asked the same here (How to connect final output to ISAAC Sight?)

Replied in your other thread. Your Viewer codelet is not adding much here; it’s easier to just use the SDK ImageViewer codelet directly.