Getting data from the Hello World! jetson-inference tutorials into Python

Let me say in advance that I feel like I should know the answer to this, but I have somehow missed a very simple concept in the tutorials.

I would like to use the basic scripts from the tutorials to get simple x,y screen coordinates for each detected object in the streaming detections tutorial into my Python program.

I have written a simple demo.py program that shows a basic text menu of all the available models installed on our TX2. I can make all the models run, and I get appropriate output on the display (I flipped the image from the TX2).

I see the framerate reported in the OpenGL window, and I certainly understand the syntax and how that happens.

But I am missing some basic concept that would allow me to do the following pseudo-Python:

def get_objects_in_frame():
    # ... detection code goes here ...
    # Build a list of dictionaries: obj_id -> [screen_x, screen_y, obj_class]
    current_objs = [ { obj_id: [ screen_x, screen_y, obj_class ] } ]
    return current_objs

Any nudge in the right direction would be super helpful. And hopefully using Python not C++. Thanks!

Hi,

Are you looking for the bounding box output of jetson-inference?
If so, you can find those values here:
https://github.com/dusty-nv/jetson-inference/blob/master/python/examples/detectnet-console.py#L53
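
For a live stream, a rough sketch along these lines returns the per-object list from the question. This assumes the jetson.inference / jetson.utils Python bindings; the camera URI ("csi://0") and network name ("ssd-mobilenet-v2") are placeholders, and on older releases Detect() also takes the image width and height:

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")   # placeholder camera URI

def get_objects_in_frame():
    img = camera.Capture()
    detections = net.Detect(img)
    current_objs = []
    for obj_id, det in enumerate(detections):
        # det.Center is the (x, y) center of the bounding box in pixels;
        # det.ClassID maps to a human-readable label via GetClassDesc().
        screen_x, screen_y = det.Center
        current_objs.append({obj_id: [screen_x, screen_y, net.GetClassDesc(det.ClassID)]})
    return current_objs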

Thanks.

Yep - that does it, and I found another of Dusty’s tutorials that nailed it as well. Don’t know how I missed those resources the first 25 times I looked.

Thanks!