I’m currently working on a project that requires object detection to later be implemented with ROS and a robot, so I have been following the Hello AI World tutorials from Dusty (Jetson AI Fundamentals - S3E5 - Training Object Detection Models - YouTube). I have also been reading through the Github (GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.) and the NVIDIA forums.
I have made my own custom dataset, trained the model, and run DetectNet. Everything is running smoothly, and I can see the bounding box information printed in the terminal. The information I am referring to is what Dusty mentions in this forum post (DetectNet Methods - #7 by dusty_nv).
Since the bounding box information changes frame-by-frame, I wanted to collect all the data in either a text file or an array/list, so that I could use it for further calculations. However, I do not know how to actually write the bounding box data from the livestream to a text file or an array/list.
Any code, instruction, or help would be much appreciated as I am a struggling MechE student trying to learn this new topic. Thank you!
Hi @MechE_Learning_CS, the member variables of the detections array are listed in the Python API reference docs here, under the
Detection = <type 'jetson.inference.detectNet.Detection'>
Object Detection Result
Data descriptors defined here:
Area: Area of bounding box
Bottom: Bottom bounding box coordinate
Center: Center (x,y) coordinate of bounding box
ClassID: Class index of the detected object
Confidence: Confidence value of the detected object
Height: Height of bounding box
Instance: Instance index of the detected object
Left: Left bounding box coordinate
Right: Right bounding box coordinate
Top: Top bounding box coordinate
Width: Width of bounding box
So you can use these members as you wish, whether it's writing them to an array, a file, etc. For example, the following creates a list of tuples containing the class IDs and bounding box coordinates:
detections = net.Detect(img)
my_list = []

for det in detections:
    my_list.append( (det.ClassID, det.Left, det.Top, det.Right, det.Bottom) )
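And since you also asked about writing the data to a text file, here's a minimal sketch using Python's built-in csv module. (The save_detections helper, the frame counter, and the CSV column order are my own illustration choices, not part of jetson-inference; the function accepts the list returned by net.Detect(img).)

```python
import csv

def save_detections(detections, path, frame=0):
    """Append one CSV row per detection to a text file.

    `detections` is any iterable of objects with ClassID/Left/Top/Right/Bottom
    members, e.g. the list returned by net.Detect(img).
    """
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for det in detections:
            writer.writerow([frame, det.ClassID,
                             det.Left, det.Top, det.Right, det.Bottom])
```

Opening the file in append mode ("a") means you can call this once per frame inside the capture loop, with an incrementing frame counter, and accumulate rows from the whole stream in one file.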
Hope that helps!
Thank you so much for the information and the quick reply! It helps a ton!!
Just one more question: When should I implement this code? I understand making a similar Python script and using the python3 command in the terminal. However, if my camera is running DetectNet continuously, should I input the code before turning on DetectNet? Can I input it mid-stream?
In the meantime, I'll take a crack at writing a script and see where I can get with it. Thanks again!!!
No problem - I would recommend creating your own copy of detectnet.py with your modifications, and then running that. Each time you make a modification to the Python script, you need to re-run it. It's an iterative development process that generally goes like:
make coding changes → run/test → exit the script → make more coding changes → etc.
I see, that makes a lot of sense! I'll throw in that loop somewhere in that original 10-line code, let it run, make adjustments, and so forth. I'll make sure to report back with progress!
Just a note - for new projects, I would recommend deriving your code from detectnet.py, as it uses newer APIs from jetson.inference than the 'Object Detection in 10 Lines of Code' example did. That '10 Lines of Code' example will still run, but the gstCamera/glDisplay Python APIs it uses aren't being updated anymore, in favor of the videoSource/videoOutput APIs that you see in detectnet.py.
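For reference, the core of a detectnet.py-style loop with the newer videoSource/videoOutput pattern can be sketched as a small function. (run_detection_loop and the on_detections callback are hypothetical names of my own, not jetson-inference APIs; on a Jetson you would pass in real detectNet, videoSource, and videoOutput objects, as shown in the comments.)

```python
def run_detection_loop(net, source, output, on_detections):
    """Capture/detect/render loop in the style of detectnet.py.

    net, source, and output stand in for jetson.inference.detectNet,
    jetson.utils.videoSource, and jetson.utils.videoOutput instances;
    on_detections is called with the detections list once per frame.
    """
    while output.IsStreaming():
        img = source.Capture()            # grab the next frame
        detections = net.Detect(img)      # run inference (also overlays boxes)
        on_detections(detections)         # your logging/collection hook
        output.Render(img)                # display or stream the frame

# On a Jetson this would be wired up roughly like detectnet.py does
# (the model name and camera URI below are assumptions):
#   net     = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
#   camera  = jetson.utils.videoSource("/dev/video0")
#   display = jetson.utils.videoOutput("display://0")
#   run_detection_loop(net, camera, display, print)
```

Keeping the per-frame work in a callback like this lets you drop in the list-building or file-writing code from earlier in the thread without touching the loop itself.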
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.