Hello AI World - new object detection training and video interfaces

Hi everyone,

Happy to have just merged a significant set of updates and new features to jetson-inference master:

note: API changes from this update are intended to be backwards-compatible, so previous code should still run.

The same code can now run from images/video/cameras and encoded network streams (RTP/RTSP):

import jetson.inference
import jetson.utils

import argparse
import sys

# parse the command line
parser = argparse.ArgumentParser(description="Locate objects in a live camera stream using an object detection DNN.")

parser.add_argument("input_URI", type=str, default="", nargs='?', help="URI of the input stream")
parser.add_argument("output_URI", type=str, default="", nargs='?', help="URI of the output stream")
parser.add_argument("--network", type=str, default="ssd-mobilenet-v2", help="pre-trained model to load (see below for options)")

try:
	opt = parser.parse_known_args()[0]
except SystemExit:
	print("")
	parser.print_help()
	sys.exit(0)

# load the object detection network
net = jetson.inference.detectNet(opt.network, sys.argv)

# create video sources & outputs
input = jetson.utils.videoSource(opt.input_URI, argv=sys.argv)
output = jetson.utils.videoOutput(opt.output_URI, argv=sys.argv)

# process frames until EOS or the user exits
while True:
	img = input.Capture()
	detections = net.Detect(img)
	output.Render(img)
	output.SetStatus("{:s} | Network {:.0f} FPS".format(opt.network, net.GetNetworkFPS()))
	net.PrintProfilerTimes()

	if not input.IsStreaming() or not output.IsStreaming():
		break
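
For example, the same detectnet.py script can run against cameras, files, and network streams just by changing the stream URIs on the command line. The URIs below are illustrative placeholders; see the camera streaming docs in the repo for the full list of supported protocols:

python3 detectnet.py csi://0                              # MIPI CSI camera in, display out
python3 detectnet.py /dev/video0 my_video.mp4             # V4L2 camera in, video file out
python3 detectnet.py rtsp://<user>:<pass>@<ip>:554/feed   # RTSP network stream in
python3 detectnet.py my_video.mp4 rtp://<remote-ip>:1234  # video file in, RTP stream out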

For more info, see the new pages in the repo.

Thanks to everyone from the forums and GitHub who helped to test these updates in advance!

Without a doubt, your tutorial is the most useful tool NVIDIA provides, at least for me. I hope the ROS wrapper takes advantage of these new features and a more complete suite of tools.
Thanks a lot

Thanks, you reminded me that I need to double-check those tomorrow… they should still build/run, but they can probably stand to be updated. I should also look into ROS2 support (hopefully it can remain in the same repo/branch).

Jetsons are robot-friendly, so yeah, any improvement on the ROS/ROS2 side would be great. The biggest gap in ROS these days is the lack of easy integration between deep learning results and ROS video nodes and point clouds. For example, a node that takes a 2D bounding box, pulls metric info from the point cloud/depth image, and publishes the 3D bbox position would be a great tool. That is probably done in the latest Isaac, but realistically most people buy Jetsons to work with ROS rather than Isaac, and that probably won't change soon.
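
As a rough sketch of that idea (plain NumPy, no ROS message plumbing; the function name and bbox layout are my own assumptions, not part of jetson-inference or ros_deep_learning), lifting a 2D detection into a 3D position from an aligned depth image and pinhole intrinsics could look like:

import numpy as np

def bbox_to_3d(depth, bbox, fx, fy, cx, cy):
	"""depth: HxW array in meters, aligned to the RGB frame;
	bbox: (left, top, right, bottom) in pixels."""
	left, top, right, bottom = (int(x) for x in bbox)
	patch = depth[top:bottom, left:right]
	valid = patch[np.isfinite(patch) & (patch > 0)]
	if valid.size == 0:
		return None                  # no usable depth inside the bbox
	z = float(np.median(valid))      # robust depth estimate for the object
	u = (left + right) / 2.0         # bbox center in pixel coordinates
	v = (top + bottom) / 2.0
	x = (u - cx) * z / fx            # pinhole back-projection
	y = (v - cy) * z / fy
	return (x, y, z)                 # position in the camera frame (meters)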

OK, ros_deep_learning is working on ROS Melodic again after a few updates. Will look at ROS2 Eloquent next.
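
If it helps anyone following along, the nodes get launched along these lines (the exact launch file and argument names are from memory, so check the ros_deep_learning README for the current ones):

$ roslaunch ros_deep_learning detectnet.ros1.launch input:=csi://0 output:=display://0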

I suggest adding YOLOv3 and YOLOv4, both the tiny and full versions, to jetson-inference. It would be a big help for developers.

If you have any notes, tutorials, or documentation for that, please share them with us.

Hi @wmindramalw, there is a YOLO sample included with TensorRT found under /usr/src/tensorrt/samples/python/yolov3_onnx.

I prefer to stick with SSD-Mobilenet in the jetson-inference tutorial, since it is trained in PyTorch, which the rest of the training tutorial uses and which can be run on the Jetson.
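
For reference, once you've followed the PyTorch training tutorial and exported your model to ONNX, you can point detectnet.py at it with flags along these lines (the model/label paths here are illustrative):

python3 detectnet.py --model=ssd-mobilenet.onnx --labels=labels.txt \
	--input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
	csi://0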