I want to feed a video file as input to detectNet instead of the Jetson Nano's camera input. Please share the code if it is already available; otherwise, let me know what changes are needed to modify the existing code.



Hi Shankar, see these forks of jetson-inference which integrate video playback:

You may also have a look to this post.
[EDIT: This is obsolete now. Check this.]

Thanks Dusty.

I found another previous post related to this and used it as below:

import cv2
import jetson.utils

cam = cv2.VideoCapture("lawn1.avi")

ret, frame = cam.read()
frame_rgba = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)
width = frame.shape[1]
height = frame.shape[0]
img = jetson.utils.cudaFromNumpy(frame_rgba)

detections = net.Detect(img, width, height, opt.overlay)


Hi Shankar,
I tried the solution you provided, but it didn't work for me. Specifically, the last line of code failed:
detections = net.Detect(img, width, height, opt.overlay)
Exception: jetson.inference – detectNet.Detect() failed to parse args tuple
I also tried replacing the img with frame_rgba, which resulted in the same error.
I would like to know if you’ve succeeded with this approach.

Hi Xingtao, are you sure width and height are of type int and opt.overlay is a string? You would get that error if one of the arguments was not of the expected type. You would want to use it with img, not frame_rgba.
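To illustrate the point about argument types, a small helper can coerce the arguments before the call so that the binding always receives a plain `int`, `int`, `str` tuple. This is a hedged sketch: the helper name and the `"box,labels,conf"` default overlay string are my own additions, not part of the thread's code.

```python
def make_detect_args(width, height, overlay="box,labels,conf"):
    """Coerce arguments to the types detectNet.Detect() expects:
    width/height must be Python ints, overlay must be a string."""
    return int(width), int(height), str(overlay)

# Depending on how the frame was produced, shape values can arrive as
# floats or numpy integers; int() turns them into plain Python ints.
w, h, overlay = make_detect_args(1280.0, 720.0)
# detections = net.Detect(img, w, h, overlay)   # call site from the thread
```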

Thank you Dusty for your response! The code works on the master repo, but I was on the branch that aconchillo provided, and I didn't think there would be a difference. Thanks again, Xingtao

Hi, I would like to try the fork of jetson-inference that supports RTSP for jetson.utils.gstCamera(). I'm not sure how I'm supposed to find the forks mentioned. When I list the branches using:

git branch -a

which shows branches such as remotes/origin/20200714 and remotes/origin/L4T-R32.1 …

I don’t see anything that means much (to me). Which one should I be looking at?



You may have a look at:

for branches.
I think support for new video sources such as RTSP came at the end of June 2020, so 20200714 is probably the release you want. You may also check the master branch for the latest improvements.

The 20200714 branch is a backup I saved before I merged all those changes into master (including RTP/RTSP/etc.), so the 20200714 branch does not have those features. To use RTSP, please use jetson-inference master.

Note that RTSP isn't through the gstCamera interface, it is through the videoSource interface (which wraps gstCamera, gstDecoder, imageLoader, etc. - gstDecoder is the one that implements RTP/RTSP streaming). For more info, please see here:
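As a minimal sketch, reading an RTSP stream through videoSource on jetson-inference master could look like the following. The URI is a placeholder, the model/threshold choices are assumptions, and the imports are kept inside the function so the sketch can be read without jetson-inference installed:

```python
def detect_from_rtsp(uri="rtsp://192.168.1.10:8554/stream"):
    """Run detectNet on an RTSP stream via the videoSource interface.
    Requires jetson-inference master installed on the Jetson."""
    import jetson.inference
    import jetson.utils

    net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
    source = jetson.utils.videoSource(uri)    # rtsp:// URIs go to gstDecoder
    output = jetson.utils.videoOutput("display://0")

    while output.IsStreaming():
        img = source.Capture()                # returns a CUDA image
        detections = net.Detect(img)          # dimensions inferred from img
        output.Render(img)
```

Because videoSource dispatches on the URI scheme, the same loop works for files, cameras, and network streams.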

Ahh, I was trying to access RTSP through jetson.utils.gstCamera() when I should have used jetson.utils.videoSource().

This is a problem, however, as I would like to use a GStreamer feature to specify a latency of zero, something I haven't found a way to do with videoSource(). I'll start a new post though, as this is now off topic. Thanks.
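One workaround outside of videoSource() is to hand OpenCV a raw GStreamer pipeline in which rtspsrc's `latency` property is set explicitly. This is a hedged sketch, not code from the thread: the URI and the downstream decode elements are assumptions, and it requires an OpenCV build with GStreamer support.

```python
def rtsp_pipeline(uri, latency_ms=0):
    """Build a GStreamer pipeline string for cv2.VideoCapture that sets
    rtspsrc's latency property directly (latency=0 minimizes the
    jitter-buffer delay)."""
    return (
        f"rtspsrc location={uri} latency={latency_ms} ! "
        # avdec_h264 is the software decoder; on Jetson a hardware
        # decoder element could be substituted here
        "rtph264depay ! h264parse ! avdec_h264 ! "
        "videoconvert ! appsink"
    )

# usage:
# cap = cv2.VideoCapture(rtsp_pipeline("rtsp://camera/stream"), cv2.CAP_GSTREAMER)
```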