Test video condition

Hello, I followed this page (https://github.com/dusty-nv/jetson-inference/blob/master/docs/imagenet-custom.md) to test video inference, but I ran into a problem. How can I solve it? This is my test run:

bmw@bmw-desktop:~/jetson-inference/python/examples$ ./imagenet.py 2.mkv 23.mkv

jetson.inference -- imageNet loading network using argv command line params

imageNet -- loading classification network model from:
-- prototxt /home/bmw/jetson-inference/python/network/20200820/deploy.prototxt
-- model /home/bmw/jetson-inference/python/network/20200820/snapshot_iter_9500.caffemodel
-- class_labels /home/bmw/jetson-inference/python/network/20200820/labels.txt
-- input_blob 'data'
-- output_blob 'softmax'
-- batch_size 1

[TRT] TensorRT version 5.1.6
[TRT] loading NVIDIA plugins...
[TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[TRT] Plugin Creator registration succeeded - NMS_TRT
[TRT] Plugin Creator registration succeeded - Reorg_TRT
[TRT] Plugin Creator registration succeeded - Region_TRT
[TRT] Plugin Creator registration succeeded - Clip_TRT
[TRT] Plugin Creator registration succeeded - LReLU_TRT
[TRT] Plugin Creator registration succeeded - PriorBox_TRT
[TRT] Plugin Creator registration succeeded - Normalize_TRT
[TRT] Plugin Creator registration succeeded - RPROI_TRT
[TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file /home/bmw/jetson-inference/python/network/20200820/snapshot_iter_9500.caffemodel.1.1.5106.GPU.FP16.engine
[TRT] loading network plan from engine cache... /home/bmw/jetson-inference/python/network/20200820/snapshot_iter_9500.caffemodel.1.1.5106.GPU.FP16.engine
[TRT] device GPU, loaded /home/bmw/jetson-inference/python/network/20200820/snapshot_iter_9500.caffemodel
[TRT] Glob Size is 18017556 bytes.
[TRT] Added linear block of size 1605632
[TRT] Added linear block of size 1204224
[TRT] Added linear block of size 401408
[TRT] Added linear block of size 401408
[TRT] Deserialize required 8455668 microseconds.
[TRT] CUDA engine context initialized on device GPU:
[TRT] -- layers 68
[TRT] -- maxBatchSize 1
[TRT] -- workspace 0
[TRT] -- deviceMemory 3612672
[TRT] -- bindings 2
[TRT] binding 0
-- index 0
-- name 'data'
-- type FP32
-- in/out INPUT
-- # dims 3
-- dim #0 3 (CHANNEL)
-- dim #1 224 (SPATIAL)
-- dim #2 224 (SPATIAL)
[TRT] binding 1
-- index 1
-- name 'softmax'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 5 (CHANNEL)
-- dim #1 1 (SPATIAL)
-- dim #2 1 (SPATIAL)
[TRT] binding to input 0 data binding index: 0
[TRT] binding to input 0 data dims (b=1 c=3 h=224 w=224) size=602112
[TRT] binding to output 0 softmax binding index: 1
[TRT] binding to output 0 softmax dims (b=1 c=5 h=1 w=1) size=20
[TRT] device GPU, /home/bmw/jetson-inference/python/network/20200820/snapshot_iter_9500.caffemodel initialized.
[TRT] imageNet – loaded 5 class info entries
[TRT] imageNet – /home/bmw/jetson-inference/python/network/20200820/snapshot_iter_9500.caffemodel initialized.
[gstreamer] initialized gstreamer, version
[gstreamer] gstDecoder -- creating decoder for 2.mkv
[gstreamer] gstDecoder -- failed to discover stream info
[gstreamer] gstDecoder -- try manually setting the codec with the --input-codec option
[gstreamer] gstDecoder -- failed to create decoder for file:///home/bmw/jetson-inference/python/examples/2.mkv
Traceback (most recent call last):
  File "./imagenet.py", line 58, in <module>
    input = jetson.utils.videoSource(opt.input_URI, argv=sys.argv)
Exception: jetson.utils -- failed to create videoSource device


The error occurs when jetson-inference tries to discover the stream info of your input video.

Would you mind specifying the codec with --input-codec first, to see if that helps?
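For example, assuming 2.mkv contains an H.264 stream (that is an assumption — first check what codec the file actually uses, e.g. with gst-discoverer-1.0 or ffprobe if either is installed), the run would look like:

```shell
# Inspect the container to find the actual video codec
# (either tool works, if installed on your Jetson):
gst-discoverer-1.0 2.mkv
ffprobe 2.mkv

# Then pass the codec explicitly so gstDecoder can skip stream discovery.
# h264 here is an assumption -- substitute the codec the inspection above
# reports (e.g. h265, vp8, vp9, mpeg2, mjpeg):
./imagenet.py --input-codec=h264 2.mkv 23.mkv
```

If the file plays with the codec set manually, the original failure was only in the automatic stream discovery step, not in decoding itself.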