Hello, I am figuring out the basics of the Jetson Nano and detectnet. I was following the tutorial video on object detection and was able to detect objects in still images. But when I moved on to the next step with video files, detectnet would not display the video. I am running directly on the Jetson Nano, hooked up to a display monitor. I am unsure exactly where the problem is: gstreamer, the detectnet code, or a directory issue? For reference, I was following the exact steps from the tutorial video on object detection, and as it was processing, I got an error message saying that detectnet fails to capture video frame. Any ideas as to what the problem is?
Hi,
Could you share the jetson-inference output log with us first?
Thanks.
Here is the whole output log:
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstDecoder – creating decoder for /videos/pedestrians.mp4
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
[gstreamer] gstDecoder – discovered video resolution: 960x540 (framerate 29.970030 Hz)
[gstreamer] gstDecoder – discovered video caps: video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, level=(string)3.1, profile=(string)high, width=(int)960, height=(int)540, framerate=(fraction)30000/1001, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true
[gstreamer] gstDecoder – pipeline string:
[gstreamer] filesrc location=/videos/pedestrians.mp4 ! qtdemux ! queue ! h264parse ! omxh264dec ! video/x-raw ! appsink name=mysink
[video] created gstDecoder from file:///videos/pedestrians.mp4
gstDecoder video options:
– URI: file:///videos/pedestrians.mp4
- protocol: file
- location: /videos/pedestrians.mp4
- extension: mp4
– deviceType: file
– ioType: input
– codec: h264
– width: 960
– height: 540
– frameRate: 29.970030
– bitRate: 0
– numBuffers: 4
– zeroCopy: true
– flipMethod: none
– loop: 0
– rtspLatency 2000
[OpenGL] glDisplay – X screen 0 resolution: 2560x1080
[OpenGL] glDisplay – X window resolution: 2560x1080
[OpenGL] glDisplay – display device initialized (2560x1080)
[video] created glDisplay from display://0
glDisplay video options:
– URI: display://0
- protocol: display
- location: 0
– deviceType: display
– ioType: output
– codec: raw
– width: 2560
– height: 1080
– frameRate: 0.000000
– bitRate: 0
– numBuffers: 4
– zeroCopy: true
– flipMethod: none
– loop: 0
– rtspLatency 2000
detectNet – loading detection network model from:
– model networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
– input_blob ‘Input’
– output_blob ‘NMS’
– output_count ‘NMS_1’
– class_labels networks/SSD-Mobilenet-v2/ssd_coco_labels.txt
– threshold 0.350000
– batch_size 1
[TRT] TensorRT version 8.0.1
[TRT] loading NVIDIA plugins…
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::ScatterND version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] detected model format - UFF (extension ‘.uff’)
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] [MemUsageChange] Init CUDA: CPU +203, GPU +0, now: CPU 230, GPU 1934 (MiB)
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.8001.GPU.FP16.engine
[TRT] loading network plan from engine cache… /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.8001.GPU.FP16.engine
[TRT] device GPU, loaded /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 264, GPU 1940 (MiB)
[TRT] Loaded engine size: 34 MB
[TRT] [MemUsageSnapshot] deserializeCudaEngine begin: CPU 264 MiB, GPU 1940 MiB
[TRT] Using cublas a tactic source
[TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +158, GPU -17, now: CPU 440, GPU 1924 (MiB)
[TRT] Using cuDNN as a tactic source
[TRT] [MemUsageChange] Init cuDNN: CPU +241, GPU -28, now: CPU 681, GPU 1896 (MiB)
[TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 681, GPU 1895 (MiB)
[TRT] Deserialization required 7082183 microseconds.
[TRT] [MemUsageSnapshot] deserializeCudaEngine end: CPU 681 MiB, GPU 1896 MiB
[TRT] [MemUsageSnapshot] ExecutionContext creation begin: CPU 681 MiB, GPU 1896 MiB
[TRT] Using cublas a tactic source
[TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +2, now: CPU 681, GPU 1898 (MiB)
[TRT] Using cuDNN as a tactic source
[TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 681, GPU 1899 (MiB)
[TRT] Total per-runner device memory is 22045696
[TRT] Total per-runner host memory is 136432
[TRT] Allocated activation device memory of size 14261248
[TRT] [MemUsageSnapshot] ExecutionContext creation end: CPU 683 MiB, GPU 1900 MiB
[TRT]
[TRT] CUDA engine context initialized on device GPU:
[TRT] – layers 119
[TRT] – maxBatchSize 1
[TRT] – deviceMemory 14261248
[TRT] – bindings 3
[TRT] binding 0
– index 0
– name ‘Input’
– type FP32
– in/out INPUT
– # dims 3
– dim #0 3
– dim #1 300
– dim #2 300
[TRT] binding 1
– index 1
– name ‘NMS’
– type FP32
– in/out OUTPUT
– # dims 3
– dim #0 1
– dim #1 100
– dim #2 7
[TRT] binding 2
– index 2
– name ‘NMS_1’
– type FP32
– in/out OUTPUT
– # dims 3
– dim #0 1
– dim #1 1
– dim #2 1
[TRT]
[TRT] binding to input 0 Input binding index: 0
[TRT] binding to input 0 Input dims (b=1 c=3 h=300 w=300) size=1080000
[TRT] binding to output 0 NMS binding index: 1
[TRT] binding to output 0 NMS dims (b=1 c=1 h=100 w=7) size=2800
[TRT] binding to output 1 NMS_1 binding index: 2
[TRT] binding to output 1 NMS_1 dims (b=1 c=1 h=1 w=1) size=4
[TRT]
[TRT] device GPU, /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff initialized.
[TRT] W = 7 H = 100 C = 1
[TRT] detectNet – maximum bounding boxes: 100
[TRT] detectNet – loaded 91 class info entries
[TRT] detectNet – number of object classes: 91
[gstreamer] opening gstDecoder for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> omxh264dec-omxh264dec0
[gstreamer] gstreamer changed state from NULL to READY ==> h264parse1
[gstreamer] gstreamer changed state from NULL to READY ==> queue0
[gstreamer] gstreamer changed state from NULL to READY ==> qtdemux1
[gstreamer] gstreamer changed state from NULL to READY ==> filesrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> omxh264dec-omxh264dec0
[gstreamer] gstreamer changed state from READY to PAUSED ==> h264parse1
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> queue0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer stream status CREATE ==> sink
[gstreamer] gstreamer changed state from READY to PAUSED ==> qtdemux1
[gstreamer] gstreamer changed state from READY to PAUSED ==> filesrc0
[gstreamer] gstreamer stream status ENTER ==> sink
(detectnet:11): GStreamer-CRITICAL **: 13:19:15.718: gst_caps_is_empty: assertion ‘GST_IS_CAPS (caps)’ failed
(detectnet:11): GStreamer-CRITICAL **: 13:19:15.726: gst_caps_truncate: assertion ‘GST_IS_CAPS (caps)’ failed
(detectnet:11): GStreamer-CRITICAL **: 13:19:15.727: gst_caps_fixate: assertion ‘GST_IS_CAPS (caps)’ failed
(detectnet:11): GStreamer-CRITICAL **: 13:19:15.727: gst_caps_get_structure: assertion ‘GST_IS_CAPS (caps)’ failed
(detectnet:11): GStreamer-CRITICAL **: 13:19:15.727: gst_structure_get_string: assertion ‘structure != NULL’ failed
(detectnet:11): GStreamer-CRITICAL **: 13:19:15.727: gst_mini_object_unref: assertion ‘mini_object != NULL’ failed
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Allocating new output: 960x544 (x 11), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3605: Send OMX_EventPortSettingsChanged: nFrameWidth = 960, nFrameHeight = 540
[gstreamer] gstDecoder – onPreroll()
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer message duration-changed ==> h264parse1
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer mysink taglist, video-codec=(string)“H.264\ /\ AVC”, bitrate=(uint)720629;
[gstreamer] gstreamer mysink taglist, encoder=(string)Lavf54.63.104, container-format=(string)“ISO\ MP4/M4A”;
[gstreamer] gstreamer mysink taglist, video-codec=(string)“H.264\ (High\ Profile)”, bitrate=(uint)720629;
[gstreamer] gstDecoder recieve caps: video/x-raw, format=(string)NV12, width=(int)960, height=(int)540, interlace-mode=(string)progressive, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)jpeg, colorimetry=(string)bt601, framerate=(fraction)30000/1001
[gstreamer] gstDecoder – recieved first frame, codec=h264 format=nv12 width=960 height=540 size=777600
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> omxh264dec-omxh264dec0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> h264parse1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> queue0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> qtdemux1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> filesrc0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
RingBuffer – allocated 4 buffers (777600 bytes each, 3110400 bytes total)
RingBuffer – allocated 4 buffers (1555200 bytes each, 6220800 bytes total)
[gstreamer] gstreamer mysink taglist, video-codec=(string)“H.264\ (High\ Profile)”, bitrate=(uint)720629, minimum-bitrate=(uint)166633, maximum-bitrate=(uint)166633;
[gstreamer] gstreamer mysink taglist, video-codec=(string)“H.264\ (High\ Profile)”, bitrate=(uint)720629, minimum-bitrate=(uint)127792, maximum-bitrate=(uint)166633;
[gstreamer] gstreamer mysink taglist, video-codec=(string)“H.264\ (High\ Profile)”, bitrate=(uint)720629, minimum-bitrate=(uint)127792, maximum-bitrate=(uint)1089470;
[gstreamer] gstreamer mysink taglist, video-codec=(string)“H.264\ (High\ Profile)”, bitrate=(uint)720629, minimum-bitrate=(uint)127792, maximum-bitrate=(uint)1291348;
[gstreamer] gstreamer mysink taglist, video-codec=(string)“H.264\ (High\ Profile)”, bitrate=(uint)720629, minimum-bitrate=(uint)127792, maximum-bitrate=(uint)1727472;
[gstreamer] gstreamer mysink taglist, video-codec=(string)“H.264\ (High\ Profile)”, bitrate=(uint)720629, minimum-bitrate=(uint)127792, maximum-bitrate=(uint)1764155;
[gstreamer] gstreamer mysink taglist, video-codec=(string)“H.264\ (High\ Profile)”, bitrate=(uint)720629, minimum-bitrate=(uint)127792, maximum-bitrate=(uint)1797482;
[gstreamer] gstreamer mysink taglist, video-codec=(string)“H.264\ (High\ Profile)”, bitrate=(uint)720629, minimum-bitrate=(uint)127792, maximum-bitrate=(uint)1818821;
[gstreamer] gstreamer mysink taglist, video-codec=(string)“H.264\ (High\ Profile)”, bitrate=(uint)720629, minimum-bitrate=(uint)127792, maximum-bitrate=(uint)2009670;
[gstreamer] gstreamer mysink taglist, video-codec=(string)“H.264\ (High\ Profile)”, bitrate=(uint)720629, minimum-bitrate=(uint)127792, maximum-bitrate=(uint)2079200;
[gstreamer] gstreamer mysink taglist, video-codec=(string)“H.264\ (High\ Profile)”, bitrate=(uint)720629, minimum-bitrate=(uint)113646, maximum-bitrate=(uint)2079200;
[gstreamer] gstreamer mysink taglist, video-codec=(string)“H.264\ (High\ Profile)”, bitrate=(uint)720629, minimum-bitrate=(uint)113646, maximum-bitrate=(uint)21430729;
[gstreamer] gstDecoder – end of stream (EOS)
5 objects detected
detected obj 0 class #1 (person) confidence=0.486816
bounding box 0 (208.828125, 214.365234) (279.140625, 388.388672) w=70.312500 h=174.023438
detected obj 1 class #1 (person) confidence=0.543945
bounding box 1 (249.843750, 224.384766) (315.000000, 386.279297) w=65.156250 h=161.894531
detected obj 2 class #1 (person) confidence=0.859863
bounding box 2 (634.218750, 56.821289) (662.343750, 141.196289) w=28.125000 h=84.375000
detected obj 3 class #1 (person) confidence=0.750000
bounding box 3 (592.968750, 80.024414) (618.281250, 153.852539) w=25.312500 h=73.828125
detected obj 4 class #1 (person) confidence=0.788574
bounding box 4 (676.406250, 65.489502) (700.781250, 138.295898) w=24.375000 h=72.806396
[OpenGL] glDisplay – set the window size to 960x540
[OpenGL] creating 960x540 texture (GL_RGB8 format, 1555200 bytes)
[cuda] registered openGL texture for interop access (960x540, GL_RGB8, 1555200 bytes)
[TRT] ------------------------------------------------
[TRT] Timing Report /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT] ------------------------------------------------
[TRT] Pre-Process CPU 0.10506ms CUDA 2.93354ms
[TRT] Network CPU 23005.45508ms CUDA 22993.54883ms
[TRT] Post-Process CPU 133.02391ms CUDA 133.41849ms
[TRT] Visualize CPU 143.18718ms CUDA 143.98921ms
[TRT] Total CPU 23281.77148ms CUDA 23273.88867ms
[TRT] ------------------------------------------------
[TRT] note – when processing a single image, run ‘sudo jetson_clocks’ before
to disable DVFS for more accurate profiling/timing measurements
[gstreamer] gstDecoder – end of stream (EOS) has been reached, stream has been closed
detectnet: shutting down…
[gstreamer] gstDecoder – stopping pipeline, transitioning to GST_STATE_NULL
[gstreamer] gstDecoder – pipeline stopped
detectnet: shutdown complete.
Apologies for the weird link at the beginning; that is still part of the output. The overall goal is to use the Python version of detectnet, but I used detectnet instead of detectnet.py for this example, as I was following the video.
I also retried with detectnet.py and got a different error message within an otherwise similar output log, stating: [OpenGL] failed to create X11 Window.
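For reference, the exact commands I ran (from inside the container, with the video mounted at /videos) were along these lines:

```shell
# C++ version (this is the run that produced the log above)
detectnet /videos/pedestrians.mp4

# Python version - this is the one that failed with
# "[OpenGL] failed to create X11 Window"
detectnet.py /videos/pedestrians.mp4
```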
Hi @user99278, it seems that the video file ends early, as opposed to there being an actual error in the log. I noticed that this is a copy of the video (/videos/pedestrians.mp4); is it possible that the copy is corrupted in some way?
Can you try playing these videos with the video-viewer tool first? (video-viewer is similar to the detectnet program, but without the DNN inferencing)
video-viewer /usr/share/visionworks/sources/data/pedestrians.mp4
video-viewer /usr/share/visionworks/sources/data/parking.avi
So I tried video-viewer using the commands you gave there, but got an error:
root@jnyberg-desktop:/jetson-inference# video-viewer /usr/share/visionworks/sources/data/pedestrians.mp4
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstDecoder – couldn’t find file ‘/usr/share/visionworks/sources/data/pedestrians.mp4’
[gstreamer] gstDecoder – failed to create decoder for file:///usr/share/visionworks/sources/data/pedestrians.mp4
video-viewer: failed to create input stream
What should I do from here?
Do you have the VisionWorks sample videos in /usr/share/visionworks/sources/data? If not, where did you get the pedestrians.mp4 that you originally ran detectnet on?
BTW, if you are using the Jetson Nano Developer Kit, these should have come with the SD card image.
Yes, I used the Jetson Nano Developer Kit with the SD card image. I double-checked by following the path to make sure those videos are in that directory, and even played the videos manually to confirm they work. But when running them through video-viewer, they did not show.
It’s strange, because I can run the exact same command here without issue. I wonder if it’s a permissions issue - what happens if you run sudo video-viewer /usr/share/visionworks/sources/data/pedestrians.mp4?
ls -ll /usr/share/visionworks/sources/data
total 51364
-rw-r--r-- 1 root root 179920 Oct 31 2017 baboon.jpg
-rw-r--r-- 1 root root 13062944 Oct 31 2017 cars.mp4
-rw-r--r-- 1 root root 185 Oct 31 2017 feature_tracker_demo_config.ini
-rw-r--r-- 1 root root 185 Oct 31 2017 feature_tracker_nvxcu_demo_config.ini
-rw-r--r-- 1 root root 408 Oct 31 2017 hough_transform_demo_config.ini
-rw-r--r-- 1 root root 11169870 Oct 31 2017 left_right.mp4
-rw-r--r-- 1 root root 37268 Oct 31 2017 lena.jpg
-rw-r--r-- 1 root root 3847 Oct 31 2017 mask_all.png
-rw-r--r-- 1 root root 2021 Oct 31 2017 mask_center.png
-rw-r--r-- 1 root root 2849 Oct 31 2017 mask_none.png
-rw-r--r-- 1 root root 76 Oct 31 2017 motion_estimation_demo_config.ini
-rw-r--r-- 1 root root 276 Oct 31 2017 object_tracker_nvxcu_sample_config.ini
-rw-r--r-- 1 root root 10637716 Oct 31 2017 parking.avi
-rw-r--r-- 1 root root 1063970 Oct 31 2017 pedestrians.h264
-rw-r--r-- 1 root root 1264869 Oct 31 2017 pedestrians.mp4
-rw-r--r-- 1 root root 15125380 Oct 31 2017 signs.avi
-rw-r--r-- 1 root root 180 Oct 31 2017 stereo_matching_demo_config.ini
Also, can you try running video-viewer on the test video from this section of the docs?
https://github.com/dusty-nv/jetson-inference/blob/master/docs/imagenet-console-2.md#processing-a-video
So I tried using sudo, to no avail:
root@jnyberg-desktop:/jetson-inference# sudo video-viewer /usr/share/visionworks/sources/data/pedestrians.mp4
bash: sudo: command not found
But I did download the test video from the other section, went through the same setup as in your video, and got it to work! Progress! So does this mean there is an issue with permissions?
Ah okay, it appears that you are running it from inside the container? If so, that makes sense with the paths. Do you have /usr/share/visionworks/sources/data/ mounted to /videos when you start the container?
What does it show when you do ls -ll /videos inside the container?
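For reference, a typical way to mount that directory when launching the container (assuming you start it with the docker/run.sh script from the jetson-inference repo) looks like this:

```shell
# run from the jetson-inference directory on the host;
# --volume maps a host path into the container, here making the
# VisionWorks sample videos visible at /videos inside the container
docker/run.sh --volume /usr/share/visionworks/sources/data:/videos
```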
If I understand you correctly, I tried sudo both inside and outside the container, without luck. But yes, I’m working from inside the container, because it seems to be the only way for me to access tools like detectnet and video-viewer. When I originally followed your video, I had the path mounted to /videos, though I did not have it mounted during the last few attempts. I just mounted it again and tried to run video-viewer, but nothing:
root@jnyberg-desktop:/jetson-inference# video-viewer /usr/share/visionworks/sources/data/pedestrians.mp4
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstDecoder – couldn’t find file ‘/usr/share/visionworks/sources/data/pedestrians.mp4’
[gstreamer] gstDecoder – failed to create decoder for file:///usr/share/visionworks/sources/data/pedestrians.mp4
video-viewer: failed to create input stream
Actually, I was now able to get video-viewer to work through the /videos directory. But detectnet still does not work through /videos, giving this output:
root@jnyberg-desktop:/jetson-inference# detectnet /videos/pedestrians.mp4
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstDecoder – creating decoder for /videos/pedestrians.mp4
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
[gstreamer] gstDecoder – discovered video resolution: 960x540 (framerate 29.970030 Hz)
[gstreamer] gstDecoder – discovered video caps: video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, level=(string)3.1, profile=(string)high, width=(int)960, height=(int)540, framerate=(fraction)30000/1001, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true
[gstreamer] gstDecoder – pipeline string:
[gstreamer] filesrc location=/videos/pedestrians.mp4 ! qtdemux ! queue ! h264parse ! omxh264dec ! video/x-raw ! appsink name=mysink
[video] created gstDecoder from file:///videos/pedestrians.mp4
gstDecoder video options:
– URI: file:///videos/pedestrians.mp4
- protocol: file
- location: /videos/pedestrians.mp4
- extension: mp4
– deviceType: file
– ioType: input
– codec: h264
– width: 960
– height: 540
– frameRate: 29.970030
– bitRate: 0
– numBuffers: 4
– zeroCopy: true
– flipMethod: none
– loop: 0
– rtspLatency 2000
[OpenGL] glDisplay – X screen 0 resolution: 2560x1080
[OpenGL] glDisplay – X window resolution: 2560x1080
[OpenGL] glDisplay – display device initialized (2560x1080)
[video] created glDisplay from display://0
glDisplay video options:
– URI: display://0
- protocol: display
- location: 0
– deviceType: display
– ioType: output
– codec: raw
– width: 2560
– height: 1080
– frameRate: 0.000000
– bitRate: 0
– numBuffers: 4
– zeroCopy: true
– flipMethod: none
– loop: 0
– rtspLatency 2000
detectNet – loading detection network model from:
– model networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
– input_blob ‘Input’
– output_blob ‘NMS’
– output_count ‘NMS_1’
– class_labels networks/SSD-Mobilenet-v2/ssd_coco_labels.txt
– threshold 0.500000
– batch_size 1
[TRT] TensorRT version 8.0.1
[TRT] loading NVIDIA plugins…
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::ScatterND version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] detected model format - UFF (extension ‘.uff’)
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] [MemUsageChange] Init CUDA: CPU +203, GPU +0, now: CPU 229, GPU 1896 (MiB)
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.8001.GPU.FP16.engine
[TRT] loading network plan from engine cache… /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.8001.GPU.FP16.engine
[TRT] device GPU, loaded /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 285, GPU 1886 (MiB)
[TRT] Loaded engine size: 56 MB
[TRT] [MemUsageSnapshot] deserializeCudaEngine begin: CPU 285 MiB, GPU 1886 MiB
[TRT] Using cublas a tactic source
[TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +158, GPU +10, now: CPU 461, GPU 1902 (MiB)
[TRT] Using cuDNN as a tactic source
[TRT] [MemUsageChange] Init cuDNN: CPU +241, GPU +41, now: CPU 702, GPU 1943 (MiB)
[TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 702, GPU 1936 (MiB)
[TRT] Deserialization required 62410991 microseconds.
[TRT] [MemUsageSnapshot] deserializeCudaEngine end: CPU 702 MiB, GPU 1938 MiB
[TRT] [MemUsageSnapshot] ExecutionContext creation begin: CPU 702 MiB, GPU 1926 MiB
[TRT] Using cublas a tactic source
[TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 702, GPU 1926 (MiB)
[TRT] Using cuDNN as a tactic source
[TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +2, now: CPU 702, GPU 1928 (MiB)
[TRT] Total per-runner device memory is 46256128
[TRT] Total per-runner host memory is 128128
[TRT] Allocated activation device memory of size 14261248
[TRT] [MemUsageSnapshot] ExecutionContext creation end: CPU 704 MiB, GPU 1925 MiB
[TRT]
[TRT] CUDA engine context initialized on device GPU:
[TRT] – layers 119
[TRT] – maxBatchSize 1
[TRT] – deviceMemory 14261248
[TRT] – bindings 3
[TRT] binding 0
– index 0
– name ‘Input’
– type FP32
– in/out INPUT
– # dims 3
– dim #0 3
– dim #1 300
– dim #2 300
[TRT] binding 1
– index 1
– name ‘NMS’
– type FP32
– in/out OUTPUT
– # dims 3
– dim #0 1
– dim #1 100
– dim #2 7
[TRT] binding 2
– index 2
– name ‘NMS_1’
– type FP32
– in/out OUTPUT
– # dims 3
– dim #0 1
– dim #1 1
– dim #2 1
[TRT]
[TRT] binding to input 0 Input binding index: 0
[TRT] binding to input 0 Input dims (b=1 c=3 h=300 w=300) size=1080000
[TRT] binding to output 0 NMS binding index: 1
[TRT] binding to output 0 NMS dims (b=1 c=1 h=100 w=7) size=2800
[TRT] binding to output 1 NMS_1 binding index: 2
[TRT] binding to output 1 NMS_1 dims (b=1 c=1 h=1 w=1) size=4
[TRT]
[TRT] device GPU, /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff initialized.
[TRT] W = 7 H = 100 C = 1
[TRT] detectNet – maximum bounding boxes: 100
[TRT] detectNet – loaded 91 class info entries
[TRT] detectNet – number of object classes: 91
[gstreamer] opening gstDecoder for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> omxh264dec-omxh264dec0
[gstreamer] gstreamer changed state from NULL to READY ==> h264parse1
[gstreamer] gstreamer changed state from NULL to READY ==> queue0
[gstreamer] gstreamer changed state from NULL to READY ==> qtdemux1
[gstreamer] gstreamer changed state from NULL to READY ==> filesrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> omxh264dec-omxh264dec0
[gstreamer] gstreamer changed state from READY to PAUSED ==> h264parse1
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> queue0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer stream status CREATE ==> sink
[gstreamer] gstreamer changed state from READY to PAUSED ==> qtdemux1
[gstreamer] gstreamer changed state from READY to PAUSED ==> filesrc0
[gstreamer] gstreamer stream status ENTER ==> sink
(detectnet:30): GStreamer-CRITICAL **: 21:17:45.369: gst_caps_is_empty: assertion ‘GST_IS_CAPS (caps)’ failed
(detectnet:30): GStreamer-CRITICAL **: 21:17:45.374: gst_caps_truncate: assertion ‘GST_IS_CAPS (caps)’ failed
(detectnet:30): GStreamer-CRITICAL **: 21:17:45.374: gst_caps_fixate: assertion ‘GST_IS_CAPS (caps)’ failed
(detectnet:30): GStreamer-CRITICAL **: 21:17:45.374: gst_caps_get_structure: assertion ‘GST_IS_CAPS (caps)’ failed
(detectnet:30): GStreamer-CRITICAL **: 21:17:45.374: gst_structure_get_string: assertion ‘structure != NULL’ failed
(detectnet:30): GStreamer-CRITICAL **: 21:17:45.374: gst_mini_object_unref: assertion ‘mini_object != NULL’ failed
detectnet: failed to capture video frame
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
detectnet: failed to capture video frame
NvMMLiteBlockCreate : Block : BlockType = 261
Allocating new output: 960x544 (x 11), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3605: Send OMX_EventPortSettingsChanged: nFrameWidth = 960, nFrameHeight = 540
detectnet: failed to capture video frame
[gstreamer] gstDecoder – onPreroll()
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message duration-changed ==> h264parse1
[gstreamer] gstreamer mysink taglist, video-codec=(string)“H.264\ /\ AVC”, bitrate=(uint)720629;
[gstreamer] gstreamer mysink taglist, encoder=(string)Lavf54.63.104, container-format=(string)“ISO\ MP4/M4A”;
[gstreamer] gstreamer mysink taglist, video-codec=(string)“H.264\ (High\ Profile)”, bitrate=(uint)720629;
detectnet: failed to capture video frame
[gstreamer] gstDecoder recieve caps: video/x-raw, format=(string)NV12, width=(int)960, height=(int)540, interlace-mode=(string)progressive, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)jpeg, colorimetry=(string)bt601, framerate=(fraction)30000/1001
[gstreamer] gstDecoder – recieved first frame, codec=h264 format=nv12 width=960 height=540 size=777600
RingBuffer – allocated 4 buffers (777600 bytes each, 3110400 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> omxh264dec-omxh264dec0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> h264parse1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> queue0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> qtdemux1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> filesrc0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (High\ Profile)", bitrate=(uint)720629, minimum-bitrate=(uint)166633, maximum-bitrate=(uint)166633;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (High\ Profile)", bitrate=(uint)720629, minimum-bitrate=(uint)127792, maximum-bitrate=(uint)166633;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (High\ Profile)", bitrate=(uint)720629, minimum-bitrate=(uint)127792, maximum-bitrate=(uint)1089470;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (High\ Profile)", bitrate=(uint)720629, minimum-bitrate=(uint)127792, maximum-bitrate=(uint)1291348;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (High\ Profile)", bitrate=(uint)720629, minimum-bitrate=(uint)127792, maximum-bitrate=(uint)1727472;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (High\ Profile)", bitrate=(uint)720629, minimum-bitrate=(uint)127792, maximum-bitrate=(uint)1764155;
RingBuffer – allocated 4 buffers (1555200 bytes each, 6220800 bytes total)
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (High\ Profile)", bitrate=(uint)720629, minimum-bitrate=(uint)127792, maximum-bitrate=(uint)1797482;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (High\ Profile)", bitrate=(uint)720629, minimum-bitrate=(uint)127792, maximum-bitrate=(uint)1818821;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (High\ Profile)", bitrate=(uint)720629, minimum-bitrate=(uint)127792, maximum-bitrate=(uint)2009670;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (High\ Profile)", bitrate=(uint)720629, minimum-bitrate=(uint)127792, maximum-bitrate=(uint)2079200;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (High\ Profile)", bitrate=(uint)720629, minimum-bitrate=(uint)113646, maximum-bitrate=(uint)2079200;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (High\ Profile)", bitrate=(uint)720629, minimum-bitrate=(uint)113646, maximum-bitrate=(uint)21430729;
[gstreamer] gstDecoder – end of stream (EOS)
4 objects detected
detected obj 0 class #1 (person) confidence=0.597656
bounding box 0 (89.648438, 350.947266) (151.757812, 441.650391) w=62.109375 h=90.703125
detected obj 1 class #1 (person) confidence=0.835449
bounding box 1 (653.437500, 62.259521) (688.125000, 150.688477) w=34.687500 h=88.428955
detected obj 2 class #1 (person) confidence=0.839844
bounding box 2 (578.437500, 88.330078) (608.437500, 162.553711) w=30.000000 h=74.223633
detected obj 3 class #1 (person) confidence=0.724121
bounding box 3 (644.531250, 61.072998) (673.593750, 130.781250) w=29.062500 h=69.708252
[OpenGL] glDisplay – set the window size to 960x540
[OpenGL] creating 960x540 texture (GL_RGB8 format, 1555200 bytes)
[cuda] registered openGL texture for interop access (960x540, GL_RGB8, 1555200 bytes)
[TRT] ------------------------------------------------
[TRT] Timing Report /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT] ------------------------------------------------
[TRT] Pre-Process CPU 0.09203ms CUDA 1.65302ms
[TRT] Network CPU 89986.04688ms CUDA 89927.14062ms
[TRT] Post-Process CPU 159.57614ms CUDA 159.56714ms
[TRT] Visualize CPU 1453.83960ms CUDA 1455.13928ms
[TRT] Total CPU 91599.55469ms CUDA 91543.50781ms
[TRT] ------------------------------------------------
[TRT] note – when processing a single image, run 'sudo jetson_clocks' before
to disable DVFS for more accurate profiling/timing measurements
[gstreamer] gstDecoder – end of stream (EOS) has been reached, stream has been closed
detectnet: shutting down…
[gstreamer] gstDecoder – stopping pipeline, transitioning to GST_STATE_NULL
[gstreamer] gstDecoder – pipeline stopped
detectnet: shutdown complete.
OK, does detectnet / detectnet.py work if you try playing a different video file, or does it only not work with pedestrians.mp4?
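For example, you could try it like this (the file path below is just a placeholder — substitute any other H.264/H.265 video you have available in the container):

```shell
# run the C++ version of detectnet on a different test video
detectnet /videos/your_test_video.mp4

# and the Python version the same way, for comparison
detectnet.py /videos/your_test_video.mp4
```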
Tried it on other videos, and it does not seem to work on them either. It gives the same error message: [OpenGL] failed to create X11 Window.
Also, on a side note: I never noticed it before (I guess because it is a pop-up window that goes away), but this time I was staring at the monitor the whole time, and it gave a message saying the system was throttled due to over-current. To double-check whether this was a problem, I re-ran detectnet on still images and also ran video-viewer. While it doesn't always give the message, sometimes it does, and both still run fine. Even though those other tests still seem to operate, would that over-current throttling pose a problem for running detectnet on video? To add, that message does not always appear while running detectnet on video either.
Since video-viewer is working, is there a chance the detectnet code I have is different from what I should be using? I just use the one from the container, but maybe it is supposed to be accessed from somewhere else, or updated?
Hmm sorry, I am at a bit of a loss since this problem hasn’t been occurring otherwise. I wonder if it’s somehow related to the larger resolution of your desktop (2560x1920). Can you try running detectnet with --output-width=1280 --output-height=720?
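To make that concrete, the full invocation would look roughly like this (pedestrians.mp4 stands in for whichever video you are testing with):

```shell
# limit the display window to 1280x720 instead of the native desktop size
detectnet --output-width=1280 --output-height=720 /videos/pedestrians.mp4
```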
The code that’s inside the container should be fine, but you can try pulling the latest from the repo and building it from source (and not using the container anymore).
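Building from source follows roughly these steps from the jetson-inference README (exact package names may vary slightly by JetPack version):

```shell
# install build dependencies
sudo apt-get update
sudo apt-get install git cmake libpython3-dev python3-numpy

# clone the repo with its submodules and build it
git clone --recursive https://github.com/dusty-nv/jetson-inference
cd jetson-inference
mkdir build && cd build
cmake ../
make -j$(nproc)
sudo make install
sudo ldconfig
```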
So, the oddest thing: when I adjusted the output size with the flags you suggested, it does now display when I use detectnet (not detectnet.py), but it shows not the whole video, only a segment of it (usually the last half). Any thoughts on that?
Hmm okay… it would seem this could be related somehow to the 2560x1920 display resolution you are using (not sure why). If you change your desktop to 1920x1080, does it work without needing the extra flags to detectnet?
Do you mean it only shows half the frame, or it only plays the last half of the video (temporally)?
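For reference, the desktop resolution can usually be changed from a terminal with xrandr. The output name HDMI-0 below is only an assumption — check what xrandr lists for your setup first:

```shell
# list connected outputs and their supported modes
xrandr

# switch the desktop to 1920x1080 (replace HDMI-0 with your actual output name)
xrandr --output HDMI-0 --mode 1920x1080
```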