How to run detection on a demo movie file (mp4) on Jetson TX2?

Hi,
I am trying to run detectNet on a demo movie (mp4), but I don't know how to use gst-launch.

Please teach me.

Hi,

You can find this information in our tutorial:

For example, to launch detectNet with the on-board camera:

1. Build

$ git clone https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ mkdir build
$ cd build
$ cmake ../
$ make

2. Run

$ cd aarch64/bin
$ ./detectnet-camera
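
If I recall the tutorial correctly, detectnet-camera also accepts an optional network name argument, e.g. (a hypothetical example, assuming the coco-dog model is present):

$ ./detectnet-camera coco-dog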

Thanks.

Thank you, but I already know that.
However, I don't want to use the camera.

I want to run it on a demo movie file (.mp4).

I can already run detection with the camera and on pictures.

How do I run detection on a demo movie file (.mp4) on the Jetson TX2?


Hi,

It’s recommended to read our jetson_inference tutorial first:

For an image, you can compile and execute this application, which will open the image with Qt:

$ ./detectnet-console dog_1.jpg output_1.jpg coco-dog

For video, you can keep using the detectnet-camera application but change the GStreamer pipeline here:
https://github.com/dusty-nv/jetson-inference/blob/master/util/camera/gstCamera.cpp#L321

You can find the pipeline command for video in this document:
https://developer.download.nvidia.com/embedded/L4T/r28_Release_v1.0/Docs/Jetson_TX2_Accelerated_GStreamer_User_Guide.pdf
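
For reference, the pipeline at that line is assembled as a string in gstCamera::buildLaunchStr(). A simplified sketch of that code (not verbatim; see the link above for the exact source):

// util/camera/gstCamera.cpp -- buildLaunchStr(), simplified sketch
std::ostringstream ss;
ss << "nvcamerasrc fpsRange=\"30.0 30.0\" ! video/x-raw(memory:NVMM), width=(int)" << mWidth << ", height=(int)" << mHeight << ", format=(string)NV12 ! nvvidconv flip-method=" << flipMethod << " ! ";
ss << "video/x-raw ! appsink name=mysink";
mLaunchStr = ss.str();   // later handed to gst_parse_launch()

Swapping the nvcamerasrc part of this string for a file-decoding chain is what the rest of this thread works out.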

Thanks.

Thank you very much, AastaLLL.

Okay, I understand that I can change the GStreamer pipeline.
But how do I write the pipeline?

I found out how to use pipelines and wrote some, but I got a pipeline error.

That is why I wrote this topic.

Very good reply.

Thank you.

Hi AastaLLL,

I read the pipeline document, but I can't load the mp4 file.

How do I use a GStreamer pipeline for mp4?

You may build and prototype your pipeline in a shell with gst-launch.
From the Ubuntu file explorer, right-click on your mp4 file, select Properties, go to the Video tab and look for the container and video codec. For this example I have a file with a Quicktime container and H265 video codec. With GStreamer you can read it from file, extract the video from the Quicktime container (qtdemux), then decode the H265 stream and display it:

gst-launch-1.0 filesrc location=Videos/Tears_400_x265.mp4 ! qtdemux ! h265parse ! omxh265dec ! videoconvert ! ximagesink

You may adapt for your video file.
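
If the file-explorer route is inconvenient, gst-discoverer-1.0 (from the GStreamer base tools, assuming they are installed) reports the container and codecs from the command line:

gst-discoverer-1.0 Videos/Tears_400_x265.mp4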

When it works, you may try

  1. Dirty way: replace the pipeline at line 338 of gstCamera.cpp, changing

ss << "nvcamerasrc fpsRange=\"30.0 30.0\" ! video/x-raw(memory:NVMM), width=(int)" << mWidth << ", height=(int)" << mHeight << ", format=(string)NV12 ! nvvidconv flip-method=" << flipMethod << " ! ";

into

ss << "filesrc location=Videos/Tears_400_x265.mp4 ! qtdemux ! h265parse ! omxh265dec ! ";

and recompile.
I have not tested this, and I'm unsure about the expected final format. If it doesn't work, you may try adding one of these caps:

ss << "filesrc location=Videos/Tears_400_x265.mp4 ! qtdemux ! h265parse ! omxh265dec ! video/x-raw, format=NV12 ! ";

ss << "filesrc location=Videos/Tears_400_x265.mp4 ! qtdemux ! h265parse ! omxh265dec ! video/x-raw(memory:NVMM), format=NV12 ! ";

ss << "filesrc location=Videos/Tears_400_x265.mp4 ! qtdemux ! h265parse ! omxh265dec ! video/x-raw(memory:NVMM), format=I420 ! ";
  2. Better way would be to use v4l2loopback to emulate a V4L2 camera. You may install v4l2loopback following this post. It will create a virtual video node (/dev/video1 if you only have the onboard camera). Then you can feed the loopback with a pipeline from a shell:
gst-launch-1.0 filesrc location=Videos/Tears_400_x265.mp4 ! qtdemux ! h265parse ! omxh265dec ! videoconvert ! 'video/x-raw, format=BGR' ! tee ! v4l2sink device=/dev/video1

Then you can see /dev/video1 emulating a camera:

v4l2-ctl -d/dev/video1 --all

Then you can just change line 39 of detectnet-camera.cpp from:

#define DEFAULT_CAMERA -1	// -1 for onboard camera, or change to index of /dev/video V4L2 camera (>=0)

to:

#define DEFAULT_CAMERA 1	// For /dev/video1

Rebuild and try.

[EDIT: I am now far from my Jetson for a few days. I just had time to test both methods before leaving, and neither was working. I may retry later if you have not solved it before then.]

Update: I have not been able to use v4l2loopback, but the dirty way works. I've changed the pipeline at lines 338 and 339 of gstCamera.cpp into:

[b]ss << "filesrc location=/home/nvidia/Videos/Tears_400_x265.mp4 ! qtdemux ! h265parse ! omxh265dec ! nvvidconv ! "; 
ss << "video/x-raw, format=NV12, width=1280, height=720 ! appsink name=mysink";[/b]

Thank you for the comment, honey_patouceul!

I tried your solution, but I got a format error.

I've shared my error output below:

===================================================================================================
$ sudo ./detectnet-camera
detectnet-camera
args (1): 0 [./detectnet-camera]

[gstreamer] initialized gstreamer, version 1.8.3.0
[gstreamer] gstreamer decoder pipeline string:
filesrc location=/home/nvidia/Desktop/tensorrt/jetson-inference/build/aarch64/bin/test.mp4 ! qtdemux ! h265parse ! omxh265dec ! nvvidconv ! video/x-raw, format=NV12, width=1280, height=720 ! appsink name=mysink

detectnet-camera: successfully initialized video device
width: 1280
height: 720
depth: 12 (bpp)

detectNet -- loading detection network model from:
-- prototxt networks/ped-100/deploy.prototxt
-- model networks/ped-100/snapshot_iter_70800.caffemodel
-- input_blob 'data'
-- output_cvg 'coverage'
-- output_bbox 'bboxes'
-- mean_pixel 0.000000
-- threshold 0.500000
-- batch_size 2

[GIE] TensorRT version 2.1.2, build 2102
[GIE] attempting to open cache file networks/ped-100/snapshot_iter_70800.caffemodel.2.tensorcache
[GIE] loading network profile from cache... networks/ped-100/snapshot_iter_70800.caffemodel.2.tensorcache
[GIE] platform has FP16 support.
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel loaded
[GIE] CUDA engine context initialized with 3 bindings
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel input binding index: 0
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel input dims (b=2 c=3 h=512 w=1024) size=12582912
[cuda] cudaAllocMapped 12582912 bytes, CPU 0x102a00000 GPU 0x102a00000
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel output 0 coverage binding index: 1
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel output 0 coverage dims (b=2 c=1 h=32 w=64) size=16384
[cuda] cudaAllocMapped 16384 bytes, CPU 0x103600000 GPU 0x103600000
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel output 1 bboxes binding index: 2
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel output 1 bboxes dims (b=2 c=4 h=32 w=64) size=65536
[cuda] cudaAllocMapped 65536 bytes, CPU 0x103800000 GPU 0x103800000
networks/ped-100/snapshot_iter_70800.caffemodel initialized.
[cuda] cudaAllocMapped 16 bytes, CPU 0x103a00000 GPU 0x103a00000
maximum bounding boxes: 8192
[cuda] cudaAllocMapped 131072 bytes, CPU 0x103c00000 GPU 0x103c00000
[cuda] cudaAllocMapped 32768 bytes, CPU 0x103810000 GPU 0x103810000
default X screen 0: 1920 x 1080
[OpenGL] glDisplay display window initialized
[OpenGL] creating 1280x720 texture
loaded image fontmapA.png (256 x 512) 2097152 bytes
[cuda] cudaAllocMapped 2097152 bytes, CPU 0x103e00000 GPU 0x103e00000
[cuda] cudaAllocMapped 8192 bytes, CPU 0x103604000 GPU 0x103604000
[gstreamer] gstreamer transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv0
[gstreamer] gstreamer changed state from NULL to READY ==> omxh265dec-omxh265dec0
[gstreamer] gstreamer changed state from NULL to READY ==> h265parse0
[gstreamer] gstreamer changed state from NULL to READY ==> qtdemux0
[gstreamer] gstreamer changed state from NULL to READY ==> filesrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv0
[gstreamer] gstreamer changed state from READY to PAUSED ==> omxh265dec-omxh265dec0
[gstreamer] gstreamer changed state from READY to PAUSED ==> h265parse0
[gstreamer] gstreamer stream status CREATE ==> sink
[gstreamer] gstreamer changed state from READY to PAUSED ==> qtdemux0
[gstreamer] gstreamer changed state from READY to PAUSED ==> filesrc0
[gstreamer] gstreamer stream status ENTER ==> sink
[gstreamer] gstreamer msg warning ==> qtdemux0
[gstreamer] gstreamer qtdemux0 ERROR GStreamer encountered a general stream error.
[gstreamer] gstreamer Debugging info: qtdemux.c(5520): gst_qtdemux_loop (): /GstPipeline:pipeline0/GstQTDemux:qtdemux0:
streaming stopped, reason not-linked

detectnet-camera: camera open for streaming

detectnet-camera: failed to capture frame
detectnet-camera: failed to convert from NV12 to RGBA
detectNet::Detect( 0x(nil), 1280, 720 ) -> invalid parameters
[cuda] cudaNormalizeRGBA((float4*)imgRGBA, make_float2(0.0f, 255.0f), (float4*)imgRGBA, make_float2(0.0f, 1.0f), camera->GetWidth(), camera->GetHeight())
[cuda] invalid device pointer (error 17) (hex 0x11)
[cuda] /home/nvidia/Desktop/tensorrt/jetson-inference/detectnet-camera/detectnet-camera.cpp:247
[cuda] registered 14745600 byte openGL texture for interop access (1280x720)

====================================================================================================

Sorry, I have no idea what's happening.
In my case, I'm not running it with sudo, but as the nvidia user.
Does your video show a Quicktime container and H265 codec in the Ubuntu file explorer?
Does the pipeline work from gst-launch?

gst-launch-1.0 filesrc location=/home/nvidia/Desktop/tensorrt/jetson-inference/build/aarch64/bin/test.mp4 ! qtdemux ! h265parse ! omxh265dec ! videoconvert ! ximagesink
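
If that gst-launch command fails with the same not-linked error, one possible cause (just a guess, since the error comes from qtdemux) is that the video track is actually H264 rather than H265, so h265parse never links to the demuxer. In that case this variant may work:

gst-launch-1.0 filesrc location=/home/nvidia/Desktop/tensorrt/jetson-inference/build/aarch64/bin/test.mp4 ! qtdemux ! h264parse ! omxh264dec ! videoconvert ! ximagesink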