Using recorded video data for object detection

Hi - I was wondering how to load recorded video data instead of a webcam in the detectnet-camera code?

Best,
Zack

Hi,

Please check our TensorRT sample for detectNet first.

In this sample, the camera is opened via GStreamer.
You can change the input to a video file for your use case:
https://github.com/dusty-nv/jetson-inference/blob/master/util/camera/gstCamera.cpp#L321

Thanks.

Thank you. What should I change bool gstCamera::buildLaunchStr() to?

Hi,

Please check the "Video Decode Examples Using gst-launch-1.0" section in our GStreamer document:
http://developer2.download.nvidia.com/embedded/L4T/r28_Release_v1.0/Docs/Jetson_TX2_Accelerated_GStreamer_User_Guide.pdf
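For example, a standalone decode pipeline along the lines of that guide looks like the following (the file path is a placeholder, and the parser/decoder/sink elements must match your codec and setup):

```shell
# Hypothetical example: decode an H.264 .mp4 to the display to confirm the
# decode chain works before embedding it in code. Swap h264parse/omxh264dec
# for the parser/decoder matching your codec, and nvoverlaysink for another
# sink (e.g. nveglglessink) if preferred.
gst-launch-1.0 filesrc location=/home/nvidia/sample.mp4 ! \
    qtdemux ! h264parse ! omxh264dec ! nvoverlaysink -e
```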

Thanks.

Hi,

I want to play a video instead of using camera input for the jetson-inference detectnet algorithm.
The video plays for a second and then I get a segmentation fault.
These are the changes made in gstCamera.cpp under the buildLaunchStr function:

ss << "filesrc location=/home/nvidia/pedestrian.mp4 ! qtdemux name=demux demux.video_0 ! queue ! mpeg4videoparse ! omxmpeg4videodec ! autovideosink name=mysink";

These are the logs while running the algorithm:

nvidia@tegra-ubuntu:~/jetson-inference/build/aarch64/bin$ ./detectnet-camera pednet

can0 at index 6
detectnet-camera
args (2):

[gstreamer] initialized gstreamer, version 1.8.3.0
[gstreamer] gstreamer decoder pipeline string:
filesrc location=/home/nvidia/pedestrian.mp4 ! qtdemux name=demux demux.video_0 ! queue ! mpeg4videoparse ! omxmpeg4videodec ! autovideosink name=mysink

(detectnet-camera:7762): GLib-GObject-WARNING **: invalid cast from 'GstAutoVideoSink' to 'GstAppSink'

** (detectnet-camera:7762): CRITICAL **: gst_app_sink_set_callbacks: assertion 'GST_IS_APP_SINK (appsink)' failed

detectnet-camera: successfully initialized video device
width: 1280
height: 720
depth: 24 (bpp)

detectNet -- loading detection network model from:
-- prototxt networks/ped-100/deploy.prototxt
-- model networks/ped-100/snapshot_iter_70800.caffemodel
-- input_blob 'data'
-- output_cvg 'coverage'
-- output_bbox 'bboxes'
-- mean_pixel 0.000000
-- threshold 0.500000
-- batch_size 2

[GIE] TensorRT version 3.0, build 3000
[GIE] attempting to open cache file networks/ped-100/snapshot_iter_70800.caffemodel.2.tensorcache
[GIE] loading network profile from cache... networks/ped-100/snapshot_iter_70800.caffemodel.2.tensorcache
[GIE] platform has FP16 support.
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel loaded
[GIE] CUDA engine context initialized with 3 bindings
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel input binding index: 0
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel input dims (b=2 c=3 h=512 w=1024) size=12582912
[cuda] cudaAllocMapped 12582912 bytes, CPU 0x102c00000 GPU 0x102c00000
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel output 0 coverage binding index: 1
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel output 0 coverage dims (b=2 c=1 h=32 w=64) size=16384
[cuda] cudaAllocMapped 16384 bytes, CPU 0x103800000 GPU 0x103800000
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel output 1 bboxes binding index: 2
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel output 1 bboxes dims (b=2 c=4 h=32 w=64) size=65536
[cuda] cudaAllocMapped 65536 bytes, CPU 0x103a00000 GPU 0x103a00000
networks/ped-100/snapshot_iter_70800.caffemodel initialized.
[cuda] cudaAllocMapped 16 bytes, CPU 0x102a00200 GPU 0x102a00200
maximum bounding boxes: 8192
[cuda] cudaAllocMapped 131072 bytes, CPU 0x103c00000 GPU 0x103c00000
[cuda] cudaAllocMapped 32768 bytes, CPU 0x103a10000 GPU 0x103a10000
default X screen 0: 1440 x 900
[OpenGL] glDisplay display window initialized
[OpenGL] creating 1280x720 texture
loaded image fontmapA.png (256 x 512) 2097152 bytes
[cuda] cudaAllocMapped 2097152 bytes, CPU 0x103e00000 GPU 0x103e00000
[cuda] cudaAllocMapped 8192 bytes, CPU 0x103804000 GPU 0x103804000
[gstreamer] gstreamer transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> omxmpeg4videodec-omxmpeg4videodec0
[gstreamer] gstreamer changed state from NULL to READY ==> mpeg4vparse0
[gstreamer] gstreamer changed state from NULL to READY ==> queue0
[gstreamer] gstreamer changed state from NULL to READY ==> demux
[gstreamer] gstreamer changed state from NULL to READY ==> filesrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> omxmpeg4videodec-omxmpeg4videodec0
[gstreamer] gstreamer changed state from READY to PAUSED ==> mpeg4vparse0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> queue0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer stream status CREATE ==> sink
[gstreamer] gstreamer changed state from READY to PAUSED ==> demux
[gstreamer] gstreamer changed state from READY to PAUSED ==> filesrc0
[gstreamer] gstreamer stream status ENTER ==> sink
NvMMLiteOpen : Block : BlockType = 260
TVMR: NvMMLiteTVMRDecBlockOpen: 7907: NvMMLiteBlockOpen
NvMMLiteBlockCreate : Block : BlockType = 260
TVMR: cbBeginSequence: 1223: BeginSequence 1280x720, bVPR = 0
TVMR: LowCorner Frequency = 100000
TVMR: cbBeginSequence: 1622: DecodeBuffers = 3, pnvsi->eCodec = 2, codec = 6
TVMR: cbBeginSequence: 1693: Display Resolution : (1280x720)
TVMR: cbBeginSequence: 1694: Display Aspect Ratio : (1280x720)
TVMR: cbBeginSequence: 1762: ColorFormat : 5
TVMR: cbBeginSequence:1776 ColorSpace = NvColorSpace_YCbCr601
TVMR: cbBeginSequence: 1904: SurfaceLayout = 3
TVMR: cbBeginSequence: 2005: NumOfSurfaces = 10, InteraceStream = 0, InterlaceEnabled = 0, bSecure = 0, MVC = 0 Semiplanar = 1, bReinit = 1, BitDepthForSurface = 8 LumaBitDepth = 8, ChromaBitDepth = 8, ChromaFormat = 5
TVMR: cbBeginSequence: 2007: BeginSequence ColorPrimaries = 0, TransferCharacteristics = 0, MatrixCoefficients = 0
Allocating new output: 1280x720 (x 10), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3464: Send OMX_EventPortSettingsChanged : nFrameWidth = 1280, nFrameHeight = 720
[gstreamer] gstreamer msg stream-start ==> pipeline0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer msg duration-changed ==> mpeg4vparse0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer mysink-actual-sink-nvoverlay missing gst_tag_list_to_string()

Segmentation fault (core dumped)

Am I missing something?

Kindly help,
Thanks,
Pratosha

Hi,

We are not sure if this issue is caused by an incorrect GStreamer pipeline.
Could you verify your pipeline on the command line first?
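For instance, the pipeline string from the log above can be run standalone like this (-v prints caps as they negotiate, which often reveals where a link fails):

```shell
# Run the exact pipeline from the application outside of it. Note that a
# clean gst-launch run here does not rule out the sink as the cause of the
# crash: autovideosink can display fine on the command line even though
# gstCamera itself needs an appsink (see the invalid GstAppSink cast in the
# log above).
gst-launch-1.0 -v filesrc location=/home/nvidia/pedestrian.mp4 ! \
    qtdemux name=demux demux.video_0 ! queue ! mpeg4videoparse ! \
    omxmpeg4videodec ! autovideosink -e
```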

Thanks.

Hi, has someone figured out a solid solution to this problem? I am also struggling with it...
Thanks in advance.