Unable to run jetson-inference::detectnet-camera

Hi Folks,

I am trying out dusty-nv's jetson-inference modules. I am able to build them, but the first run fails as shown below.

I have a Sony IMX274 camera module from Leopard Imaging and a Leopard TX1 carrier board. I am running R28.1/JP3.1.

Please help.
Thanks

nvidia@tegra-ubuntu:~/work/Internet/jetson-inference/jetson-inference/build/aarch64/bin$ ./segnet-camera 
segnet-camera
  args (1):  0 [./segnet-camera]  

[gstreamer] initialized gstreamer, version 1.8.3.0
[gstreamer] gstreamer decoder pipeline string:
nvcamerasrc sensor-id=0 fpsRange="30.0 30.0" ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12 ! nvvidconv flip-method=0 ! video/x-raw ! appsink name=mysink

segnet-camera:  successfully initialized video device
    width:  1280
   height:  720
    depth:  12 (bpp)


segNet -- loading segmentation network model from:
       -- prototxt:   networks/FCN-Alexnet-Cityscapes-HD/deploy.prototxt
       -- model:      networks/FCN-Alexnet-Cityscapes-HD/snapshot_iter_367568.caffemodel
       -- labels:     networks/FCN-Alexnet-Cityscapes-HD/cityscapes-labels.txt
       -- colors:     networks/FCN-Alexnet-Cityscapes-HD/cityscapes-deploy-colors.txt
       -- input_blob  'data'
       -- output_blob 'score_fr_21classes'
       -- batch_size  2

[GIE]  TensorRT version 2.1, build 2102
[GIE]  attempting to open cache file networks/FCN-Alexnet-Cityscapes-HD/snapshot_iter_367568.caffemodel.2.tensorcache
[GIE]  cache file not found, profiling network model
[GIE]  platform has FP16 support.
[GIE]  loading networks/FCN-Alexnet-Cityscapes-HD/deploy.prototxt networks/FCN-Alexnet-Cityscapes-HD/snapshot_iter_367568.caffemodel
[GIE]  retrieved output tensor 'score_fr_21classes'
[GIE]  configuring CUDA engine
[GIE]  building CUDA engine
Killed
nvidia@tegra-ubuntu:~/work/Internet/jetson-inference/jetson-inference/build/aarch64/bin$ ./detectnet-camera 
detectnet-camera
  args (1):  0 [./detectnet-camera]  

[gstreamer] initialized gstreamer, version 1.8.3.0
[gstreamer] gstreamer decoder pipeline string:
nvcamerasrc sensor-id=0 fpsRange="30.0 30.0" ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12 ! nvvidconv flip-method=0 ! video/x-raw ! appsink name=mysink

detectnet-camera:  successfully initialized video device
    width:  1280
   height:  720
    depth:  12 (bpp)


detectNet -- loading detection network model from:
          -- prototxt    networks/ped-100/deploy.prototxt
          -- model       networks/ped-100/snapshot_iter_70800.caffemodel
          -- input_blob  'data'
          -- output_cvg  'coverage'
          -- output_bbox 'bboxes'
          -- mean_pixel  0.000000
          -- threshold   0.500000
          -- batch_size  2

[GIE]  TensorRT version 2.1, build 2102
[GIE]  attempting to open cache file networks/ped-100/snapshot_iter_70800.caffemodel.2.tensorcache
[GIE]  loading network profile from cache... networks/ped-100/snapshot_iter_70800.caffemodel.2.tensorcache
[GIE]  platform has FP16 support.
[GIE]  networks/ped-100/snapshot_iter_70800.caffemodel loaded
[GIE]  CUDA engine context initialized with 3 bindings
[GIE]  networks/ped-100/snapshot_iter_70800.caffemodel input  binding index:  0
[GIE]  networks/ped-100/snapshot_iter_70800.caffemodel input  dims (b=2 c=3 h=512 w=1024) size=12582912
[cuda]  cudaAllocMapped 12582912 bytes, CPU 0x100ce0000 GPU 0x100ce0000
[GIE]  networks/ped-100/snapshot_iter_70800.caffemodel output 0 coverage  binding index:  1
[GIE]  networks/ped-100/snapshot_iter_70800.caffemodel output 0 coverage  dims (b=2 c=1 h=32 w=64) size=16384
[cuda]  cudaAllocMapped 16384 bytes, CPU 0x1018e0000 GPU 0x1018e0000
[GIE]  networks/ped-100/snapshot_iter_70800.caffemodel output 1 bboxes  binding index:  2
[GIE]  networks/ped-100/snapshot_iter_70800.caffemodel output 1 bboxes  dims (b=2 c=4 h=32 w=64) size=65536
[cuda]  cudaAllocMapped 65536 bytes, CPU 0x1019e0000 GPU 0x1019e0000
networks/ped-100/snapshot_iter_70800.caffemodel initialized.
[cuda]  cudaAllocMapped 16 bytes, CPU 0x101ae0000 GPU 0x101ae0000
maximum bounding boxes:  8192
[cuda]  cudaAllocMapped 131072 bytes, CPU 0x101be0000 GPU 0x101be0000
[cuda]  cudaAllocMapped 32768 bytes, CPU 0x1019f0000 GPU 0x1019f0000
default X screen 0:   3840 x 2160
[OpenGL]  glDisplay display window initialized
[OpenGL]   creating 1280x720 texture
loaded image  fontmapA.png  (256 x 512)  2097152 bytes
[cuda]  cudaAllocMapped 2097152 bytes, CPU 0x101ce0000 GPU 0x101ce0000
[cuda]  cudaAllocMapped 8192 bytes, CPU 0x1018e4000 GPU 0x1018e4000
[gstreamer] gstreamer transitioning pipeline to GST_STATE_PLAYING

Available Sensor modes : 
3864 x 2174 FR=60.000000 CF=0x1009208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
1932 x 1094 FR=120.000000 CF=0x1009208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
1288 x 734 FR=120.000000 CF=0x1009208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
1288 x 546 FR=240.000000 CF=0x1009208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> nvcamerasrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvcamerasrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer msg stream-start ==> pipeline0
[gstreamer] gstreamer msg new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvcamerasrc0

NvCameraSrc: Trying To Set Default Camera Resolution. Selected sensorModeIndex = 0 WxH = 3864x2174 FrameRate = 60.000000 ...


detectnet-camera:  camera open for streaming
Killed

Hi,

Please check whether your camera can be opened with the GStreamer nvcamerasrc element.

In jetson-inference, here is where the camera pipeline is opened. Please check whether an alternative pipeline is required for your use case:
https://github.com/dusty-nv/jetson-inference/blob/master/util/camera/gstCamera.cpp#L330
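For a quick standalone check, something along these lines should work (a sketch based on the pipeline string printed in your log, with fakesink substituted for appsink so it can be launched directly from the shell):

```shell
# Replicate the jetson-inference capture pipeline from the log, but
# terminate it in fakesink so gst-launch-1.0 can run it standalone.
gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30.0 30.0" ! \
  'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12' ! \
  nvvidconv flip-method=0 ! 'video/x-raw' ! fakesink
```

If this pipeline also fails, the problem is in the capture path rather than in jetson-inference itself.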

Thanks.

Hi AastaLLL

I am able to run the camera with the following command lines:

  1. gst-launch-1.0 nvcamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),width=1920,height=1080' ! nvoverlaysink overlay-x=400 overlay-y=600 overlay-w=320 overlay-h=190 overlay=1 nvcamerasrc sensor-id=1 ! 'video/x-raw(memory:NVMM),width=1920,height=1080' ! nvoverlaysink overlay-x=400 overlay-y=300 overlay-w=320 overlay-h=190 overlay=2

  2. nvgstcapture-1.0

I do not think this is a camera interface issue. Please help.

Thanks
pankaj

Hi,

I would try running the exact GStreamer pipeline that is used in the code. You could replace appsink with nvoverlaysink or fakesink for a standalone test.

Based on the logs you posted:

NvCameraSrc: Trying To Set Default Camera Resolution. Selected sensorModeIndex = 0 WxH = 3864x2174 FrameRate = 60.000000 ...

Try modifying the pipeline so that it requests a lower resolution than 3864x2174. I have had trouble before with very high-resolution streams from an RTSP camera.
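For example (a sketch; the caps filter below explicitly negotiates 1280x720 and lets nvoverlaysink render the result so you can verify the stream visually):

```shell
# Request 1280x720 explicitly in the caps so the negotiated output is
# small; nvoverlaysink draws directly to the display for a visual check.
gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30.0 30.0" ! \
  'video/x-raw(memory:NVMM), width=1280, height=720, format=NV12' ! \
  nvoverlaysink
```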

Hi,

Could you check whether your camera opens correctly with the ./gst-camera application in jetson-inference?
https://github.com/dusty-nv/jetson-inference/blob/master/CMakeLists.txt#L102

This experiment will narrow down whether the error comes from the camera pipeline or from the libraries.
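For reference, the test app is built alongside the other samples, so it can be run from the same directory (assuming the default build layout shown in your log):

```shell
# Run the bundled camera viewer; if this also gets killed, the problem
# is in the capture pipeline rather than in the TensorRT inference path.
cd ~/work/Internet/jetson-inference/jetson-inference/build/aarch64/bin
./gst-camera
```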
Thanks.