Jetson Nano Object Detection C/C++ Example

Where can I find an object detection example in C/C++ for the Jetson Nano? I did one of the online courses on deep learning, but the examples were done in python. Any similar courses available in C/C++?

Hi xplanescientist, see the object detection portion of Hello AI World: https://github.com/dusty-nv/jetson-inference/blob/master/README.md#hello-ai-world

Thanks @dusty_nv, nice centralized object detection website. That’s what I’m talking about.

I followed the object detection portion of the Hello AI World. The examples work, both the image-recognition and live camera-recognition. It runs very fast, and the big heat sink really gets hot.

Eventually I’d like to couple the live camera program with another robotics project. So is it possible to just compile and run the “imagenet-camera.cpp” example program (link below) on its own, without the elaborate CMake setup?

https://github.com/dusty-nv/jetson-inference/blob/master/docs/imagenet-camera-2.md

I tried compiling from the command line as follows, but the include files were not found. I just want to keep it simple.

gcc -Wall imagenet-camera.cpp -o imagenet-camera.exe

See this section of the tutorial for creating your own simple CMakeLists.txt for building an external project:

https://github.com/dusty-nv/jetson-inference/blob/master/docs/imagenet-example-2.md#creating-cmakeliststxt

Note that I removed the dependency on Qt4, but forgot to update this part of the tutorial for that, so you can try removing the Qt4 stuff from that step if desired.

Hello, the “my-recognition.cpp” example worked. Straightforward and streamlined with good explanations.

However, I’m having big trouble with the live camera recognition program (“imagenet-camera.cpp” in the link below).
No luck after many hours of troubleshooting.

https://github.com/dusty-nv/jetson-inference/blob/master/docs/imagenet-camera-2.md

Here is the CMakeLists.txt file I cobbled together:

# require CMake 2.8 or greater
cmake_minimum_required(VERSION 2.8)

include_directories(${PROJECT_INCLUDE_DIR} ${PROJECT_INCLUDE_DIR}/jetson-inference ${PROJECT_INCLUDE_DIR}/jetson-utils)
include_directories(/usr/include/gstreamer-1.0 /usr/lib/aarch64-linux-gnu/gstreamer-1.0/include /usr/include/glib-2.0 /usr/include/libxml2 /usr/lib/aarch64-linux-gnu/glib-2.0/include/ /usr/local/include/jetson-utils)

file(GLOB imagenetCameraSources *.cpp)
file(GLOB imagenetCameraIncludes *.h )

find_package(CUDA)

cuda_add_executable(my-camera ${imagenetCameraSources})

target_link_libraries(my-camera jetson-inference)

install(TARGETS my-camera DESTINATION bin)

The “cmake .” step seemed to work:

dlinano@jetson-nano:~/my-camera$ cmake .
-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Found CUDA: /usr/local/cuda (found version "10.0") 
-- Configuring done
-- Generating done
-- Build files have been written to: /home/dlinano/my-camera

However, the final “make” step failed, and I need help here. Here is the “make” output:

dlinano@jetson-nano:~/my-camera$ make
Scanning dependencies of target my-camera
[ 50%] Building CXX object CMakeFiles/my-camera.dir/my-camera.cpp.o
/home/dlinano/my-camera/my-camera.cpp: In function ‘int usage()’:
/home/dlinano/my-camera/my-camera.cpp:76:17: error: ‘detectNet’ has not been declared
  printf("%s\n", detectNet::Usage());
                 ^~~~~~~~~
/home/dlinano/my-camera/my-camera.cpp: In function ‘int main(int, char**)’:
/home/dlinano/my-camera/my-camera.cpp:121:2: error: ‘detectNet’ was not declared in this scope
  detectNet* net = detectNet::Create(argc, argv);
  ^~~~~~~~~
/home/dlinano/my-camera/my-camera.cpp:121:13: error: ‘net’ was not declared in this scope
  detectNet* net = detectNet::Create(argc, argv);
             ^~~
/home/dlinano/my-camera/my-camera.cpp:121:13: note: suggested alternative: ‘getw’
  detectNet* net = detectNet::Create(argc, argv);
             ^~~
             getw
/home/dlinano/my-camera/my-camera.cpp:121:19: error: ‘detectNet’ is not a class, namespace, or enumeration
  detectNet* net = detectNet::Create(argc, argv);
                   ^~~~~~~~~
/home/dlinano/my-camera/my-camera.cpp:165:3: error: ‘detectNet’ is not a class, namespace, or enumeration
   detectNet::Detection* detections = NULL;
   ^~~~~~~~~
/home/dlinano/my-camera/my-camera.cpp:165:25: error: ‘detections’ was not declared in this scope
   detectNet::Detection* detections = NULL;
                         ^~~~~~~~~~
/home/dlinano/my-camera/my-camera.cpp:165:25: note: suggested alternative: ‘sigaction’
   detectNet::Detection* detections = NULL;
                         ^~~~~~~~~~
                         sigaction
In file included from /usr/local/include/jetson-utils/glTexture.h:27:0,
                 from /usr/local/include/jetson-utils/glDisplay.h:28,
                 from /home/dlinano/my-camera/my-camera.cpp:42:
/home/dlinano/my-camera/my-camera.cpp:208:14: error: type ‘<type error>’ argument given to ‘delete’, expected pointer
  SAFE_DELETE(net);
              ^
CMakeFiles/my-camera.dir/build.make:62: recipe for target 'CMakeFiles/my-camera.dir/my-camera.cpp.o' failed
make[2]: *** [CMakeFiles/my-camera.dir/my-camera.cpp.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/my-camera.dir/all' failed
make[1]: *** [CMakeFiles/my-camera.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2
dlinano@jetson-nano:~/my-camera$

Can you post the contents of my-camera.cpp and the results of make VERBOSE=1?

It is not finding the definition of the detectNet class. Is there an #include “detectNet.h” or #include <jetson-inference/detectNet.h> in your source?

The new CMakeLists also does not have these:

find_package(jetson-utils)
find_package(jetson-inference)
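
For reference, a complete CMakeLists.txt along those lines might look something like this. It is just a sketch built from the file you posted plus the missing find_package calls; the project name, source file name, and install prefix (/usr/local) are assumptions:

```cmake
# require CMake 2.8 or greater
cmake_minimum_required(VERSION 2.8)
project(my-camera)

# import the installed jetson-utils and jetson-inference packages,
# which set up their include directories and library targets
find_package(jetson-utils)
find_package(jetson-inference)

# CUDA is required for cuda_add_executable
find_package(CUDA)

# pick up the installed shared libraries
link_directories(/usr/local/lib)

cuda_add_executable(my-camera my-camera.cpp)

# jetson-inference should pull in jetson-utils transitively,
# but linking both explicitly does no harm
target_link_libraries(my-camera jetson-inference jetson-utils)

install(TARGETS my-camera DESTINATION bin)
```

Then re-run cmake . followed by make in the project directory.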

So I put in your suggestions, and it seems to compile. The relative paths are required in the #include statements; otherwise it does not work. I executed the program, and although it takes several minutes to show the live camera feed, it appears to run. However, it does not identify any objects.

Here is the “my-camera.cpp” source code:

/*
 * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */


#include "jetson-utils/gstCamera.h"
#include "jetson-utils/glDisplay.h"
#include "cudaFont.h"

#include "jetson-inference/imageNet.h"
#include "jetson-utils/commandLine.h"

#include <signal.h>

#include <jetson-inference/detectNet.h>




bool signal_recieved = false;

void sig_handler(int signo)
{
	if( signo == SIGINT )
	{
		printf("received SIGINT\n");
		signal_recieved = true;
	}
}

int usage()
{
	printf("usage: detectnet-camera [-h] [--network NETWORK] [--camera CAMERA]\n");
	printf("                        [--width WIDTH] [--height HEIGHT]\n\n");
	printf("Locate objects in a live camera stream using an object detection DNN.\n\n");
	printf("optional arguments:\n");
	printf("  --help           show this help message and exit\n");
	printf("  --camera CAMERA  index of the MIPI CSI camera to use (NULL for CSI camera 0),\n");
	printf("                   or for V4L2 cameras, the /dev/video node to use (/dev/video0).\n");
	printf("                   by default, MIPI CSI camera 0 will be used.\n");
	printf("  --width WIDTH    desired width of camera stream (default is 1280 pixels)\n");
	printf("  --height HEIGHT  desired height of camera stream (default is 720 pixels)\n\n");
	printf("%s\n", detectNet::Usage());

	return 0;
}

int main( int argc, char** argv )
{
	/*
	 * parse command line
	 */
	commandLine cmdLine(argc, argv);

	if( cmdLine.GetFlag("help") )
		return usage();


	/*
	 * attach signal handler
	 */
	if( signal(SIGINT, sig_handler) == SIG_ERR )
		printf("\ncan't catch SIGINT\n");


	/*
	 * create the camera device
	 */
	gstCamera* camera = gstCamera::Create(cmdLine.GetInt("width", gstCamera::DefaultWidth),
								   cmdLine.GetInt("height", gstCamera::DefaultHeight),
								   cmdLine.GetString("camera"));

	if( !camera )
	{
		printf("\ndetectnet-camera:  failed to initialize camera device\n");
		return 0;
	}
	
	printf("\ndetectnet-camera:  successfully initialized camera device\n");
	printf("    width:  %u\n", camera->GetWidth());
	printf("   height:  %u\n", camera->GetHeight());
	printf("    depth:  %u (bpp)\n\n", camera->GetPixelDepth());
	

	/*
	 * create detection network
	 */
	detectNet* net = detectNet::Create(argc, argv);
	
	if( !net )
	{
		printf("detectnet-camera:   failed to load detectNet model\n");
		return 0;
	}


	/*
	 * create openGL window
	 */
	glDisplay* display = glDisplay::Create();

	if( !display ) 
		printf("detectnet-camera:  failed to create openGL display\n");


	/*
	 * start streaming
	 */
	if( !camera->Open() )
	{
		printf("detectnet-camera:  failed to open camera for streaming\n");
		return 0;
	}
	
	printf("detectnet-camera:  camera open for streaming\n");
	
	
	/*
	 * processing loop
	 */
	float confidence = 0.0f;
	
	while( !signal_recieved )
	{
		// capture RGBA image
		float* imgRGBA = NULL;
		
		if( !camera->CaptureRGBA(&imgRGBA, 1000) )
			printf("detectnet-camera:  failed to capture RGBA image from camera\n");

		// detect objects in the frame
		detectNet::Detection* detections = NULL;
	
		const int numDetections = net->Detect(imgRGBA, camera->GetWidth(), camera->GetHeight(), &detections);
		
		if( numDetections > 0 )
		{
			printf("%i objects detected\n", numDetections);
		
			for( int n=0; n < numDetections; n++ )
			{
				printf("detected obj %i  class #%u (%s)  confidence=%f\n", n, detections[n].ClassID, net->GetClassDesc(detections[n].ClassID), detections[n].Confidence);
				printf("bounding box %i  (%f, %f)  (%f, %f)  w=%f  h=%f\n", n, detections[n].Left, detections[n].Top, detections[n].Right, detections[n].Bottom, detections[n].Width(), detections[n].Height()); 
			}
		}	

		// update display
		if( display != NULL )
		{
			// render the image
			display->RenderOnce(imgRGBA, camera->GetWidth(), camera->GetHeight());

			// update the status bar
			char str[256];
			sprintf(str, "TensorRT %i.%i.%i | %s | Network %.0f FPS", NV_TENSORRT_MAJOR, NV_TENSORRT_MINOR, NV_TENSORRT_PATCH, precisionTypeToStr(net->GetPrecision()), 1000.0f / net->GetNetworkTime());
			display->SetTitle(str);

			// check if the user quit
			if( display->IsClosed() )
				signal_recieved = true;
		}

		// print out timing info
		net->PrintProfilerTimes();
	}
	

	/*
	 * destroy resources
	 */
	printf("detectnet-camera:  shutting down...\n");
	
	SAFE_DELETE(camera);
	SAFE_DELETE(display);
	SAFE_DELETE(net);

	printf("detectnet-camera:  shutdown complete.\n");
	return 0;
}

Here are the cmake and make VERBOSE=1 steps:

dlinano@jetson-nano:~/my-camera$ cmake .
-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Found CUDA: /usr/local/cuda (found version "10.0") 
-- Configuring done
-- Generating done
-- Build files have been written to: /home/dlinano/my-camera

dlinano@jetson-nano:~/my-camera$ make VERBOSE=1
/usr/bin/cmake -H/home/dlinano/my-camera -B/home/dlinano/my-camera --check-build-system CMakeFiles/Makefile.cmake 0
/usr/bin/cmake -E cmake_progress_start /home/dlinano/my-camera/CMakeFiles /home/dlinano/my-camera/CMakeFiles/progress.marks
make -f CMakeFiles/Makefile2 all
make[1]: Entering directory '/home/dlinano/my-camera'
make -f CMakeFiles/my-camera.dir/build.make CMakeFiles/my-camera.dir/depend
make[2]: Entering directory '/home/dlinano/my-camera'
cd /home/dlinano/my-camera && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /home/dlinano/my-camera /home/dlinano/my-camera /home/dlinano/my-camera /home/dlinano/my-camera /home/dlinano/my-camera/CMakeFiles/my-camera.dir/DependInfo.cmake --color=
Dependee "/home/dlinano/my-camera/CMakeFiles/my-camera.dir/DependInfo.cmake" is newer than depender "/home/dlinano/my-camera/CMakeFiles/my-camera.dir/depend.internal".
Dependee "/home/dlinano/my-camera/CMakeFiles/CMakeDirectoryInformation.cmake" is newer than depender "/home/dlinano/my-camera/CMakeFiles/my-camera.dir/depend.internal".
Scanning dependencies of target my-camera
make[2]: Leaving directory '/home/dlinano/my-camera'
make -f CMakeFiles/my-camera.dir/build.make CMakeFiles/my-camera.dir/build
make[2]: Entering directory '/home/dlinano/my-camera'
[ 50%] Building CXX object CMakeFiles/my-camera.dir/my-camera.cpp.o
/usr/bin/c++   -I/jetson-inference -I/jetson-utils -I/usr/include/gstreamer-1.0 -I/usr/lib/aarch64-linux-gnu/gstreamer-1.0/include -I/usr/include/glib-2.0 -I/usr/include/libxml2 -I/usr/lib/aarch64-linux-gnu/glib-2.0/include -I/usr/local/include/jetson-utils -I/usr/local/cuda/include   -o CMakeFiles/my-camera.dir/my-camera.cpp.o -c /home/dlinano/my-camera/my-camera.cpp
[100%] Linking CXX executable my-camera
/usr/bin/cmake -E cmake_link_script CMakeFiles/my-camera.dir/link.txt --verbose=1
/usr/bin/c++    -rdynamic CMakeFiles/my-camera.dir/my-camera.cpp.o  -o my-camera -Wl,-rpath,/usr/local/lib: /usr/local/cuda/lib64/libcudart_static.a -lpthread -ldl -lrt /usr/local/lib/libjetson-inference.so /usr/local/lib/libjetson-utils.so /usr/local/cuda/lib64/libcudart_static.a -lpthread -ldl -lrt -lGL -lGLEW -lgstreamer-1.0 -lgstapp-1.0 -lnvinfer -lnvinfer_plugin -lnvcaffe_parser -lnvonnxparser -lopencv_core -lopencv_calib3d 
make[2]: Leaving directory '/home/dlinano/my-camera'
[100%] Built target my-camera
make[1]: Leaving directory '/home/dlinano/my-camera'
/usr/bin/cmake -E cmake_progress_start /home/dlinano/my-camera/CMakeFiles 0
dlinano@jetson-nano:~/my-camera$

This is what happens when I run it:

dlinano@jetson-nano:~/my-camera$ ./my-camera --width=640 --height=480 
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS, camera 0
[gstreamer] gstCamera pipeline string:
nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_NVARGUS, camera 0

detectnet-camera:  successfully initialized camera device
    width:  640
   height:  480
    depth:  12 (bpp)


detectNet -- loading detection network model from:
          -- prototxt     networks/ped-100/deploy.prototxt
          -- model        networks/ped-100/snapshot_iter_70800.caffemodel
          -- input_blob   'data'
          -- output_cvg   'coverage'
          -- output_bbox  'bboxes'
          -- mean_pixel   0.000000
          -- mean_binary  NULL
          -- class_labels networks/ped-100/class_labels.txt
          -- threshold    0.500000
          -- batch_size   1

[TRT]   TensorRT version 5.0.6
[TRT]   loading NVIDIA plugins...
[TRT]   completed loading NVIDIA plugins.
[TRT]   detected model format - caffe  (extension '.caffemodel')
[TRT]   desired precision specified for GPU: FASTEST
[TRT]   requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]   native precisions detected for GPU:  FP32, FP16
[TRT]   selecting fastest native precision for GPU:  FP16
[TRT]   attempting to open engine cache file /usr/local/bin/networks/ped-100/snapshot_iter_70800.caffemodel.1.1.GPU.FP16.engine
[TRT]   cache file not found, profiling network model on device GPU
[TRT]   device GPU, loading /usr/local/bin/networks/ped-100/deploy.prototxt /usr/local/bin/networks/ped-100/snapshot_iter_70800.caffemodel
[TRT]   retrieved Output tensor "coverage":  1x32x64
[TRT]   retrieved Output tensor "bboxes":  4x32x64
[TRT]   retrieved Input tensor "data":  3x512x1024
[TRT]   device GPU, configuring CUDA engine
[TRT]   device GPU, building FP16:  ON
[TRT]   device GPU, building INT8:  OFF
[TRT]   device GPU, building CUDA engine (this may take a few minutes the first time a network is loaded)
[TRT]   device GPU, completed building CUDA engine
[TRT]   network profiling complete, writing engine cache to /usr/local/bin/networks/ped-100/snapshot_iter_70800.caffemodel.1.1.GPU.FP16.engine
[TRT]   device GPU, completed writing engine cache to /usr/local/bin/networks/ped-100/snapshot_iter_70800.caffemodel.1.1.GPU.FP16.engine
[TRT]   device GPU, /usr/local/bin/networks/ped-100/snapshot_iter_70800.caffemodel loaded
[TRT]   device GPU, CUDA engine context initialized with 3 bindings
[TRT]   binding -- index   0
               -- name    'data'
               -- type    FP32
               -- in/out  INPUT
               -- # dims  3
               -- dim #0  3 (CHANNEL)
               -- dim #1  512 (SPATIAL)
               -- dim #2  1024 (SPATIAL)
[TRT]   binding -- index   1
               -- name    'coverage'
               -- type    FP32
               -- in/out  OUTPUT
               -- # dims  3
               -- dim #0  1 (CHANNEL)
               -- dim #1  32 (SPATIAL)
               -- dim #2  64 (SPATIAL)
[TRT]   binding -- index   2
               -- name    'bboxes'
               -- type    FP32
               -- in/out  OUTPUT
               -- # dims  3
               -- dim #0  4 (CHANNEL)
               -- dim #1  32 (SPATIAL)
               -- dim #2  64 (SPATIAL)
[TRT]   binding to input 0 data  binding index:  0
[TRT]   binding to input 0 data  dims (b=1 c=3 h=512 w=1024) size=6291456
[TRT]   binding to output 0 coverage  binding index:  1
[TRT]   binding to output 0 coverage  dims (b=1 c=1 h=32 w=64) size=8192
[TRT]   binding to output 1 bboxes  binding index:  2
[TRT]   binding to output 1 bboxes  dims (b=1 c=4 h=32 w=64) size=32768
device GPU, /usr/local/bin/networks/ped-100/snapshot_iter_70800.caffemodel initialized.
detectNet -- number object classes:   1
detectNet -- maximum bounding boxes:  2048
detectNet -- loaded 1 class info entries
detectNet -- number of object classes:  1
[OpenGL] glDisplay -- X screen 0 resolution:  1920x1080
[OpenGL] glDisplay -- display device initialized
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer msg new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvarguscamerasrc0
[gstreamer] gstreamer msg stream-start ==> pipeline0
detectnet-camera:  camera open for streaming
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3280 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3280 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 4 
   Output Stream W = 1280 H = 720 
   seconds to Run    = 0 
   Frame Rate = 120.000005 
GST_ARGUS: PowerService: requested_clock_Hz=2016000
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[gstreamer] gstCamera onPreroll
[gstreamer] gstCamera -- allocated 16 ringbuffers, 460800 bytes each
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer msg async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
[gstreamer] gstCamera -- allocated 16 RGBA ringbuffers
[OpenGL]   creating 640x480 texture
[cuda]   registered 4915200 byte openGL texture for interop access (640x480)

[TRT]   ----------------------------------------------
[TRT]   Timing Report /usr/local/bin/networks/ped-100/snapshot_iter_70800.caffemodel
[TRT]   ----------------------------------------------
[TRT]   Pre-Process   CPU  0.17928ms  CUDA 23.41026ms
[TRT]   Network       CPU 196.43784ms  CUDA 170.62610ms
[TRT]   Post-Process  CPU  0.38522ms  CUDA  0.38526ms
[TRT]   Total         CPU 197.00233ms  CUDA 194.42162ms
[TRT]   ----------------------------------------------

[TRT]   note -- when processing a single image, run 'sudo jetson_clocks' before
                to disable DVFS for more accurate profiling/timing measurements


[TRT]   ----------------------------------------------
[TRT]   Timing Report /usr/local/bin/networks/ped-100/snapshot_iter_70800.caffemodel
[TRT]   ----------------------------------------------
[TRT]   Pre-Process   CPU  0.05755ms  CUDA  3.10344ms
[TRT]   Network       CPU 162.77977ms  CUDA 156.92213ms
[TRT]   Post-Process  CPU  0.36689ms  CUDA  0.36609ms
[TRT]   Total         CPU 163.20421ms  CUDA 160.39166ms
[TRT]   ----------------------------------------------


[TRT]   ----------------------------------------------
[TRT]   Timing Report /usr/local/bin/networks/ped-100/snapshot_iter_70800.caffemodel
[TRT]   ----------------------------------------------
[TRT]   Pre-Process   CPU  0.07578ms  CUDA  1.69823ms
[TRT]   Network       CPU 130.47484ms  CUDA 126.30542ms
[TRT]   Post-Process  CPU  0.39606ms  CUDA  0.39682ms
[TRT]   Total         CPU 130.94669ms  CUDA 128.40047ms
[TRT]   ----------------------------------------------


[TRT]   ----------------------------------------------
[TRT]   Timing Report /usr/local/bin/networks/ped-100/snapshot_iter_70800.caffemodel
[TRT]   ----------------------------------------------
[TRT]   Pre-Process   CPU  0.11370ms  CUDA  1.64562ms
[TRT]   Network       CPU 129.87694ms  CUDA 125.66209ms
[TRT]   Post-Process  CPU  0.38600ms  CUDA  0.38531ms
[TRT]   Total         CPU 130.37665ms  CUDA 127.69302ms
[TRT]   ----------------------------------------------

Can you try launching with --threshold=0.2, or try the facenet or ssd-mobilenet-v2 models, to confirm that the program is working? Does detectnet-camera with this model work for you while your program does not?
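
For example, something like the following (hypothetical invocations; they assume the alternate models were downloaded when jetson-inference was built):

```shell
# lower the detection threshold for the default ped-100 model
./my-camera --threshold=0.2 --width=640 --height=480

# or try a different detection network
./my-camera --network=ssd-mobilenet-v2 --width=640 --height=480
./my-camera --network=facenet --width=640 --height=480
```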

Okay, everything works just fine; I was mixing up the inference methods.

I successfully compiled and ran (1) an image recognition example and (2) an object detection example.

Image Recognition working example:

/*
 * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */


#include "jetson-utils/gstCamera.h"
#include "jetson-utils/glDisplay.h"
#include "cudaFont.h"

#include "jetson-inference/imageNet.h"
#include "jetson-utils/commandLine.h"

#include <signal.h>

#include <jetson-inference/detectNet.h>


bool signal_recieved = false;

void sig_handler(int signo)
{
	if( signo == SIGINT )
	{
		printf("received SIGINT\n");
		signal_recieved = true;
	}
}

int usage()
{
	printf("usage: imagenet-camera [-h] [--network NETWORK] [--camera CAMERA]\n");
	printf("                       [--width WIDTH] [--height HEIGHT]\n\n");
	printf("Classify a live camera stream using an image recognition DNN.\n\n");
	printf("optional arguments:\n");
	printf("  --help           show this help message and exit\n");
	printf("  --camera CAMERA  index of the MIPI CSI camera to use (NULL for CSI camera 0),\n");
	printf("                   or for V4L2 cameras, the /dev/video node to use (/dev/video0).\n");
	printf("                   by default, MIPI CSI camera 0 will be used.\n");
	printf("  --width WIDTH    desired width of camera stream (default is 1280 pixels)\n");
	printf("  --height HEIGHT  desired height of camera stream (default is 720 pixels)\n\n");
	printf("%s\n", imageNet::Usage());

	return 0;
}

int main( int argc, char** argv )
{
	/*
	 * parse command line
	 */
	commandLine cmdLine(argc, argv);

	if( cmdLine.GetFlag("help") )
		return usage();

	
	/*
	 * attach signal handler
	 */
	if( signal(SIGINT, sig_handler) == SIG_ERR )
		printf("\ncan't catch SIGINT\n");


	/*
	 * create the camera device
	 */
	gstCamera* camera = gstCamera::Create(cmdLine.GetInt("width", gstCamera::DefaultWidth),
								   cmdLine.GetInt("height", gstCamera::DefaultHeight),
								   cmdLine.GetString("camera"));
	
	if( !camera )
	{
		printf("\nimagenet-camera:  failed to initialize camera device\n");
		return 0;
	}
	
	printf("\nimagenet-camera:  successfully initialized camera device\n");
	printf("    width:  %u\n", camera->GetWidth());
	printf("   height:  %u\n", camera->GetHeight());
	printf("    depth:  %u (bpp)\n\n", camera->GetPixelDepth());
	

	/*
	 * create recognition network
	 */
	imageNet* net = imageNet::Create(argc, argv);
	
	if( !net )
	{
		printf("imagenet-console:   failed to initialize imageNet\n");
		return 0;
	}


	/*
	 * create display window and overlay font
	 */
	glDisplay* display = glDisplay::Create();
	cudaFont*  font    = cudaFont::Create();
	

	/*
	 * start streaming
	 */
	if( !camera->Open() )
	{
		printf("\nimagenet-camera:  failed to open camera for streaming\n");
		return 0;
	}
	
	printf("\nimagenet-camera:  camera open for streaming\n");
	
	
	/*
	 * processing loop
	 */
	float confidence = 0.0f;
	
	while( !signal_recieved )
	{
		float* imgRGBA = NULL;
		
		// get the latest frame
		if( !camera->CaptureRGBA(&imgRGBA, 1000) )
			printf("\nimagenet-camera:  failed to capture frame\n");

		// classify image
		const int img_class = net->Classify(imgRGBA, camera->GetWidth(), camera->GetHeight(), &confidence);
	
		if( img_class >= 0 )
		{
			printf("imagenet-camera:  %2.5f%% class #%i (%s)\n", confidence * 100.0f, img_class, net->GetClassDesc(img_class));	

			if( font != NULL )
			{
				char str[256];
				sprintf(str, "%05.2f%% %s", confidence * 100.0f, net->GetClassDesc(img_class));
	
				font->OverlayText((float4*)imgRGBA, camera->GetWidth(), camera->GetHeight(),
						        str, 5, 5, make_float4(255, 255, 255, 255), make_float4(0, 0, 0, 100));
			}
		}	

		// update display
		if( display != NULL )
		{
			display->RenderOnce((float*)imgRGBA, camera->GetWidth(), camera->GetHeight());

			// update status bar
			char str[256];
			sprintf(str, "TensorRT %i.%i.%i | %s | %s | Network %.0f FPS", NV_TENSORRT_MAJOR, NV_TENSORRT_MINOR, NV_TENSORRT_PATCH, net->GetNetworkName(), precisionTypeToStr(net->GetPrecision()), 1000.0f / net->GetNetworkTime());
			display->SetTitle(str);	

			// check if the user quit
			if( display->IsClosed() )
				signal_recieved = true;
		}

		net->PrintProfilerTimes();
	}
	
	
	/*
	 * destroy resources
	 */
	printf("imagenet-camera:  shutting down...\n");
	
	SAFE_DELETE(camera);
	SAFE_DELETE(display);
	SAFE_DELETE(net);
	
	printf("imagenet-camera:  shutdown complete.\n");
	return 0;
}

Object Detection working example:

/*
 * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#include "jetson-utils/gstCamera.h"
#include "jetson-utils/glDisplay.h"
#include "cudaFont.h"

#include "jetson-inference/imageNet.h"
#include "jetson-utils/commandLine.h"

#include <signal.h>

#include <jetson-inference/detectNet.h>


bool signal_recieved = false;

void sig_handler(int signo)
{
	if( signo == SIGINT )
	{
		printf("received SIGINT\n");
		signal_recieved = true;
	}
}

int usage()
{
	printf("usage: detectnet-camera [-h] [--network NETWORK] [--camera CAMERA]\n");
	printf("                        [--width WIDTH] [--height HEIGHT]\n\n");
	printf("Locate objects in a live camera stream using an object detection DNN.\n\n");
	printf("optional arguments:\n");
	printf("  --help           show this help message and exit\n");
	printf("  --camera CAMERA  index of the MIPI CSI camera to use (NULL for CSI camera 0),\n");
	printf("                   or for V4L2 cameras, the /dev/video node to use (e.g. /dev/video0).\n");
	printf("                   by default, MIPI CSI camera 0 will be used.\n");
	printf("  --width WIDTH    desired width of camera stream (default is 1280 pixels)\n");
	printf("  --height HEIGHT  desired height of camera stream (default is 720 pixels)\n\n");
	printf("%s\n", detectNet::Usage());

	return 0;
}

int main( int argc, char** argv )
{
	/*
	 * parse command line
	 */
	commandLine cmdLine(argc, argv);

	if( cmdLine.GetFlag("help") )
		return usage();


	/*
	 * attach signal handler
	 */
	if( signal(SIGINT, sig_handler) == SIG_ERR )
		printf("\ncan't catch SIGINT\n");


	/*
	 * create the camera device
	 */
	gstCamera* camera = gstCamera::Create(cmdLine.GetInt("width", gstCamera::DefaultWidth),
								   cmdLine.GetInt("height", gstCamera::DefaultHeight),
								   cmdLine.GetString("camera"));

	if( !camera )
	{
		printf("\ndetectnet-camera:  failed to initialize camera device\n");
		return 0;
	}
	
	printf("\ndetectnet-camera:  successfully initialized camera device\n");
	printf("    width:  %u\n", camera->GetWidth());
	printf("   height:  %u\n", camera->GetHeight());
	printf("    depth:  %u (bpp)\n\n", camera->GetPixelDepth());
	

	/*
	 * create detection network
	 */
	detectNet* net = detectNet::Create(argc, argv);
	
	if( !net )
	{
		printf("detectnet-camera:   failed to load detectNet model\n");
		return 0;
	}


	/*
	 * create openGL window
	 */
	glDisplay* display = glDisplay::Create();

	if( !display ) 
		printf("detectnet-camera:  failed to create openGL display\n");


	/*
	 * start streaming
	 */
	if( !camera->Open() )
	{
		printf("detectnet-camera:  failed to open camera for streaming\n");
		return 0;
	}
	
	printf("detectnet-camera:  camera open for streaming\n");
	
	
	/*
	 * processing loop
	 */
	float confidence = 0.0f;
	
	while( !signal_recieved )
	{
		// capture RGBA image
		float* imgRGBA = NULL;
		
		if( !camera->CaptureRGBA(&imgRGBA, 1000) )
			printf("detectnet-camera:  failed to capture RGBA image from camera\n");

		// detect objects in the frame
		detectNet::Detection* detections = NULL;
	
		const int numDetections = net->Detect(imgRGBA, camera->GetWidth(), camera->GetHeight(), &detections);
		
		if( numDetections > 0 )
		{
			printf("%i objects detected\n", numDetections);
		
			for( int n=0; n < numDetections; n++ )
			{
				printf("detected obj %i  class #%u (%s)  confidence=%f\n", n, detections[n].ClassID, net->GetClassDesc(detections[n].ClassID), detections[n].Confidence);
				printf("bounding box %i  (%f, %f)  (%f, %f)  w=%f  h=%f\n", n, detections[n].Left, detections[n].Top, detections[n].Right, detections[n].Bottom, detections[n].Width(), detections[n].Height()); 
			}
		}	

		// update display
		if( display != NULL )
		{
			// render the image
			display->RenderOnce(imgRGBA, camera->GetWidth(), camera->GetHeight());

			// update the status bar
			char str[256];
			sprintf(str, "TensorRT %i.%i.%i | %s | Network %.0f FPS", NV_TENSORRT_MAJOR, NV_TENSORRT_MINOR, NV_TENSORRT_PATCH, precisionTypeToStr(net->GetPrecision()), 1000.0f / net->GetNetworkTime());
			display->SetTitle(str);

			// check if the user quit
			if( display->IsClosed() )
				signal_recieved = true;
		}

		// print out timing info
		net->PrintProfilerTimes();
	}
	

	/*
	 * destroy resources
	 */
	printf("detectnet-camera:  shutting down...\n");
	
	SAFE_DELETE(camera);
	SAFE_DELETE(display);
	SAFE_DELETE(net);

	printf("detectnet-camera:  shutdown complete.\n");
	return 0;
}

Here is a working CMakeLists.txt file for Object Detection. Change the *.cpp name for your application:

# require CMake 2.8 or greater
cmake_minimum_required(VERSION 2.8)


find_package(jetson-utils)
find_package(jetson-inference)

include_directories(${PROJECT_INCLUDE_DIR} ${PROJECT_INCLUDE_DIR}/jetson-inference ${PROJECT_INCLUDE_DIR}/jetson-utils)
include_directories(/usr/include/gstreamer-1.0 /usr/lib/aarch64-linux-gnu/gstreamer-1.0/include /usr/include/glib-2.0 /usr/include/libxml2 /usr/lib/aarch64-linux-gnu/glib-2.0/include/ /usr/local/include/jetson-utils)


file(GLOB imagenetCameraSources *.cpp)
file(GLOB imagenetCameraIncludes *.h )

find_package(CUDA)

cuda_add_executable(my-camera-detect ${imagenetCameraSources})

target_link_libraries(my-camera-detect jetson-inference)

install(TARGETS my-camera-detect DESTINATION bin)
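With that CMakeLists.txt saved next to the .cpp, an out-of-source build keeps the generated files away from the sources. A sketch of the usual CMake workflow (the in-tree `cmake .` also works):

```shell
mkdir -p build
cd build
cmake ../
make
sudo make install   # optional: installs my-camera-detect under bin/
```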

I also tried the semantic segmentation live camera C++ example from the link below, using a similar CMakeLists.txt file from the previous thread. Segmentation runs, but execution is very slow: only ~2 FPS. I fed the camera 4K video from YouTube and it detected nothing, even when I paused the video.

https://github.com/dusty-nv/jetson-inference/blob/master/examples/segnet-camera/segnet-camera.cpp

I downloaded ALL the pretrained models, but only a few worked.

Is Segmentation meant for the Jetson Nano?

I have been working on new segmentation models based on FCN-ResNet18 with improved performance; they currently live in the ‘pytorch’ branch. See here for updated documentation: https://github.com/dusty-nv/jetson-inference/blob/pytorch/docs/segnet-console-2.md

I am updating the documentation for segnet-camera today and am going to merge this branch into master soon.

Thank you very much. I am following your changes in the Pytorch branch with great interest. I would like to install the nano with semantic segmentation on a multicopter.
Best regards,
Wilhelm

OK folks, the pytorch dev branch with the new segmentation models and Python bindings for segNet have been merged into master.

The docs have been updated, see here:

https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md
https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-camera-2.md

Let me know if you encounter any issues with using the updated master branch, thanks.

Got an error compiling the segnet-camera.cpp example from here:

https://github.com/dusty-nv/jetson-inference/blob/master/examples/segnet-camera/segnet-camera.cpp

I rebuilt everything from scratch, especially to get the new Segmentation models.

The “cmake .” process seemed to work:

dlinano@jetson-nano:~/my-camera-seg2$ cmake .
-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Found CUDA: /usr/local/cuda (found version "10.0") 
-- Configuring done
-- Generating done
-- Build files have been written to: /home/dlinano/my-camera-seg2
dlinano@jetson-nano:~/my-camera-seg2$

However the “make” process failed:

dlinano@jetson-nano:~/my-camera-seg2$ make
Scanning dependencies of target my-camera-seg
[ 50%] Building CXX object CMakeFiles/my-camera-seg.dir/my-camera-seg2.cpp.o
/home/dlinano/my-camera-seg2/my-camera-seg2.cpp: In function ‘int usage()’:
/home/dlinano/my-camera-seg2/my-camera-seg2.cpp:84:25: error: ‘Usage’ is not a member of ‘segNet’
  printf("%s\n", segNet::Usage());
                         ^~~~~
/home/dlinano/my-camera-seg2/my-camera-seg2.cpp: In function ‘int main(int, char**)’:
/home/dlinano/my-camera-seg2/my-camera-seg2.cpp:141:7: error: ‘class segNet’ has no member named ‘SetOverlayAlpha’; did you mean ‘SetGlobalAlpha’?
  net->SetOverlayAlpha(cmdLine.GetFloat("alpha", 120.0f));
       ^~~~~~~~~~~~~~~
       SetGlobalAlpha
/home/dlinano/my-camera-seg2/my-camera-seg2.cpp:156:75: error: invalid conversion from ‘long unsigned int’ to ‘void**’ [-fpermissive]
  if( !cudaAllocMapped((void**)&imgOverlay, width * height * sizeof(float) * 4) )
                                            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/home/dlinano/my-camera-seg2/my-camera-seg2.cpp:156:78: error: too few arguments to function ‘bool cudaAllocMapped(void**, void**, size_t)’
  if( !cudaAllocMapped((void**)&imgOverlay, width * height * sizeof(float) * 4) )
                                                                              ^
In file included from /home/dlinano/my-camera-seg2/my-camera-seg2.cpp:44:0:
/usr/local/include/jetson-utils/cudaMappedMemory.h:34:13: note: declared here
 inline bool cudaAllocMapped( void** cpuPtr, void** gpuPtr, size_t size )
             ^~~~~~~~~~~~~~~
/home/dlinano/my-camera-seg2/my-camera-seg2.cpp:162:76: error: invalid conversion from ‘long unsigned int’ to ‘void**’ [-fpermissive]
  if( !cudaAllocMapped((void**)&imgMask, width/2 * height/2 * sizeof(float) * 4) )
                                         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/home/dlinano/my-camera-seg2/my-camera-seg2.cpp:162:79: error: too few arguments to function ‘bool cudaAllocMapped(void**, void**, size_t)’
  if( !cudaAllocMapped((void**)&imgMask, width/2 * height/2 * sizeof(float) * 4) )
                                                                               ^
In file included from /home/dlinano/my-camera-seg2/my-camera-seg2.cpp:44:0:
/usr/local/include/jetson-utils/cudaMappedMemory.h:34:13: note: declared here
 inline bool cudaAllocMapped( void** cpuPtr, void** gpuPtr, size_t size )
             ^~~~~~~~~~~~~~~
/home/dlinano/my-camera-seg2/my-camera-seg2.cpp:236:193: error: ‘class segNet’ has no member named ‘GetNetworkFPS’; did you mean ‘GetNetworkType’?
 rintf(str, "TensorRT %i.%i.%i | %s | %s | Network %.0f FPS", NV_TENSORRT_MAJOR, NV_TENSORRT_MINOR, NV_TENSORRT_PATCH, net->GetNetworkName(), precisionTypeToStr(net->GetPrecision()), net->GetNetworkFPS());
                                                                                                                                                                                            ^~~~~~~~~~~~~
                                                                                                                                                                                                 GetNetworkType
CMakeFiles/my-camera-seg.dir/build.make:62: recipe for target 'CMakeFiles/my-camera-seg.dir/my-camera-seg2.cpp.o' failed
make[2]: *** [CMakeFiles/my-camera-seg.dir/my-camera-seg2.cpp.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/my-camera-seg.dir/all' failed
make[1]: *** [CMakeFiles/my-camera-seg.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2
dlinano@jetson-nano:~/my-camera-seg2$

It looks like you are building a separate application. Did you ‘sudo make install’ the new jetson-inference first? Your program appears to be finding the old headers.

Yes, I started from scratch. I first deleted the jetson-inference folder, then followed these steps:

https://github.com/dusty-nv/jetson-inference/blob/master/docs/building-repo-2.md

I followed the steps starting from my home directory, not the bottom-most directory.

Does the jetson-inference project compile ok, but not your independent project? The compilation errors you posted have to do with not finding new functions that were added to segNet, so the old headers might still be in /usr/local/include. Try removing them and copying the updated ones over again with sudo make install:

$ cd /usr/local/include
$ sudo rm -r -f jetson-inference
$ sudo rm -r -f jetson-utils
$ cd <jetson-inference>/build
$ cmake ../
$ make
$ sudo make install
$ sudo ldconfig

Okay, I missed steps 7 & 8. After going through it all again, my independent project compiles correctly and the segmentation live camera test works. I tried most of the segnet networks and they all function properly. The Jetson Nano gets very, very hot.

I was a bit thrown off by the installation: the “inference” material gets installed in my home directory, but the headers go to /usr/local/include.

Thank you for the HELLO AI WORLD github documentation. It was very helpful, and I can build from this. I cannot overstate how helpful it would be to give the same treatment to Direct Register Access GPIO, I2C, SPI, PWM, etc.

I’ve been running C++ object detection based on the Hello AI World project (link below) using an RPi v2 camera. All works fine. My question is about FPS performance.

https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-camera-2.md

The benchmark tests below indicate that 39 FPS can be reached using the SSD-Mobilenet-v2 network at 300×300. I tried changing the camera width and height from the default down to 640x480, but the best performance is ~19 FPS. It’s as if the frame rate is hardcoded somewhere, but I cannot find it. I tried running 300x300, but I get an “(Argus) Error Timeout: (propagating from …SocketClientDispatch.cpp” etc.

https://devtalk.nvidia.com/default/topic/1050377/jetson-nano/deep-learning-inference-benchmarking-instructions/

How can I achieve the advertised 39 FPS Detection using C++ with the SSD MobileNet-v2 network using an RPI v2 camera?
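(Not an authoritative answer, but a hedged note for anyone comparing against the benchmark numbers: those runs are typically taken with the Nano’s clocks locked at maximum, and the network’s 300×300 input resolution is independent of the camera resolution — frames are resized before inference, so camera width/height mainly affects capture and pre/post-processing cost. A sketch of an invocation along those lines, assuming the standard jetson-inference build:)

```shell
sudo nvpmodel -m 0    # select the 10W (MAXN) power mode
sudo jetson_clocks    # lock CPU/GPU/EMC clocks at their maximums
./detectnet-camera --network=ssd-mobilenet-v2 --width=640 --height=480
```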