Cannot run imagenet-camera example

I am attempting to run the imagenet-camera example code from here: https://github.com/dusty-nv/jetson-inference/tree/master/examples/imagenet-camera. However I get the following error when I try to run make:

[ 50%] Building CXX object CMakeFiles/imagenet-camera.dir/imagenet-camera.cpp.o
/media/agribrink/Storage/develop/jetson-inference/examples/imagenet-camera/imagenet-camera.cpp:23:10: fatal error: gstCamera.h: No such file or directory
 #include "gstCamera.h"
          ^~~~~~~~~~~~~
compilation terminated.
CMakeFiles/imagenet-camera.dir/build.make:62: recipe for target 'CMakeFiles/imagenet-camera.dir/imagenet-camera.cpp.o' failed
make[2]: *** [CMakeFiles/imagenet-camera.dir/imagenet-camera.cpp.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/imagenet-camera.dir/all' failed
make[1]: *** [CMakeFiles/imagenet-camera.dir/all] Error 2
Makefile:130: recipe for target 'all' failed
make: *** [all] Error 2

If I use "make -L //jetson-inference/build/aarch64/include" it seems to at least build, but then attempting to run it with "imagenet-camera --model=//jetson-inference/python/training/imagenet/RoadDetection" earns me the following error:

[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS, camera 0
[gstreamer] gstCamera pipeline string:
nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_NVARGUS, camera 0

imagenet-camera: successfully initialized camera device
width: 1280
height: 720
depth: 12 (bpp)

[TRT] imageNet -- failed to initialize.
imagenet-console: failed to initialize imageNet

My goal is to eventually create an application which uses the camera to classify an image and return a status. But to begin developing this I need the example to at least compile and run.

Hi gstewart, are you trying to build just imagenet-camera, or the whole project? The include dir should be set in the master CMakeLists.txt.

Regarding loading your custom model, it appears that you specified a directory as opposed to a path to the model file (like a .caffemodel or .onnx).
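For example, a command line pointing --model at the actual model file might look like this (paths hypothetical, using the ONNX input/output layer names discussed below):

imagenet-camera --model=/path/to/RoadDetection/your_model.onnx \
                --input_blob=input_0 --output_blob=output_0 \
                --labels=/path/to/labels.txt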

I was trying to build just the camera example, because it is only the camera code I am interested in. I just want to be able to build and run it.

I will be sure to watch the location of the custom model the next time I build it.

The CMakeLists of imagenet-camera isn’t meant to be built independently of the jetson-inference project. See my-recognition or the ‘Coding your own Image Recognition Program (C++)’ step of the tutorial for an example external CMakeLists.txt

Oh I see. Yes, that example compiles properly. I suppose I can just make the modification to that file. That is very good information to know.
In regards to the custom model, I would like to specify it in the code. I imagine I must modify the line "imageNet* net = imageNet::Create(imageNet::GOOGLENET);" and replace the imageNet::GOOGLENET part with a path to the ONNX file of my custom model.

See this overload of the imageNet::Create() function for loading a custom model from code:

https://github.com/dusty-nv/jetson-inference/blob/b7757bf7a48ffa849b21a3b0824eb2c327edee5b/c/imageNet.h#L112

For an ONNX model, pass NULL for the prototxt_path and NULL for the mean_binary. The class_labels parameter should be set to the path of your class labels file. The input and output parameters should be the names of the input/output layers in your ONNX model (e.g. "input_0" and "output_0").

imageNet* net = imageNet::Create(NULL, "/your/path/your_model.onnx", 
                                 NULL, "/your/path/your_classes.txt", 
                                 "input_0", "output_0");

I might have answered my own question. It seems there is an overloaded version of the Create() function intended for custom models. It looks like this:
/**
 * Load a new network instance
 * @param prototxt_path File path to the deployable network prototxt
 * @param model_path File path to the caffemodel
 * @param mean_binary File path to the mean value binary proto (can be NULL)
 * @param class_labels File path to list of class name labels
 * @param input Name of the input layer blob.
 * @param output Name of the output layer blob.
 * @param maxBatchSize The maximum batch size that the network will support and be optimized for.
 */
static imageNet* Create( const char* prototxt_path, const char* model_path,
                         const char* mean_binary, const char* class_labels,
                         const char* input=IMAGENET_DEFAULT_INPUT,
                         const char* output=IMAGENET_DEFAULT_OUTPUT,
                         uint32_t maxBatchSize=DEFAULT_MAX_BATCH_SIZE,
                         precisionType precision=TYPE_FASTEST,
                         deviceType device=DEVICE_GPU, bool allowGPUFallback=true );

I believe I have managed to use it somewhat properly. I used "/jetson-inference/data/networks/ResNet-18/deploy.prototxt" for the first parameter, my custom ONNX file for the second parameter, NULL for the mean_binary, and the labels.txt file from the dataset as the fourth parameter; the input and output layer blobs are input_0 and output_0 respectively, and for the max batch size I just selected 300.
If any of these parameters are incorrect please let me know. The model accuracy seems very low at the moment; it almost seems to guess the opposite class for each image.
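For reference, a sketch of that call with the parameters as described (paths shortened/hypothetical):

imageNet* net = imageNet::Create("/jetson-inference/data/networks/ResNet-18/deploy.prototxt",
                                 "/path/to/my_model.onnx",
                                 NULL, "/path/to/labels.txt",
                                 "input_0", "output_0", 300);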

I’m sorry, Dusty, I did not see your response. I will make those changes once the model has finished re-training. Thank you for all of your help in this endeavour.

I would just leave the max batch size at the default of 1 (this is for inference), as a number like 300 could consume extra resources.

What was the accuracy that the training script reported? You may need to add more training images to your dataset. Also check that your class descriptions in the label file aren’t reversed and are in the same order as the dataset image directories are on disk. PyTorch uses the ordering of the directories on disk to get the class IDs (and does not use the label file), whereas the jetson-inference code gets the class IDs from how they are ordered in the label file.
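For example, with a hypothetical two-class dataset, torchvision’s ImageFolder assigns class IDs from the sorted directory names, so the labels file should list the classes in that same order:

dataset/train/class_a/  ->  class ID 0  ->  labels.txt line 1: class_a
dataset/train/class_b/  ->  class ID 1  ->  labels.txt line 2: class_b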

The accuracy was surprisingly high (~95%) but the predicted class was incorrect. I suspect the reason was that I grouped multiple classes into one to simplify the results, which meant I combined multiple image types into one class. I think this may have confused the model, so I am reverting to having 4 distinct classes. I will verify that the classes are in the correct order as well.

After the model was retrained the accuracy went back up and it seems to be guessing (mostly) the correct classes. The final step in the process is to use the attached webcam to take an image, and then give that image to the my-recognition application.

I can see that the camera-capture tool does something very similar to what I would like (that is, saving an image from the webcam stream); however, I have learned that reusing part of this tool’s code does not work, as it is part of a larger project.

What I am currently attempting is to use v4l2 to take an image and then feed that into my-recognition, but it does not like the MJPG format that the image is saved in, giving me the error:

[image] failed to load 'webcam_output.jpeg'
[image] (error: unknown marker)

What is the recommended course of action for feeding webcam images into the my-recognition example?

You could check the source of the imagenet-camera program for an example of running classification networks on a camera stream. See here for a sample command line for launching it with your custom model:

https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-cat-dog.md

You could then modify it as above to hard-code in your custom model.

To run the imagenet-camera program on a V4L2 camera, launch it with the --camera=/dev/video0 argument included in the command line (substitute your V4L2 camera’s /dev/video* device file if it is different than /dev/video0). You can also set the camera width and height to a resolution that your camera supports with the --width and --height arguments. For more info, see the camera arguments documentation on this page.
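For example, a hypothetical launch for a 640x480 V4L2 camera with a custom ONNX model might look like:

imagenet-camera --camera=/dev/video0 --width=640 --height=480 \
                --model=/path/to/your_model.onnx --input_blob=input_0 \
                --output_blob=output_0 --labels=/path/to/labels.txt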

Using that would be perfect; the issue is I need to run it as a standalone project. Previously I have not had success with building the imagenet-camera application, getting errors like "could not find gst/gst.h" and assorted other missing files. If I could build that application and its libraries by themselves and modify it, that would be ideal.

I have had success running the camera with v4l2, and saving the images is not an issue. However, the my-recognition app does not like the JPG format the images are saved in. I imagine it must have something to do with the images being cast to a float4* before being saved in https://github.com/dusty-nv/camera-capture/blob/2499bf4220276a2be90c2d204d781d4836b26493/captureWindow.cpp.

saveImageRGBA() takes a float4 RGBA image and converts it to 8-bit RGB when saving it. Are you using a different library or tool to save the images?
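If it would help to capture and save frames from your own code, here is a rough sketch using gstCamera and saveImageRGBA() (a sketch only, assuming the float4-based API of this version of jetson-utils; zeroCopy=true is used so the captured buffer is CPU-accessible for saving):

#include <jetson-utils/gstCamera.h>
#include <jetson-utils/loadImage.h>  // declares saveImageRGBA() (header may differ by version)

int main()
{
    // open the camera (V4L2 device string assumed)
    gstCamera* camera = gstCamera::Create(1280, 720, "/dev/video0");

    if( !camera || !camera->Open() )
        return 1;   // failed to create or start the camera

    float* imgRGBA = NULL;

    // capture one frame; zeroCopy=true maps the buffer into shared CPU/GPU memory
    if( camera->CaptureRGBA(&imgRGBA, 1000, true) )
        saveImageRGBA("webcam_output.jpg", (float4*)imgRGBA,
                      camera->GetWidth(), camera->GetHeight());

    camera->Close();
    return 0;
}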

OK, I was able to build imagenet-camera independently using the following CMakeLists.txt for standalone compilation, and also by slightly changing the include statements at the top of imagenet-camera.cpp to reflect <jetson-inference/imageNet.h>, <jetson-utils/gstCamera.h>, and so on:

# require CMake 2.8 or greater
cmake_minimum_required(VERSION 2.8)

# declare my-recognition-camera project
project(my-recognition-camera)

# import jetson-inference and jetson-utils packages.
# note that if you didn't do "sudo make install"
# while building jetson-inference, this will error.
find_package(jetson-utils)
find_package(jetson-inference)

# CUDA is required
find_package(CUDA)

# add GStreamer and GLib include paths
include_directories(/usr/include/gstreamer-1.0 /usr/lib/aarch64-linux-gnu/gstreamer-1.0/include /usr/include/glib-2.0 /usr/include/libxml2 /usr/lib/aarch64-linux-gnu/glib-2.0/include/)

# compile the my-recognition-camera program
cuda_add_executable(my-recognition-camera imagenet-camera.cpp)

# link my-recognition-camera to the jetson-inference library
target_link_libraries(my-recognition-camera jetson-inference)

Note the include_directories statement in this CMakeLists.txt above that resolves the GStreamer and GLib paths.

And then the patched include statements at the top of imagenet-camera.cpp:

#include <jetson-utils/gstCamera.h>
#include <jetson-utils/glDisplay.h>
#include <jetson-utils/cudaFont.h>
#include <jetson-utils/commandLine.h>

#include <jetson-inference/imageNet.h>

The CMakeLists.txt above builds the project as the my-recognition-camera binary (you can change that to whatever you wish).
To test it, I created a new directory outside of the jetson-inference project called my-recognition-camera/, copied imagenet-camera.cpp there, applied the include patches above to imagenet-camera.cpp, and placed the CMakeLists.txt from above there as well.
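From there, the usual out-of-tree CMake build should work (assuming jetson-inference was previously installed with sudo make install, so that find_package() can locate it):

cd my-recognition-camera
mkdir build
cd build
cmake ../
make
./my-recognition-camera --camera=/dev/video0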