Here's an object detection example in 10 lines of Python code using SSD-Mobilenet-v2 (90-class MS-COCO) with TensorRT, which runs at 25 FPS on Jetson Nano on a live camera stream with OpenGL visualization:
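The snippet referenced above isn't reproduced in this excerpt; a minimal sketch using the legacy jetson.inference / jetson.utils Python bindings of that era (the V4L2 device path and detection threshold here are assumptions) looks roughly like this:

import jetson.inference
import jetson.utils

# load the detection network (SSD-Mobilenet-v2 pretrained on 90-class MS-COCO)
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# open the camera and an OpenGL window on the Jetson
camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")
display = jetson.utils.glDisplay()

while display.IsOpen():
    img, width, height = camera.CaptureRGBA()     # frame lives in shared CUDA memory
    detections = net.Detect(img, width, height)   # run inference and overlay the boxes
    display.RenderOnce(img, width, height)
    display.SetTitle("Object Detection | Network {:.0f} FPS".format(1000.0 / net.GetNetworkTime()))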
I have run into a minor snag. I chose to install just PyTorch v1.1.0 for Python 3.6 when I was doing the initial install. On the next step, when I tried to run imagenet-console.py, I got an error telling me that a Python 2.7 .h file was missing (I didn't grab the full error message, sorry). I then ran the C++ example without a problem before going back to the PyTorch installer and selecting the missing package.
I again ran the following
$ cd jetson-inference/build
$ ./install-pytorch.sh
$ make
$ sudo make install
and then tested it out as follows
$ ./imagenet-console.py --network=googlenet orange_0.jpg output_0.jpg
jetson.inference.__init__.py
Traceback (most recent call last):
  File "./imagenet-console.py", line 24, in <module>
    import jetson.inference
  File "/usr/lib/python2.7/dist-packages/jetson/inference/__init__.py", line 4, in <module>
    from jetson_inference_python import *
ImportError: libjetson-utils.so: cannot open shared object file: No such file or directory
Should I set up a new image and start over, selecting both Python packages the first time through?
display.SetTitle doesn't show "Object Detection | Network {:.0f} FPS".format(1000.0 / net.GetNetworkTime()).
Should this text be printed on the image, and how can I change the text color?
Hmm, that is strange, that text shows up in the window's title bar for me.
You can see an example of text rendering in imagenet-camera.py. To change the color, modify the font.White argument to one of the predefined colors listed in the jetson.utils Python package documentation (or pass in your own RGB tuple).
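For reference, a minimal sketch of that text-rendering path with the legacy jetson.utils bindings (the file names and the overlay string are just placeholders):

import jetson.utils

# load a test image into shared CUDA memory (the filename is just an example)
img, width, height = jetson.utils.loadImageRGBA("orange_0.jpg")

font = jetson.utils.cudaFont()

# the last two arguments are the text color and the background color --
# swap font.White for e.g. font.Green, or pass your own (r, g, b, a) tuple (0-255)
font.OverlayText(img, width, height, "Object Detection | 20 FPS",
                 5, 5, font.White, font.Gray40)

# save the result to disk to check the rendering
jetson.utils.saveImageRGBA("output_overlay.jpg", img, width, height)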
Hi,
I know what it was. My host is a Windows laptop.
I started the program with the Python code directly from PuTTY, and then the VNC viewer only showed the image with the object detection, because there was no Windows-style title bar for it there.
Instead I had to start a desktop like xfce4 from PuTTY, open an XTerm console from that desktop in the VNC viewer, and start the program there. Then it shows up with its own title bar.
It works at 20 FPS.
Hi simone.rinaldi, there shouldn't be CUDA memory being allocated during the main loop, as it should all be pre-allocated; however, I will look into it to make sure. As indicated in the status bar text and terminal output, the framerate given is for the network time; depending on your camera, the global framerate may be lower. The visualization code that draws the bounding boxes and renders the image adds overhead; it is provided primarily for testing purposes, since the device can typically be deployed to headless systems without a display.
I'm using a Logitech C270 that has video output at 720p 30 FPS.
Anyway I was able to use net.Detect with an IP camera, as shown here:
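The linked code isn't reproduced here, but the pattern being described is roughly the following OpenCV capture plus cudaFromNumpy() upload (the RTSP URL is a placeholder):

import cv2
import numpy as np
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# placeholder RTSP address -- substitute the camera's real stream URL
cap = cv2.VideoCapture("rtsp://user:password@192.168.1.108:554/stream")

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # OpenCV delivers BGR uint8 in CPU memory; converting to float RGBA and
    # uploading to CUDA is the 40-70 ms per-frame overhead discussed below
    rgba = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA).astype(np.float32)
    cuda_img = jetson.utils.cudaFromNumpy(rgba)
    detections = net.Detect(cuda_img, frame.shape[1], frame.shape[0])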
And the result is always the same; in fact the conversion from numpy to CUDA takes 40-70 ms, dropping frames.
I understand what you say, "the framerate given is for the network time…", but the network requires data formatted in a particular way, so the data preparation cannot be considered separately from the network execution time.
PS: the IP camera is a Dahua IPC-HFW1431S, 4K @ 25 FPS, configured to 1080p @ 25 FPS.
Using an IP camera has extra overhead for networking and depacketization, and in the case of this compressed camera, for decoding and going through OpenCV. The example numpy-to-CUDA routine wasn't intended for realtime use, as the incoming numpy array can be of arbitrary dimensions and format and requires extra data conversion. If you want a path with less overhead, you should eliminate the use of OpenCV, which suboptimally copies the memory and stores it in a numpy array. You can allocate the CUDA memory from Python with the jetson.utils.cudaAllocMapped() function.
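As a rough sketch of that allocation (note the exact cudaAllocMapped() signature depends on the jetson-utils version; newer releases take width/height/format, while older ones take a size in bytes):

import jetson.utils

# allocate a zero-copy (CPU/GPU shared) image buffer once, outside the capture
# loop, and reuse it for every frame instead of converting a fresh numpy array
img = jetson.utils.cudaAllocMapped(width=1280, height=720, format="rgba32f")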
Pre-processing the data in CUDA into the planar NCHW format that the DNN expects does not take that long, on average around 0.5 milliseconds; you can see this in the Timing Report in the console. What is taking the extra time is the use of OpenCV capture, which stores the image in a CPU numpy array, plus the subsequent numpy conversion. You can also set the camera to a lower resolution, because the object detection DNN downsamples the input anyway.
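If you want those per-stage numbers from Python rather than from the console Timing Report, the networks also expose a profiler printout; a short sketch (the image filename is just an example):

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
img, width, height = jetson.utils.loadImageRGBA("orange_0.jpg")
detections = net.Detect(img, width, height)

# dump per-stage timing (pre-processing, network, post-processing) to the console
net.PrintProfilerTimes()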
I'm having issues running the live camera output when working with the Hello AI World exercises on JupyterLab. I am running the commands through the terminal launcher that the GitHub pages say to run through the Ubuntu → right click → open terminal area. It works perfectly on Ubuntu, outputting the live camera object detection and segmentation exercises, but I cannot seem to get this same live camera output on JupyterLab.
I've not tried these through JupyterLab - the camera apps in Hello AI World create an OpenGL display on the Jetson. Do you have a display directly connected to your Nano, or are you trying to view them remotely over the network (headless)?
I've tried it on a monitor, where it works perfectly on the OpenGL display it creates on the Ubuntu OS. What I'm trying to see is whether a similar output can be generated in headless mode. Thanks for the reply and help!
Hi lramos13, viewing the OpenGL video headlessly over SSH forwarding isn't supported by the project. Even if it were to work, it would display the video very slowly. Such an approach would typically use video compression and RTP/RTSP streaming instead. There is a gstEncoder class included with jetson-utils that works with RTP, but admittedly I have not used it for that in some time.
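For reference, later releases of jetson-utils wrap that encoder in the videoSource/videoOutput interface, so a headless RTP stream to a client PC looks roughly like this (the newer API and the remote address are assumptions relative to the version discussed in this thread):

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# capture from the local camera and stream compressed H.264/RTP to a remote host
# (replace 192.168.1.50:1234 with the machine running the viewer, e.g. VLC/GStreamer)
camera = jetson.utils.videoSource("/dev/video0")
output = jetson.utils.videoOutput("rtp://192.168.1.50:1234")

while output.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)
    output.Render(img)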
I have a really basic question: I have a pretrained network that expects a 224x224 image, but I can't use it until I figure out how to crop and resize the 1280x720 camera image to the dimensions the network is expecting. I've been searching through the docs, but there is no information on how to prepare the input image or why the aspect ratio doesn't seem to matter.