camera demos seem to fail due to not finding imports?

Has anyone managed to get detectnet-camera.py to work?
All my own OpenCV programs work, but none of the examples do…

Very strange… maybe I flashed the wrong image??
I just followed the instructions in Getting Started.

I need to get this running soon; it has been a week or so, and I have not yet started on the difficult stuff: writing my own Python code to measure star fields…

Someone must know how to get the demos to work?

Hi web7ptya, did you run 'sudo make install' after running 'make' when building the jetson-inference project? Also try running 'sudo ldconfig' after the 'sudo make install' step.

Which Python version are you running? Python 2.7 or Python 3.6? If 3.6, you need to run 'sudo apt-get install libpython3-dev' before running cmake.

Thanks Dusty_nv.

I am running Python 3.6, and I don't remember doing the 'sudo apt-get install libpython3-dev' step.
However, I did run 'sudo make install', but not 'sudo ldconfig'.

So I am not sure of the best process to step through…
Step 1: the ldconfig bit
Step 2: install libpython3-dev
Step 3: cmake (or maybe sudo cmake)

I assume this is all done in the jetson-inference directory.

Regards,
John

Dusty_nv

I have been going through the build process in the Getting Started guide.
I get a few warnings about a directory not having the right permissions for the pip wheel cache:

WARNING: The directory '/home/john/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.

However, it all came to a stop when I tried to install scipy…

Failed to build scipy
ERROR: Could not build wheels for scipy which use PEP 517 and cannot be installed directly

regards,
John

Dusty_nv
I tried keras also…

Full error trail:

Requirement already satisfied: six>=1.9.0 in ./.virtualenvs/deep_learning/lib/python3.6/site-packages (from keras) (1.12.0)
Requirement already satisfied: keras-applications>=1.0.8 in ./.virtualenvs/deep_learning/lib/python3.6/site-packages (from keras) (1.0.8)
Building wheels for collected packages: scipy
WARNING: Building wheel for scipy failed: [Errno 13] Permission denied: '/home/john/.cache/pip/wheels/4f'
Failed to build scipy
Building wheels for collected packages: pyyaml
WARNING: Building wheel for pyyaml failed: [Errno 13] Permission denied: '/home/john/.cache/pip/wheels/d9'
Failed to build pyyaml
ERROR: Could not build wheels for scipy which use PEP 517 and cannot be installed directly

When is it trying to install scipy during the jetson-inference build? I can't recall scipy being part of jetson-inference; maybe it comes from PyTorch?

Does Python 2.7 run the script ok?

See here for Python 3.6 instructions:

[url]https://github.com/dusty-nv/jetson-inference/blob/master/docs/building-repo-2.md#python-development-packages[/url]

You need to install the packages from that step with apt-get, then run 'cmake ../', then 'make', followed by 'sudo make install' and 'sudo ldconfig'.
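Put together, the whole sequence would look something like this (the ~/jetson-inference/build directory is an assumption based on the layout in the linked docs, so adjust the path to wherever you cloned and built the repo):

sudo apt-get install libpython3-dev python3-numpy   # Python 3.6 dev packages from the linked step
cd ~/jetson-inference/build
cmake ../
make
sudo make install
sudo ldconfig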

It would seem the write permissions of your user's pip cache are off for some reason.
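If so, one way to check and fix the ownership (a suggestion on my part, not something verified against your system) would be:

ls -ld ~/.cache/pip                        # see who owns the cache
sudo chown -R $USER:$USER ~/.cache/pip     # reclaim it if root owns it

Alternatively, running pip with sudo's -H flag, as the warning itself suggests, keeps root from writing into your user's cache in the first place.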

Ok back to basics…
I know this works… But not on the Nano?

import cv2

# https://github.com/Itseez/opencv/blob/master/data/haarcascades/haarcascade_frontalface_default.xml
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
# https://github.com/Itseez/opencv/blob/master/data/haarcascades/haarcascade_eye.xml
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')

cap = cv2.VideoCapture(0)

while True:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)

    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = img[y:y+h, x:x+w]

        eyes = eye_cascade.detectMultiScale(roi_gray)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)

    cv2.imshow('img', img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()

It gets an error on the faces = face_cascade.detectMultiScale(...) line.
The image is good, as I have tried it with the face/eye detection taken out;
the gray image is fine too…
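One common cause of an error on exactly that line is the cascade XML files not being found at the paths given to cv2.CascadeClassifier, which silently produces an empty classifier that then fails inside detectMultiScale. A quick check, using OpenCV's standard empty() method (a general suggestion, not something confirmed on this particular system):

if face_cascade.empty() or eye_cascade.empty():
    raise IOError('cascade XML files not found; check their paths')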

I can probably make some progress if I can get this simple program to work.
regards,
John

Hi John, I'm not familiar with that OpenCV Python example, so feel free to create a new post about it. Let's keep this topic about your issue with getting jetson-inference Python to work.

Dusty_nv
OK

john@john-desktop:~/jetson-inference/python/examples$ python imagenet-camera.py

jetson.inference.__init__.py
jetson.inference -- initializing Python 2.7 bindings...
jetson.inference -- registering module types...
jetson.inference -- done registering module types
jetson.inference -- done Python 2.7 binding initialization
jetson.utils.__init__.py
jetson.utils -- initializing Python 2.7 bindings...
jetson.utils -- registering module functions...
jetson.utils -- done registering module functions
jetson.utils -- registering module types...
jetson.utils -- done registering module types
jetson.utils -- done Python 2.7 binding initialization
jetson.inference -- PyTensorNet_New()
jetson.inference -- PyImageNet_Init()
jetson.inference -- imageNet loading build-in network 'googlenet'

imageNet -- loading classification network model from:
-- prototxt networks/googlenet.prototxt
-- model networks/bvlc_googlenet.caffemodel
-- class_labels networks/ilsvrc12_synset_words.txt
-- input_blob 'data'
-- output_blob 'prob'
-- batch_size 1

[TRT] TensorRT version 5.1.6
[TRT] loading NVIDIA plugins...
[TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[TRT] Plugin Creator registration succeeded - NMS_TRT
[TRT] Plugin Creator registration succeeded - Reorg_TRT
[TRT] Plugin Creator registration succeeded - Region_TRT
[TRT] Plugin Creator registration succeeded - Clip_TRT
[TRT] Plugin Creator registration succeeded - LReLU_TRT
[TRT] Plugin Creator registration succeeded - PriorBox_TRT
[TRT] Plugin Creator registration succeeded - Normalize_TRT
[TRT] Plugin Creator registration succeeded - RPROI_TRT
[TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[TRT] completed loading NVIDIA plugins.
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file /usr/local/bin/networks/bvlc_googlenet.caffemodel.1.1.GPU.FP16.engine
[TRT] loading network profile from engine cache... /usr/local/bin/networks/bvlc_googlenet.caffemodel.1.1.GPU.FP16.engine
[TRT] device GPU, /usr/local/bin/networks/bvlc_googlenet.caffemodel loaded
[TRT] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
[TRT] device GPU, CUDA engine context initialized with 2 bindings
[TRT] binding -- index 0
-- name 'data'
-- type FP32
-- in/out INPUT
-- # dims 3
-- dim #0 3 (CHANNEL)
-- dim #1 224 (SPATIAL)
-- dim #2 224 (SPATIAL)
[TRT] binding -- index 1
-- name 'prob'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1000 (CHANNEL)
-- dim #1 1 (SPATIAL)
-- dim #2 1 (SPATIAL)
[TRT] binding to input 0 data binding index: 0
[TRT] binding to input 0 data dims (b=1 c=3 h=224 w=224) size=602112
[TRT] binding to output 0 prob binding index: 1
[TRT] binding to output 0 prob dims (b=1 c=1000 h=1 w=1) size=4000
device GPU, /usr/local/bin/networks/bvlc_googlenet.caffemodel initialized.
[TRT] networks/bvlc_googlenet.caffemodel loaded
imageNet -- loaded 1000 class info entries
networks/bvlc_googlenet.caffemodel initialized.
jetson.utils -- PyFont_New()
jetson.utils -- PyFont_Init()
jetson.utils -- PyCamera_New()
jetson.utils -- PyCamera_Init()
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS, camera 0
[gstreamer] gstCamera pipeline string:
nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1024, height=(int)768, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_NVARGUS, camera 0
jetson.utils -- PyDisplay_New()
jetson.utils -- PyDisplay_Init()
[OpenGL] glDisplay -- X screen 0 resolution: 1366x768
[OpenGL] glDisplay -- display device initialized
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer msg new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv0
[gstreamer] gstreamer msg stream-start ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvarguscamerasrc0
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
Segmentation fault (core dumped)

This is the result of:
john@john-desktop:~/jetson-inference/python/examples$ python imagenet-camera.py
The same happens with Python 3.

The camera works fine as device 0 in cv2.
regards,
john

Thanks for posting the log - it looks like you are able to import the jetson.inference library in Python 2.7 and 3.6 now.

What camera are you using? By default the script will try to use a MIPI CSI camera, and since GST_ARGUS is failing, it is unable to find/connect to one.

If you are using a USB webcam, you should pass the V4L2 device (e.g. /dev/video0) to the --camera argument when starting the script (e.g. python imagenet-camera.py --camera=/dev/video0). For more info, see the documentation of the --camera command-line argument here:

[url]https://github.com/dusty-nv/jetson-inference/blob/master/docs/imagenet-camera-2.md#running-the-live-camera-recognition-demo[/url]
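For reference, per those docs, the two camera types are selected like this (the exact device node is system-dependent; /dev/video0 is just the usual default):

$ python imagenet-camera.py --camera=0             # MIPI CSI sensor 0 (the default)
$ python imagenet-camera.py --camera=/dev/video0   # V4L2 USB webcam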

Dusty_nv

Ok that was the answer… Working

Is there a face recognition example?

Thanks
John

Hi John, detectNet comes with a face detection DNN model -

[url]https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-camera-2.md[/url]
[url]https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-console-2.md[/url]

In the free Jetson Nano DLI course, you also train your own DNN to locate your facial features with regression:

[url]https://courses.nvidia.com/courses/course-v1:DLI+C-RX-02+V1/about[/url]

Hello Dusty, I want to ask about object detection using detectnet-camera.py.
Can we extract the name of an object that has been detected? Also, is there a program that lets us re-train the object detection model used by detectnet-camera with PyTorch?

Hi m.billson16, the detectNet.Detection objects that the detectNet.Detect() function returns have a ClassID member for each detection. You can then use the detectNet.GetClassDesc() function to look up the name string of each class, like so:

detections = net.Detect(img, width, height)

for detection in detections:
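    # map the numeric class ID back to its human-readable label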
    className = net.GetClassDesc(detection.ClassID)
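For context, that pattern fits into a minimal standalone script along these lines (a sketch based on the jetson-inference Python API used in this thread; the 'facenet' model name, the 1280x720 resolution, and the /dev/video0 device are illustrative choices on my part, not taken from the posts above):

import jetson.inference
import jetson.utils

# load a built-in detection model ('facenet' is the face detector mentioned earlier)
net = jetson.inference.detectNet("facenet", threshold=0.5)

# open the camera (a V4L2 device path for a USB webcam, as discussed above)
camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")

# grab one frame, run detection, and print each detected class name
img, width, height = camera.CaptureRGBA()
detections = net.Detect(img, width, height)
for detection in detections:
    print(net.GetClassDesc(detection.ClassID))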

There isn't one currently, as the SSD-Mobilenet/Inception models are trained with TensorFlow rather than PyTorch.

If you were to re-train the SSD-Mobilenet-v1/v2 or SSD-Inception-v2 model in TensorFlow, you can convert it to TensorRT format using this tool:
https://github.com/AastaNV/TRT_object_detection

Hello Dusty, thank you so much for the advice. What if we want to add more specific new classes like 'soccer ball' or 'basketball'? How do we train the system so that detectNet can recognize and detect the soccer ball or basketball? Also, is it possible to change the network, for example from pednet to googlenet or another one?