Object detection with jetson-inference can't start up

Hello, I need some help and don't know where to look. I have searched and tested a lot over the last few days, but found nothing. I am new to this.
About my project:
Jetson Nano 2 GB Developer Kit, Raspberry Pi Camera V2. I would like to detect objects and where they are, as in the demo from jetson-inference on GitHub.
About my problem: I can run the object detection demo from jetson-inference without problems, and I followed all the steps there. But when I try to run some code from Toptechboy.com (AI on the Jetson Nano, Lesson 53), it gives errors and no video ever starts. I will post the code here and a picture of the error:

I would be very happy for your advice or a solution. Thank you very much!

import jetson.inference
import jetson.utils
import time
import cv2
import numpy as np

timeStamp=time.time()
fpsFilt=0
net=jetson.inference.detectNet('SSD-Mobilenet-v2',threshold=.5)
dispW=1280
dispH=720
flip=2
font=cv2.FONT_HERSHEY_SIMPLEX

# GStreamer pipeline for improved Raspberry Pi camera quality
#camSet='nvarguscamerasrc wbmode=3 tnr-mode=2 tnr-strength=1 ee-mode=2 ee-strength=1 ! video/x-raw(memory:NVMM), width=3264, height=2464, format=NV12, framerate=21/1 ! nvvidconv flip-method='+str(flip)+' ! video/x-raw, width='+str(dispW)+', height='+str(dispH)+', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! videobalance contrast=1.5 brightness=-.2 saturation=1.2 ! appsink'
cam=cv2.VideoCapture("csi://")
#cam=jetson.utils.gstCamera(dispW,dispH,'0')

#cam=cv2.VideoCapture('/dev/video1')   # capture from a USB camera
cam.set(cv2.CAP_PROP_FRAME_WIDTH, dispW)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, dispH)

#cam=jetson.utils.gstCamera(dispW,dispH,'/dev/video1')
#display=jetson.utils.glDisplay()
#while display.IsOpen():
while True:
    #img, width, height= cam.CaptureRGBA()
    _,img = cam.read()
    height=img.shape[0]
    width=img.shape[1]

    frame=cv2.cvtColor(img,cv2.COLOR_BGR2RGBA).astype(np.float32)
    frame=jetson.utils.cudaFromNumpy(frame)

    detections=net.Detect(frame, width, height)
    for detect in detections:    # d=detect in Murtaza's code
        #print(detect)
        ID=detect.ClassID
        top=int(detect.Top)
        left=int(detect.Left)
        bottom=int(detect.Bottom)
        right=int(detect.Right)
        item=net.GetClassDesc(ID)  # item is the class name of the detected object
        #print(item,top,left,bottom,right)
        cx,cy=int(detect.Center[0]),int(detect.Center[1])  # from Murtaza's code: coordinates of the object's center
        cv2.circle(img, (cx,cy),5 ,(0,255,0),cv2.FILLED)   # from Murtaza's code

        cv2.rectangle(img, (left, top), (right, bottom), (0, 255, 0), 2)
        cv2.putText(img, item, (left, top + 15), font, 0.75, (0, 0, 255), 2)
    #display.RenderOnce(img,width,height)
    dt=time.time()-timeStamp
    timeStamp=time.time()
    fps=1/dt
    fpsFilt=.9*fpsFilt + .1*fps   # low-pass filter to smooth the FPS readout
    #print(str(round(fps,1))+' fps')
    cv2.putText(img,str(round(fpsFilt,1))+' fps',(0,30),font,1,(0,0,255),2)   # draw FPS on the stream
    cv2.imshow('detCam',img)
    cv2.moveWindow('detCam',0,0)
    if cv2.waitKey(1)==ord('q'):
        break
cam.release()
cv2.destroyAllWindows()

Hi @user50863, can you provide the error info and console output from when you run this code? Thanks.

Hello, thank you very much! I made some changes and commented out some lines. Now I get video for about the first second, but then it freezes and this error appears (in the picture).

Is this enough, or do you need all the output from the terminal?

The error is related to the camera capture, I think because you don’t pass the GStreamer pipeline with nvarguscamerasrc into cv2.VideoCapture() – instead that is commented out.

See this cv2.VideoCapture() example from JetsonHacks – https://github.com/JetsonHacksNano/CSI-Camera/blob/master/simple_camera.py
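
For reference, a minimal sketch in the style of that JetsonHacks example (the capture/display sizes, frame rate, and flip method below are assumptions to adjust for your setup):

import cv2

def gstreamer_pipeline(capture_width=1280, capture_height=720,
                       display_width=1280, display_height=720,
                       framerate=60, flip_method=2):
    # Build an nvarguscamerasrc pipeline that converts the NVMM NV12 frames
    # to BGR on the CPU so OpenCV can consume them from the appsink.
    return (
        'nvarguscamerasrc ! '
        'video/x-raw(memory:NVMM), width=%d, height=%d, format=NV12, framerate=%d/1 ! '
        'nvvidconv flip-method=%d ! '
        'video/x-raw, width=%d, height=%d, format=BGRx ! '
        'videoconvert ! video/x-raw, format=BGR ! appsink'
        % (capture_width, capture_height, framerate,
           flip_method, display_width, display_height)
    )

# cv2.CAP_GSTREAMER tells OpenCV to treat the string as a GStreamer
# pipeline rather than a file path or URL.
cam = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)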

If you still have issues, my recommendation is to start with the code from detectnet.py and use that for video capture.
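
The capture/render loop in detectnet.py is roughly this (simplified sketch; it keeps frames in CUDA memory end to end instead of round-tripping through NumPy and OpenCV):

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet('ssd-mobilenet-v2', threshold=0.5)
camera = jetson.utils.videoSource('csi://0')       # MIPI CSI camera
display = jetson.utils.videoOutput('display://0')  # OpenGL window

while display.IsStreaming():
    img = camera.Capture()        # image stays in CUDA memory
    detections = net.Detect(img)  # boxes/labels are overlaid by default
    display.Render(img)
    display.SetStatus('detectNet | {:.0f} FPS'.format(net.GetNetworkFPS()))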

Thank you very much for your answer! :) It works now, but sometimes it just stops and I need to reboot the system. Any suggestions?

Could I ask you one more question: I would like the board to run a script automatically after power-on, because I want to put it in a robot. How can I do that? Do you know an article with the full steps? I searched, but there isn't much information, and what I found isn't clear to me.

I’m not familiar with the modifications made to the code, so I’m not exactly sure, sorry about that. Is there any error printed out?

nanorobo@nanorobo-desktop:~/Desktop/Python_projects$ /usr/bin/python3 "/home/nanorobo/Desktop/Python_projects/object detection/object_detection_Toptechboy.py"
jetson.inference -- detectNet loading build-in network 'ssd-mobilenet-v2'

detectNet -- loading detection network model from:
-- model networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
-- input_blob 'Input'
-- output_blob 'NMS'
-- output_count 'NMS_1'
-- class_labels networks/SSD-Mobilenet-v2/ssd_coco_labels.txt
-- threshold 0.500000
-- batch_size 1

[TRT] TensorRT version 8.0.1
[TRT] loading NVIDIA plugins...
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::ScatterND version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] detected model format - UFF (extension '.uff')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] [MemUsageChange] Init CUDA: CPU +203, GPU -23, now: CPU 227, GPU 1925 (MiB)
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.8001.GPU.FP16.engine
[TRT] loading network plan from engine cache... /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.8001.GPU.FP16.engine
[TRT] device GPU, loaded /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 266, GPU 1947 (MiB)
[TRT] Loaded engine size: 38 MB
[TRT] [MemUsageSnapshot] deserializeCudaEngine begin: CPU 266 MiB, GPU 1947 MiB
[TRT] Using cublas a tactic source
[TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +158, GPU -25, now: CPU 442, GPU 1906 (MiB)
[TRT] Using cuDNN as a tactic source
[TRT] [MemUsageChange] Init cuDNN: CPU +240, GPU +37, now: CPU 682, GPU 1943 (MiB)
[TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 682, GPU 1933 (MiB)
[TRT] Deserialization required 106581902 microseconds.
[TRT] [MemUsageSnapshot] deserializeCudaEngine end: CPU 682 MiB, GPU 1927 MiB
[TRT] [MemUsageSnapshot] ExecutionContext creation begin: CPU 682 MiB, GPU 1933 MiB
[TRT] Using cublas a tactic source
[TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +1, GPU +1, now: CPU 683, GPU 1934 (MiB)
[TRT] Using cuDNN as a tactic source
[TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 683, GPU 1934 (MiB)
[TRT] Total per-runner device memory is 27449856
[TRT] Total per-runner host memory is 132800
[TRT] Allocated activation device memory of size 14261248
[TRT] [MemUsageSnapshot] ExecutionContext creation end: CPU 685 MiB, GPU 1919 MiB
[TRT]
[TRT] CUDA engine context initialized on device GPU:
[TRT] -- layers 123
[TRT] -- maxBatchSize 1
[TRT] -- deviceMemory 14261248
[TRT] -- bindings 3
[TRT] binding 0
-- index 0
-- name 'Input'
-- type FP32
-- in/out INPUT
-- # dims 3
-- dim #0 3
-- dim #1 300
-- dim #2 300
[TRT] binding 1
-- index 1
-- name 'NMS'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1
-- dim #1 100
-- dim #2 7
[TRT] binding 2
-- index 2
-- name 'NMS_1'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1
-- dim #1 1
-- dim #2 1
[TRT]
[TRT] binding to input 0 Input binding index: 0
[TRT] binding to input 0 Input dims (b=1 c=3 h=300 w=300) size=1080000
[TRT] binding to output 0 NMS binding index: 1
[TRT] binding to output 0 NMS dims (b=1 c=1 h=100 w=7) size=2800
[TRT] binding to output 1 NMS_1 binding index: 2
[TRT] binding to output 1 NMS_1 dims (b=1 c=1 h=1 w=1) size=4
[TRT]
[TRT] device GPU, /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff initialized.
[TRT] W = 7 H = 100 C = 1
[TRT] detectNet -- maximum bounding boxes: 100
[TRT] detectNet -- loaded 91 class info entries
[TRT] detectNet -- number of object classes: 91
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 0
Output Stream W = 3264 H = 2464
seconds to Run = 0
Frame Rate = 21.000000
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
CONSUMER: Done Success
Traceback (most recent call last):
File "/home/nanorobo/Desktop/Python_projects/object detection/object_detection_Toptechboy.py", line 38, in <module>
height=img.shape[0]
AttributeError: 'NoneType' object has no attribute 'shape'
GST_ARGUS: Cleaning up
Terminated
nanorobo@nanorobo-desktop:~/Desktop/Python_projects$ ^C
nanorobo@nanorobo-desktop:~/Desktop/Python_projects$

import jetson.inference
import jetson.utils
import time
import cv2
import numpy as np

timeStamp=time.time()
fpsFilt=0
net=jetson.inference.detectNet('ssd-mobilenet-v2',threshold=.5)
dispW=640
dispH=360
flip=2
font=cv2.FONT_HERSHEY_SIMPLEX

# GStreamer pipeline for improved Raspberry Pi camera quality
camSet='nvarguscamerasrc ! video/x-raw(memory:NVMM), width=3264, height=2464, format=NV12, framerate=21/1 ! nvvidconv flip-method='+str(flip)+' ! video/x-raw, width='+str(dispW)+', height='+str(dispH)+', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink'

#camSet='nvarguscamerasrc wbmode=3 tnr-mode=2 tnr-strength=1 ee-mode=2 ee-strength=1 ! video/x-raw(memory:NVMM), width=3264, height=2464, format=NV12, framerate=21/1 ! nvvidconv flip-method='+str(flip)+' ! video/x-raw, width='+str(dispW)+', height='+str(dispH)+', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! videobalance contrast=1.5 brightness=-.2 saturation=1.2 ! appsink'
#cam=cv2.VideoCapture(camSet)
cam=cv2.VideoCapture(camSet,cv2.CAP_GSTREAMER)
#cam=jetson.utils.gstCamera(dispW,dispH,'csi://0')

#cam=cv2.VideoCapture('csi://0')
#cam=cv2.VideoCapture('/dev/video0')   # capture from a USB camera
#cam.set(cv2.CAP_PROP_FRAME_WIDTH, dispW)
#cam.set(cv2.CAP_PROP_FRAME_HEIGHT, dispH)

#cam=jetson.utils.gstCamera(dispW,dispH,camSet)
#cam=jetson.utils.gstCamera(dispW,dispH,'/dev/video0')
#display=jetson.utils.glDisplay()
#while display.IsOpen():
while True:
    #img, width, height= cam.CaptureRGBA()
    _,img = cam.read()
    height=img.shape[0]
    width=img.shape[1]

    #height=dispH
    #width=dispW

    frame=cv2.cvtColor(img,cv2.COLOR_BGR2RGBA).astype(np.float32)
    frame=jetson.utils.cudaFromNumpy(frame)

    detections=net.Detect(frame, width, height)
    for detect in detections:    # d=detect in Murtaza's code
        #print(detect)
        ID=detect.ClassID
        top=int(detect.Top)
        left=int(detect.Left)
        bottom=int(detect.Bottom)
        right=int(detect.Right)
        item=net.GetClassDesc(ID)  # item is the class name of the detected object
        #print(item,top,left,bottom,right)
        cx,cy=int(detect.Center[0]),int(detect.Center[1])  # from Murtaza's code: coordinates of the object's center
        cv2.circle(img, (cx,cy),5 ,(0,255,0),cv2.FILLED)   # from Murtaza's code

        cv2.rectangle(img, (left, top), (right, bottom), (0, 255, 0), 2)
        cv2.putText(img, item, (left, top + 15), font, 0.75, (0, 0, 255), 2)
    #display.RenderOnce(img,width,height)
    dt=time.time()-timeStamp
    timeStamp=time.time()
    fps=1/dt
    fpsFilt=.9*fpsFilt + .1*fps   # low-pass filter to smooth the FPS readout
    #print(str(round(fps,1))+' fps')
    cv2.putText(img,str(round(fpsFilt,1))+' fps',(0,30),font,1,(0,0,255),2)   # draw FPS on the stream
    cv2.imshow('detCam',img)
    cv2.moveWindow('detCam',0,0)
    if cv2.waitKey(1)==ord('q'):
        break

cam.release()
cv2.destroyAllWindows()

Thank you very much! I will read the articles. I have posted the code and the error above.

It seems the camera skipped a frame or was unable to capture. I would recommend checking if img is None and skipping that frame if it is.

Where do I have to put this if statement?

After the frame is captured and before it is actually used. Sorry, I am unable to edit the code for you. I recommend going back to detectnet.py if you are having trouble.
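
For illustration, the guard would look something like this at the top of the while loop, right after the frame is grabbed:

    ret, img = cam.read()
    if not ret or img is None:   # capture failed or a frame was dropped
        continue                 # skip this iteration instead of crashing
    height = img.shape[0]
    width = img.shape[1]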

Ok, I will try. Thank you very much for the advice! It is a big help for me!
Could I ask you again about the automatic run, if you know the system well? Is it really as simple as creating that file in that directory (with those lines of code) and done, or do I have to run some other commands? I am afraid that something will break and I couldn't get the board back to its normal state, or even couldn't log in to the board anymore. How could I stop the process if I need to, or if I just want to play with the board again?

Aside from using persistent docker containers (which automatically restart), I haven’t done custom start-up services myself (sorry about that), so my best suggestion is to try the ways that are highlighted in this post:

https://forums.developer.nvidia.com/t/how-to-make-application-run-automatically-when-power-on/178789
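
For what it's worth, the usual approach on Ubuntu-based JetPack is a systemd service. A hypothetical unit (the service name and script path below are placeholders to adjust) might look like this; it is also easy to undo, since the service runs in the background, does not block login, and sudo systemctl disable robot-detect.service stops it from starting at boot:

# /etc/systemd/system/robot-detect.service  (hypothetical name and paths)
[Unit]
Description=Run the object detection script at boot
After=graphical.target

[Service]
User=nanorobo
Environment=DISPLAY=:0
ExecStart=/usr/bin/python3 /home/nanorobo/Desktop/Python_projects/robot.py
Restart=on-failure

[Install]
WantedBy=graphical.target

After creating the file, run sudo systemctl daemon-reload and sudo systemctl enable robot-detect.service, and check its output with journalctl -u robot-detect.service. Note that a script which opens an on-screen window (cv2.imshow) needs the desktop session running and may require extra X session setup; a headless robot script avoids that.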

Hello, thank you again. I followed the steps and made the file in that directory, but nothing happens when I power up the board. I checked that the path is correct, and it is. Any suggestions?

Hi @user50863, since this isn’t directly related to jetson-inference, I would recommend opening a new topic about creating start-up services. You may want to test that it’s working with a dummy Python script first that just prints out text or something, so that you can confirm that the start-up service is working.
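
For example, a trivial test script (hypothetical path) that appends a timestamp to a file, so you can tell whether the service actually ran at boot even without watching the console:

# /home/nanorobo/startup_test.py  (hypothetical sanity-check script)
import time

with open('/tmp/startup_test.log', 'a') as f:
    f.write('startup service ran at ' + time.ctime() + '\n')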

Ok, Thank you very much for your answers! :)
