Issue with ssd-mobilenet-v2 using jetson-inference on Jetson Orin Nano 8GB

When I run this script:

trinity@trinity-desktop:~/jetson-inference/python/examples$ python detectnet.py "rtsp://admin:admin123@192.168.1.28:554/cam/realmonitor?channel=1&subtype=0" --overlay=box,labels,conf

I get this error:

[cuda] cudaEventElapsedTime(&cuda_time, mEventsGPU[evt], mEventsGPU[evt+1])
[cuda] device not ready (error 600) (hex 0x258)
[cuda] /home/trinity/jetson-inference/build/aarch64/include/jetson-inference/tensorNet.h:769

Can anyone help me? I have also provided the script:
import sys
import argparse
import cv2
import numpy as np
from jetson_inference import detectNet
from jetson_utils import videoSource, videoOutput, Log, cudaAllocMapped, cudaResize
import time

# parse the command line

parser = argparse.ArgumentParser(description="Locate objects in a live camera stream using an object detection DNN.",
                                 formatter_class=argparse.RawTextHelpFormatter,
                                 epilog=detectNet.Usage() + videoSource.Usage() + videoOutput.Usage() + Log.Usage())

parser.add_argument("input", type=str, default="", nargs='?', help="URI of the input stream")
parser.add_argument("output", type=str, default="", nargs='?', help="URI of the output stream")
parser.add_argument("--network", type=str, default="ssd-mobilenet-v2", help="pre-trained model to load (see below for options)")
parser.add_argument("--overlay", type=str, default="box,labels,conf", help="detection overlay flags (e.g. --overlay=box,labels,conf)\nvalid combinations are: 'box', 'labels', 'conf', 'none'")
parser.add_argument("--threshold", type=float, default=0.5, help="minimum detection threshold to use")

try:
    args = parser.parse_known_args()[0]
except:
    print("")
    parser.print_help()
    sys.exit(0)

# create video sources and outputs

input = videoSource(args.input, argv=["--input-codec=h264", "--width=1020", "--height=600"])
output = videoOutput(args.output, argv=sys.argv)

# load the object detection network

net = detectNet(args.network, sys.argv, args.threshold)

# note: to hard-code the paths to load a model, the following API can be used:
#
# net = detectNet(model="/home/trinity/jetson-inference/python/training/detection/ssd/models/custom_object_detection_model/yolov8m.onnx",
#                 labels="/home/trinity/jetson-inference/python/training/detection/ssd/models/custom_object_detection_model/labels.txt",
#                 input_blob="images", output_cvg="output0", output_bbox="output0",
#                 threshold=args.threshold)
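# For comparison, a hypothetical sketch of loading an ssd-mobilenet ONNX model
# exported by train_ssd.py (the format this repo expects); the paths below are
# placeholders, and the blob names input_0/scores/boxes come from the
# jetson-inference docs, so verify them against your own export:
#
# net = detectNet(model="models/my-model/ssd-mobilenet.onnx",
#                 labels="models/my-model/labels.txt",
#                 input_blob="input_0", output_cvg="scores", output_bbox="boxes",
#                 threshold=args.threshold)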

# process frames until EOS or the user exits

while True:
    # capture the next image
    img = input.Capture()

    if img is None:  # timeout
        continue

    print("image", img)
    #img = cv2.resize(np.array(img), (800, 640))

    # detect objects in the image (with overlay)
    print(args.overlay)
    detections = net.Detect(img, img.width, img.height, overlay=args.overlay)

    # print the detections
    print("detected {:d} objects in image".format(len(detections)))

    for detection in detections:
        print("detections", detection)

    # render the image
    output.Render(img)

    # update the title bar
    output.SetStatus("{:s} | Network {:.0f} FPS".format(args.network, net.GetNetworkFPS()))

    # print out performance info
    net.PrintProfilerTimes()

    # exit on input/output EOS
    if not input.IsStreaming() or not output.IsStreaming():
        break

Hi,

This error might come from the environment setup.

Which JetPack version do you use?
Have you checked out the branch for the BSP you use?

For example, L4T-R35.4.1 for JetPack 5?

Thanks.
I am using JetPack 6.0 on a Jetson Orin Nano with 8 GB RAM. Also, how can I check whether I have checked out the branch for the BSP I'm using, and how can I check which BSP I am currently on?
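For reference, one common way to check the L4T/BSP release on a Jetson, assuming a standard JetPack install where this file exists, is:

$ cat /etc/nv_tegra_release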

OK, I have checked with the jtop tool. It shows: Model: NVIDIA Jetson Orin Nano Developer Kit, JetPack 6.0 [L4T 36.3.0]. Is that correct? And please give a solution for the above error.

Hi, any update regarding this error?

Hi,

So which branch do you use?

$ cd /home/trinity/jetson-inference
$ git branch

Thanks.

I am on the master branch.

Any update regarding this issue?

Hi @vinaygouda.ttssl, I'm not entirely sure (it may just be the web formatting), but are you trying to load YOLOv8? For an ONNX model, it expects an ssd-mobilenet-v2 checkpoint from train_ssd.py, exported from PyTorch.

Are you able to run detectnet on some images? python3 detectnet.py 'images/object_*.jpg'

I tried that; it runs, but it does not detect objects in the images.
I ran the command below:

~/jetson-inference/python/examples$ python3 detectnet.py '/home/trinity/jetson-inference/data/images/*.jpg'

Even when I run this, objects are not getting detected, and I also get the same error. Why?

[cuda] cudaEventElapsedTime(&cuda_time, mEventsGPU[evt], mEventsGPU[evt+1])
[cuda] device not ready (error 600) (hex 0x258)
[cuda] /home/trinity/jetson-inference/build/aarch64/include/jetson-inference/tensorNet.h:769

[TRT] detectNet -- number of object classes: 8400
[TRT] detectNet -- maximum bounding boxes: 84
[TRT] loaded 80 class labels
[TRT] didn't load expected number of class descriptions (80 of 8400)

I got output like this, which is not correct. Why does this happen?
I am using a pre-trained YOLOv8 model in ONNX format (I converted the model to ONNX using the yolo CLI).
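For context: a YOLOv8 ONNX export typically has a single output of shape (1, 84, 8400), i.e. 4 box coordinates plus 80 class scores for each of 8400 anchor points, which is presumably why detectNet reports 8400 "classes" and 84 "boxes" when pointed at that tensor. Below is a minimal sketch of decoding such an output outside of jetson-inference, assuming that shape; the function name and threshold are illustrative, not part of the repo:

# Hypothetical sketch (not jetson-inference code): decoding a YOLOv8 ONNX
# output of shape (1, 84, 8400) with NumPy. 84 = 4 box values (cx, cy, w, h)
# + 80 COCO class scores; 8400 = anchor points at 640x640 input. NMS omitted.
import numpy as np

def decode_yolov8(output, conf_threshold=0.5):
    preds = output[0].T                    # (1, 84, 8400) -> (8400, 84)
    boxes = preds[:, :4]                   # cx, cy, w, h per anchor
    scores = preds[:, 4:]                  # (8400, 80) class scores
    class_ids = scores.argmax(axis=1)
    confs = scores.max(axis=1)
    keep = confs > conf_threshold
    cxcywh = boxes[keep]
    xyxy = np.column_stack([cxcywh[:, 0] - cxcywh[:, 2] / 2,   # x1
                            cxcywh[:, 1] - cxcywh[:, 3] / 2,   # y1
                            cxcywh[:, 0] + cxcywh[:, 2] / 2,   # x2
                            cxcywh[:, 1] + cxcywh[:, 3] / 2])  # y2
    return xyxy, class_ids[keep], confs[keep]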

Hi @vinaygouda.ttssl, sorry for the delay - the repo doesn't support YOLOv8 ONNX; the code expects ssd-mobilenet ONNX. You would need to adapt the pre/post-processing in jetson-inference/c/detectNet.cpp for YOLO.

The Ultralytics library supports YOLOv8 with TensorRT:
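A hypothetical sketch of that route, assuming ultralytics and TensorRT are installed on the Jetson; the model file names and RTSP URI are placeholders:

# Hypothetical sketch: export YOLOv8 to a TensorRT engine and run it
# directly with the Ultralytics API.
from ultralytics import YOLO

model = YOLO("yolov8m.pt")           # pre-trained PyTorch checkpoint
model.export(format="engine")        # builds yolov8m.engine via TensorRT

trt_model = YOLO("yolov8m.engine")   # load the TensorRT engine
for result in trt_model("rtsp://<camera-uri>", stream=True):
    print(result.boxes)              # detected boxes for each frame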
