I have trained an object detection model with only one label, 'plant', for a project I am currently working on.
I need a live feed from the camera to detect plants; later I will use the plant detections to navigate in a garden.
However, when I use the code to turn on my camera and detect objects, I am also getting detections like 'person', 'box', etc., which I don't need and which could be an obstacle for navigation. The way I trained the model, those classes should not appear in the output, yet they all show up together. This doesn't happen when I run detectnet from the command prompt; it only happens when I use Python. The code and all the screenshots are posted below.
Just to be clear, I want to know how I can use the model in Python with the video source /dev/video0.
Thank you in advance. The forum has been really helpful to me.
The image was taken from the recorded video using the command line: detectnet --model=models/model0110/ssd-mobilenet.onnx --label=models/model0110/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes /home/nano/garden/test.mp4
Code that I am using:
import jetson.inference
import jetson.utils
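For context, here is a minimal sketch of the kind of script I am trying to run, using the same model paths as the command line above (the display://0 output and the 0.5 threshold are just taken from the default jetson-inference examples, not from my exact script):

import jetson.inference
import jetson.utils

# load the custom single-class SSD-Mobilenet model exported to ONNX
net = jetson.inference.detectNet(argv=[
    "--model=models/model0110/ssd-mobilenet.onnx",
    "--labels=models/model0110/labels.txt",
    "--input-blob=input_0",
    "--output-cvg=scores",
    "--output-bbox=boxes"],
    threshold=0.5)

camera = jetson.utils.videoSource("/dev/video0")    # V4L2 USB camera
display = jetson.utils.videoOutput("display://0")   # render window

while display.IsStreaming():
    img = camera.Capture()          # grab the next frame
    detections = net.Detect(img)    # run inference and overlay the boxes
    display.Render(img)
    display.SetStatus("plant detection | {:.0f} FPS".format(net.GetNetworkFPS()))

If detectNet() is instead constructed with a built-in network name and without the --model/--labels arguments, it loads the pretrained 91-class COCO model, which would explain detections like 'person' showing up in Python but not on the command line.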
@dusty_nv I followed your video, but labeled the data with labelImg. This is the output I get if I run the video as my videoSource in Python. The code is also yours. Please let me know what I am doing wrong.
Great, the existing code worked. For the first one I got an error:
Traceback (most recent call last):
  File "testcam.py", line 4, in <module>
    net = detectNet(model="model/ssd-mobilenet.onnx", labels="model/labels.txt",
NameError: name 'detectNet' is not defined
Thanks a lot. It means a lot to me when you guys help me out.
No problem, glad you got it working! You probably need to change it to jetson.inference.detectNet() – but unless you cloned/installed jetson-inference master within the past couple months (IIRC), that newer way may not work.
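For anyone hitting the same NameError, here is a quick comparison of the two calling styles (the model paths are just the ones from this thread):

# legacy module layout -- works on older jetson-inference installs
import jetson.inference
net = jetson.inference.detectNet(argv=["--model=model/ssd-mobilenet.onnx",
                                       "--labels=model/labels.txt",
                                       "--input-blob=input_0",
                                       "--output-cvg=scores",
                                       "--output-bbox=boxes"])

# newer builds of jetson-inference (recent master) also expose flat imports
# and keyword arguments, which is what the snippet in the error was using
from jetson_inference import detectNet
net = detectNet(model="model/ssd-mobilenet.onnx", labels="model/labels.txt",
                input_blob="input_0", output_cvg="scores", output_bbox="boxes")

The keyword-argument form only works if your install is recent enough to include it, which matches the note above about needing a recent clone of master.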