How to Utilize Class ID in Object Detection to Control GPIO Pins?

Hey there, I would like to know:

  1. How can I determine which object is being detected, and how can I use that to drive a GPIO pin high or low?
    (for example: how do I determine the Class ID, how is it coded, and how do I act on it?)

  2. How can I load my retrained SSD-Mobilenet model with this code:

net = jetson.inference.detectNet('ssd-mobilenet-v2', threshold=0.5)


Hi,

1.

The detector returns results for all of the classes.
You can add a custom filter to act only on the class ID you want - see the sketch below.
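
For instance, a minimal sketch of such a filter (the class ID value of 1 is just a placeholder for whichever class you care about):

# net is your detectNet model, img is your image
TARGET_CLASS_ID = 1    # hypothetical class ID to act on

detections = net.Detect(img)
targets = [d for d in detections if d.ClassID == TARGET_CLASS_ID]

if targets:
    print('detected {:d} objects of the target class'.format(len(targets)))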

2.

Please check the following tutorial for the details:

Thanks.

Thank you for your reply.

  1. Could you kindly share some example code, or a reference, showing how I can add a custom filter to ‘do something’ when a particular ClassID is detected?

  2. Also, I have already followed the tutorial and completed it without any issues. However, I am referring to the retrained SSD network.

For example, after I have retrained SSD-Mobilenet with transfer learning to detect 5 classes, how can I load the ONNX file with this code?

net = jetson.inference.detectNet(argv=['--model=models/adas/ssd-mobilenet.onnx', '--labels=models/adas/labels.txt', '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes'])

I have tested it using this code but received the following error:

jetson.inference -- detectNet loading network using argv command line params

detectNet -- loading detection network model from:
          -- prototxt     NULL
          -- model        models/adas/ssd-mobilenet.onnx
          -- input_blob   'input_0'
          -- output_cvg   'scores'
          -- output_bbox  'boxes'
          -- mean_pixel   0.000000
          -- mean_binary  NULL
          -- class_labels models/adas/labels.txt
          -- threshold    0.500000
          -- batch_size   1

[TRT]    TensorRT version 7.1.3
[TRT]    loading NVIDIA plugins...
[TRT]    Registered plugin creator - ::GridAnchor_TRT version 1
[TRT]    Registered plugin creator - ::NMS_TRT version 1
[TRT]    Registered plugin creator - ::Reorg_TRT version 1
[TRT]    Registered plugin creator - ::Region_TRT version 1
[TRT]    Registered plugin creator - ::Clip_TRT version 1
[TRT]    Registered plugin creator - ::LReLU_TRT version 1
[TRT]    Registered plugin creator - ::PriorBox_TRT version 1
[TRT]    Registered plugin creator - ::Normalize_TRT version 1
[TRT]    Registered plugin creator - ::RPROI_TRT version 1
[TRT]    Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT]    Could not register plugin creator -  ::FlattenConcat_TRT version 1
[TRT]    Registered plugin creator - ::CropAndResize version 1
[TRT]    Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT]    Registered plugin creator - ::Proposal version 1
[TRT]    Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT]    Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT]    Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT]    Registered plugin creator - ::Split version 1
[TRT]    Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT]    Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT]    detected model format - ONNX  (extension '.onnx')
[TRT]    desired precision specified for GPU: FASTEST
[TRT]    requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]    native precisions detected for GPU:  FP32, FP16
[TRT]    selecting fastest native precision for GPU:  FP16
[TRT]    attempting to open engine cache file .1.1.7103.GPU.FP16.engine
[TRT]    cache file not found, profiling network model on device GPU

error:  model file 'models/adas/ssd-mobilenet.onnx' was not found.
        if loading a built-in model, maybe it wasn't downloaded before.

        Run the Model Downloader tool again and select it for download:

           $ cd <jetson-inference>/tools
           $ ./download-models.sh

[TRT]    detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load network
Traceback (most recent call last):
  File "/home/irfwas/Desktop/pypro/ForwardCollisionWarning/testRetrainedSSD.py", line 4, in <module>
    net = jetson.inference.detectNet(argv=['--model=models/adas/ssd-mobilenet.onnx', '--labels=models/adas/labels.txt', '--input_blob=input_0', '--output_cvg=scores', '--output_bbox=boxes'])
Exception: jetson.inference -- detectNet failed to load network

Hi @irfwas, it looks like your model cannot be found relative to the directory you are running the script from. When you run your script, is your terminal located in the directory above your models/ folder?

Sure, here is a simple example:

# net is your detectNet model, img is your image
detections = net.Detect(img)

for detection in detections:
    class_name = net.GetClassDesc(detection.ClassID)

    if class_name == "car":
        pass      # do something
    elif class_name == "person":
        pass      # do something else
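
To tie this back to the GPIO part of the question, a minimal sketch using the Jetson.GPIO library that ships with JetPack might look like this (the pin number 12 and the "person" class are placeholders for illustration):

import Jetson.GPIO as GPIO

# net is your detectNet model, img is your image
LED_PIN = 12                        # hypothetical output pin (BOARD numbering)

GPIO.setmode(GPIO.BOARD)            # address pins by physical header number
GPIO.setup(LED_PIN, GPIO.OUT, initial=GPIO.LOW)

detections = net.Detect(img)

# drive the pin high while a "person" detection is present, low otherwise
person_seen = any(net.GetClassDesc(d.ClassID) == "person" for d in detections)
GPIO.output(LED_PIN, GPIO.HIGH if person_seen else GPIO.LOW)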

Hey @dusty_nv, thank you very much for your response as well as for your detailed example code for the Class ID. It is exactly what I needed.

Regarding the error I obtained for the retrained SSD model, what do you mean by “terminal located in the directory above your models/ folder”? My ‘.onnx’ model is located in the models/ directory.

I tried running the same program with the paths changed to the full directory paths of the model and the labels.txt, as shown in the full script below, yet I am still receiving the same error as before.

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet(argv=['--model=home/jetson-inference/python/training/detection/ssd/models/adas/ssd-mobilenet.onnx', '--labels=home/jetson-inference/python/training/detection/ssd/models/adas/labels.txt', '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes'])
camera = jetson.utils.videoSource("csi://0")      
display = jetson.utils.videoOutput("display://0") 

while display.IsStreaming():
	img = camera.Capture()
	detections = net.Detect(img)
	display.Render(img)
	display.SetStatus("Forward Collision Detection {:.0f} FPS".format(net.GetNetworkFPS()))

Try using a ~ instead of home - i.e. ~/jetson-inference/python/training/detection/ssd/models/adas/ssd-mobilenet.onnx

The Ubuntu file browser hides the fact that your home directory is actually /home/$USER/ (~ is a shortcut for that).

When using relative model paths, you want your terminal working directory to be ~/jetson-inference/python/training/detection/ssd. So cd to that directory before running your script.

@dusty_nv

Hi, would this also work for DeepStream’s Python test files (e.g. deepstream_test_1.py)?

detections = net.Detect(img)

for detection in detections:
    class_name = net.GetClassDesc(detection.ClassID)

    if class_name == "car":
        pass      # do something
    elif class_name == "person":
        pass      # do something else

Thanks,

Hi @Subframe, I’m not super familiar with coding the DeepStream Python test files, but it appears that it should be possible - at this line of code, it gets the detected object’s class ID (from the obj_meta.class_id variable):

https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/5cb4cb8be92e079acd07d911d265946580ea81cd/apps/deepstream-test1/deepstream_test_1.py#L84

Hi again @dusty_nv

Ok, what I’m looking for is to use the Transfer Learning Toolkit to train a model on a custom dataset for deployment on the Jetson Nano.

What I’m having trouble with is how to make practical use of the detections. If I run inference and detect a road sign, how can I then play e.g. an .mp3/.wav, or access the GPIO pins to turn on an LED?

I know it’s possible from Coding Your Own Object Detection Program, but what if you have a much larger custom dataset? I also want inference to run as close to real time as possible.

Is TensorRT something that can be of use for solving this?

Maybe I am approaching this the wrong way and TLT is not the way to go?
I’m stuck! :)

Thank you,

The detections from DeepStream can be accessed through the frame/object metadata, like the deepstream_test_1.py script does. For example, at the line I linked to above, you could do something like:

if obj_meta.class_id == 0:    # class 0 is 'Vehicle' in the deepstream_test_1.py example
    pass                      # trigger audio/GPIO here

You could start by adding it right into that script (which hooks into the On-Screen-Display overlay callback); in the future, you could add a custom GStreamer element that just did the GPIO/etc. and was independent of the OSD display.

For further questions about the DeepStream internals, I recommend checking out the DeepStream forums, as I’m not the expert on those.

I would start by making a test program that just triggers the GPIO or plays the audio file, independent of any inferencing. Then once you get the GPIO/audio working, you can integrate it with the DeepStream Python sample.

For GPIO, there is the Jetson.GPIO library that comes with JetPack. And for audio, there are a bunch of Python libraries. I have used the pyaudio library before, although if you just want to play a WAV/MP3, you could probably use the simpler playsound library for Python.
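
As an illustration, a standalone test along these lines (the pin number and filename are placeholders) could confirm the GPIO and audio paths before any inferencing is involved:

import time
import Jetson.GPIO as GPIO
from playsound import playsound

LED_PIN = 12                        # hypothetical pin (BOARD numbering)
GPIO.setmode(GPIO.BOARD)
GPIO.setup(LED_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    GPIO.output(LED_PIN, GPIO.HIGH)   # LED on
    playsound('alert.wav')            # blocks until playback finishes
    time.sleep(1)
    GPIO.output(LED_PIN, GPIO.LOW)    # LED off
finally:
    GPIO.cleanup()                    # release the pin on exit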

The jetson-inference library / Hello AI World uses TensorRT like DeepStream does. You can also train object detection models in the Hello AI World tutorial - you can run the PyTorch training scripts on your PC if your dataset is big. Then copy the exported ONNX to your Jetson.

TLT has additional features like pruning which will make the model even faster though, so if the best performance is your ultimate goal, check out using TLT + DeepStream.

Ok, I’ll try it out! Thanks!

Hi,

Did you get this to work? This is exactly what I’m trying to do.

Could you share your code?

Much appreciated,

Thanks, Chris

Hi,

Uploading a text file.

This is how I did it. I am not a programmer, so I’m guessing there are other ways. :)

I used deepstream_test_1.py

deep (9.4 KB)

My question is how I would go about filtering the incoming stream of data. Let’s say I only want to enable the LED once, and not every time it detects a road sign/car, etc.

//Dennis

Thanks - not a programmer either, but trying to use the Jetson as a way to learn.

Did you have any luck with playing a .wav/.mp3 file?

After this, I want to try to make a cancel button via GPIO which clears all the detections on the screen.

cheers

Best way to learn!

playsound · PyPI.

import playsound

def play():
    playsound.playsound('alert.mp3', True)

play()

I am getting the same error when trying to load my custom model - can someone help? Thanks in advance!

Here’s my program:
import jetson.inference
import jetson.utils
import argparse

net = jetson.inference.detectNet(argv=['--model=~/jetson-inference/python/training/detection/ssd/models/flex/ssd-mobilenet.onnx', '--labels=~/jetson-inference/python/training/detection/ssd/models/flex/labels.txt', '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes'], threshold=0.5)

camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")
display = jetson.utils.glDisplay()

while display.IsOpen():
    img, width, height = camera.CaptureRGBA()
    detections = net.Detect(img, width, height)
    display.RenderOnce(img, width, height)
    display.SetTitle("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))

Here’s the error from the log:
error:  model file '~/jetson-inference/python/training/detection/ssd/models/flex/ssd-mobilenet.onnx' was not found.
        if loading a built-in model, maybe it wasn't downloaded before.

        Run the Model Downloader tool again and select it for download:

           $ cd <jetson-inference>/tools
           $ ./download-models.sh

[TRT]    detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load network
Traceback (most recent call last):
  File "fixture.py", line 5, in <module>
    net = jetson.inference.detectNet(argv=['--model=~/jetson-inference/python/training/detection/ssd/models/flex/ssd-mobilenet.onnx', '--labels=~/jetson-inference/python/training/detection/ssd/models/flex/labels.txt', '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes'], threshold=0.5)
Exception: jetson.inference -- detectNet failed to load network

Hmm, can you try expanding the path to /home/YOUR-USER/ instead of ~/, or using a relative path?

net = jetson.inference.detectNet(argv=['--model=/home/YOUR-USER/jetson-inference/python/training/detection/ssd/models/flex/ssd-mobilenet.onnx', '--labels=/home/YOUR-USER/jetson-inference/python/training/detection/ssd/models/flex/labels.txt', '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes'], threshold=0.5)

If that doesn’t work, please provide the full terminal log.


Hi @dusty_nv, that works! Thanks so much for the help! I was wondering if I was missing something, because when I moved the Python script inside the "jetson-inference/python/training/detection/ssd" folder and changed it to:

net = jetson.inference.detectNet(argv=['--model=models/flex/ssd-mobilenet.onnx', '--labels=models/flex/labels.txt', '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes'], threshold=0.5)

it worked.

OK, cool - yeah, I think the ~ gets expanded by the bash shell - but since that path was passed inside the program, it didn’t get expanded.
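
If you do want to keep a ~ in the path, one option (just a sketch using the Python standard library) is to expand it yourself before passing it to detectNet:

import os
import jetson.inference

# expand ~ to /home/YOUR-USER before detectNet ever sees the path
model = os.path.expanduser('~/jetson-inference/python/training/detection/ssd/models/flex/ssd-mobilenet.onnx')
labels = os.path.expanduser('~/jetson-inference/python/training/detection/ssd/models/flex/labels.txt')

net = jetson.inference.detectNet(argv=['--model=' + model, '--labels=' + labels,
                                       '--input-blob=input_0', '--output-cvg=scores',
                                       '--output-bbox=boxes'], threshold=0.5)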

You can also put your model inside jetson-inference/data/networks, and it should automatically be found (regardless of the terminal working directory or the location of your script) if you reference it as networks/flex/ssd-mobilenet.onnx. This is because jetson-inference/data/networks is searched for models.
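
For example, with the flex model from above copied into jetson-inference/data/networks/flex/, the call could just be (the labels path is assumed to follow the same pattern):

import jetson.inference

# jetson-inference/data/networks is on the model search path,
# so no absolute path or working-directory change is needed
net = jetson.inference.detectNet(argv=['--model=networks/flex/ssd-mobilenet.onnx',
                                       '--labels=networks/flex/labels.txt',
                                       '--input-blob=input_0', '--output-cvg=scores',
                                       '--output-bbox=boxes'], threshold=0.5)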


@dusty_nv, okay, that will make it simpler! Thanks again!