How to Utilize Class ID in Object Detection to Control GPIO Pins?

Hey there, I would like to know:

  1. How can I determine which object is being detected, and how can I use that to drive a GPIO pin high or low?
    (For example: how do I determine the class ID in code, and how do I act on it?)

  2. How may I include my retrained SSD-Mobilenet Model into this code:

net = jetson.inference.detectNet('ssd-mobilenet-v2', threshold=0.5)

Hi,

1.

The detector returns results for all classes.
You can add a custom filter to act only on the class IDs you want.

2.

Please check the following tutorial for the details:

Thanks.

Thank you for your reply.

  1. Could you kindly share an example code or a reference to how I can place a custom filter to ‘do something’ when a particular ClassID is detected?

  2. Also, I have already followed that tutorial and completed it without any problems. However, I am referring to the retrained SSD network.

For example, after I have retrained the SSD Mobilenet with transfer learning to detect 5 classes, how can I implement the ONNX file into this set of code?

net = jetson.inference.detectNet(argv=['--model=models/adas/ssd-mobilenet.onnx', '--labels=models/adas/labels.txt', '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes'])

I have tested it using this code but received the following error:

jetson.inference -- detectNet loading network using argv command line params

detectNet -- loading detection network model from:
          -- prototxt     NULL
          -- model        models/adas/ssd-mobilenet.onnx
          -- input_blob   'input_0'
          -- output_cvg   'scores'
          -- output_bbox  'boxes'
          -- mean_pixel   0.000000
          -- mean_binary  NULL
          -- class_labels models/adas/labels.txt
          -- threshold    0.500000
          -- batch_size   1

[TRT]    TensorRT version 7.1.3
[TRT]    loading NVIDIA plugins...
[TRT]    Registered plugin creator - ::GridAnchor_TRT version 1
[TRT]    Registered plugin creator - ::NMS_TRT version 1
[TRT]    Registered plugin creator - ::Reorg_TRT version 1
[TRT]    Registered plugin creator - ::Region_TRT version 1
[TRT]    Registered plugin creator - ::Clip_TRT version 1
[TRT]    Registered plugin creator - ::LReLU_TRT version 1
[TRT]    Registered plugin creator - ::PriorBox_TRT version 1
[TRT]    Registered plugin creator - ::Normalize_TRT version 1
[TRT]    Registered plugin creator - ::RPROI_TRT version 1
[TRT]    Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT]    Could not register plugin creator -  ::FlattenConcat_TRT version 1
[TRT]    Registered plugin creator - ::CropAndResize version 1
[TRT]    Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT]    Registered plugin creator - ::Proposal version 1
[TRT]    Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT]    Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT]    Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT]    Registered plugin creator - ::Split version 1
[TRT]    Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT]    Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT]    detected model format - ONNX  (extension '.onnx')
[TRT]    desired precision specified for GPU: FASTEST
[TRT]    requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]    native precisions detected for GPU:  FP32, FP16
[TRT]    selecting fastest native precision for GPU:  FP16
[TRT]    attempting to open engine cache file .1.1.7103.GPU.FP16.engine
[TRT]    cache file not found, profiling network model on device GPU

error:  model file 'models/adas/ssd-mobilenet.onnx' was not found.
        if loading a built-in model, maybe it wasn't downloaded before.

        Run the Model Downloader tool again and select it for download:

           $ cd <jetson-inference>/tools
           $ ./download-models.sh

[TRT]    detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load network
Traceback (most recent call last):
  File "/home/irfwas/Desktop/pypro/ForwardCollisionWarning/testRetrainedSSD.py", line 4, in <module>
    net = jetson.inference.detectNet(argv=['--model=models/adas/ssd-mobilenet.onnx', '--labels=models/adas/labels.txt', '--input_blob=input_0', '--output_cvg=scores', '--output_bbox=boxes'])
Exception: jetson.inference -- detectNet failed to load network

Hi @irfwas, it looks like your model cannot be found from the current path that you are running the script from. When you run your script, is your terminal located in the directory above your models/ folder?

Sure, here is a simple example:

# net is your detectNet model, img is your image
detections = net.Detect(img)

for detection in detections:
    class_name = net.GetClassDesc(detection.ClassID)

    if class_name == "car":
        pass  # do something
    elif class_name == "person":
        pass  # do something else

Hey @dusty_nv, thank you very much for your response as well as for your detailed example code for the Class ID. It is exactly what I needed.

Regarding the error I obtained for the retrained SSD model, what do you mean by "terminal located in the directory above your models/ folder"? My '.onnx' model is located in the models/ directory, as shown here:
(screenshot of the models/ directory)

I tried running the same program with the paths changed to the full directory paths of the model and the labels.txt, as shown in the full script below, yet I am still receiving the same error as before.

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet(argv=['--model=home/jetson-inference/python/training/detection/ssd/models/adas/ssd-mobilenet.onnx', '--labels=home/jetson-inference/python/training/detection/ssd/models/adas/labels.txt', '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes'])
camera = jetson.utils.videoSource("csi://0")      
display = jetson.utils.videoOutput("display://0") 

while display.IsStreaming():
	img = camera.Capture()
	detections = net.Detect(img)
	display.Render(img)
	display.SetStatus("Forward Collision Detection {:.0f} FPS".format(net.GetNetworkFPS()))

Try using a ~ instead of home - i.e. ~/jetson-inference/python/training/detection/ssd/models/adas/ssd-mobilenet.onnx

The Ubuntu file browser hides the fact that your home directory is actually /home/$USER/ (~ is a shortcut for that).

When using relative model paths, you want your terminal working directory to be ~/jetson-inference/python/training/detection/ssd. So cd to that directory before running your script.
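As an alternative to relying on the working directory, the script can build absolute paths itself. This is a sketch assuming the model lives under the `~/jetson-inference/.../models/adas` directory mentioned above; `os.path.expanduser` expands the `~` for you.

```python
import os

# Expand ~ so the model path resolves no matter where the script is run from.
model_dir = os.path.expanduser(
    "~/jetson-inference/python/training/detection/ssd/models/adas")
model_path = os.path.join(model_dir, "ssd-mobilenet.onnx")
labels_path = os.path.join(model_dir, "labels.txt")

# The detectNet call would then use the expanded paths:
#
# net = jetson.inference.detectNet(argv=[
#     "--model=" + model_path,
#     "--labels=" + labels_path,
#     "--input-blob=input_0",
#     "--output-cvg=scores",
#     "--output-bbox=boxes",
# ])
```
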