Blur or Block a Class - Object Detection

Hi, I'm not sure how best to phrase my question, but is there a way to blur or make the bounding box of a certain class opaque in detectNet? If so, how? I have been searching the forums and found some posts pointing towards detectNet.cpp and SetClassColor, but I don't know how to use it. Can I have an example of how to change it? For example, my classes are Cat, Dog, and Bird, and I want to make the bounding box for Dog opaque.

Yes, you can use that. You can edit the detectnet sample under jetson-inference/examples/detectnet/detectnet.cpp

detectNet* net = detectNet::Create(cmdLine);
	
// find the class ID of the 'dog' class
int class_id_dog = -1;

for( int n=0; n < net->GetNumClasses(); n++ )
{
   if( strcasecmp(net->GetClassDesc(n), "dog") == 0 )
   {
      class_id_dog = n;
      break;
   }
}

if( class_id_dog < 0 )
{
    printf("couldn't find dog class\n");
    return 0;
}

// set the existing 'dog' color to opaque
float* dog_color = net->GetClassColor(class_id_dog);
net->SetClassColor(class_id_dog, dog_color[0], dog_color[1], dog_color[2], 255.0f);

Hi @dusty_nv, thanks again for the prompt response! Do I need to recompile after the change?

Here's my Python script. I am using a custom model. Let me know if it still works the same as in your example. Thanks!

import jetson.inference
import jetson.utils
import getpass

currentUser = getpass.getuser()

net = jetson.inference.detectNet(argv=['--model=/home/'+currentUser+'/jetson-inference/python/training/detection/ssd/models/AP/ssd-mobilenet.onnx', '--labels=/home/'+currentUser+'/jetson-inference/python/training/detection/ssd/models/AP/labels.txt', '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes', '--threshold=0.70'])

camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")
display = jetson.utils.glDisplay()

while display.IsOpen():
    img, width, height = camera.CaptureRGBA()
    detections = net.Detect(img, width, height)
    display.RenderOnce(img, width, height)
    display.SetTitle("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))

Yes, you need to re-run make && sudo make install in your jetson-inference/build folder

Unfortunately there doesn't appear to be a binding for SetClassColor() / GetClassColor() in the Python version of detectNet. The bindings are here if you wanted to add them:

https://github.com/dusty-nv/jetson-inference/blob/2fb798e3e4895b51ce7315826297cf321f4bd577/python/bindings/PyDetectNet.cpp#L900

Hi @dusty_nv, I'm sorry, to be honest I have no idea how to add them to the bindings. I have no experience in C++ and am still starting to learn Python. If you can guide me I'd greatly appreciate it.

I tried these, though:

#define DOC_SET_CLASS_COLOR "Set the custom class color\n\n" \
				"Parameters:\n" \
				"  op (float) -- desired opacity, between 0.0 and 255.0\n" \
				"  returns: (none)"

PyObject* PyDetectNet_SetClassColor( PyDetectNet_Object* self, PyObject* args )
{
	if( !self || !self->net )
	{
		PyErr_SetString(PyExc_Exception, LOG_PY_INFERENCE "detectNet invalid object instance");
		return NULL;
	}
	
	float opacity = 255.0f;
	int class_id_custom = -1;
	float* custom_color = self->net->GetClassColor(class_id_custom);

	if( !PyArg_ParseTuple(args, "f", &opacity) )
	{
		PyErr_SetString(PyExc_Exception, LOG_PY_INFERENCE "detectNet.SetClassColor() failed to parse arguments");
		return NULL;
	}
		
	self->net->SetClassColor(class_id_custom, custom_color[0], custom_color[1], custom_color[2], opacity);
	Py_RETURN_NONE;
}

then
{ "SetClassColor", (PyCFunction)PyDetectNet_SetClassColor, METH_NOARGS|METH_STATIC, DOC_SET_CLASS_COLOR},

No worries, @Rodimir_V, that looks pretty close! I think you just need an additional argument for the class index, and then a slight modification to the function registration line:

if( !PyArg_ParseTuple(args, "if", &class_id_custom, &opacity) )
{
	PyErr_SetString(PyExc_Exception, LOG_PY_INFERENCE "detectNet.SetClassColor() failed to parse arguments");
	return NULL;
}

then
{ "SetClassColor", (PyCFunction)PyDetectNet_SetClassColor, METH_VARARGS, DOC_SET_CLASS_COLOR},

Remember to re-run make && sudo make install after you make the changes.

@dusty_nv, I’m glad that I was close! Do I need to add the GetClassColor also?

#define DOC_GET_CLASS_COLOR "Return the class color for the given object class.\n\n" \
				"Parameters:\n" \
				"  (int) -- index of the class, between [0, GetClassColor()]\n\n" \
				"Returns:\n" \
				"  (string) -- the text description of the object class"

// GetClassColor
static PyObject* PyDetectNet_GetClassColor( PyDetectNet_Object* self )
{
	if( !self || !self->net )
	{
		PyErr_SetString(PyExc_Exception, LOG_PY_INFERENCE "detectNet invalid object instance");
		return NULL;
	}

	int classIdx = 0;

	if( !PyArg_ParseTuple(args, "i", &classIdx) )
	{
		PyErr_SetString(PyExc_Exception, LOG_PY_INFERENCE "detectNet.GetClassColor() failed to parse arguments");
		return NULL;
	}

	if( classIdx < 0 || classIdx >= self->net->GetClassColor() )
	{
		PyErr_SetString(PyExc_Exception, LOG_PY_INFERENCE "detectNet requested class index is out of bounds");
		return NULL;
	}

	return Py_BuildValue("s", self->net->GetClassColor(classIdx));
}

{ GetClassColor, (PyCFunction)PyDetectNet_GetClassColor, METH_NOARGS, DOC_GET_CLASS_COLOR},
{ SetClassColor, (PyCFunction)PyDetectNet_SetClassColor, METH_VARARGS, DOC_SET_CLASS_COLOR},

I did the modifications last night, but I felt like I just guessed on all of them. This one gives me an error when I run make.

I don't think you actually need the Python binding for GetClassColor() for your use case. But if you wanted it, it would be something like:

static PyObject* PyDetectNet_GetClassColor( PyDetectNet_Object* self, PyObject* args )
{
	if( !self || !self->net )
	{
		PyErr_SetString(PyExc_Exception, LOG_PY_INFERENCE "detectNet invalid object instance");
		return NULL;
	}

	int classIdx = 0;

	if( !PyArg_ParseTuple(args, "i", &classIdx) )
	{
		PyErr_SetString(PyExc_Exception, LOG_PY_INFERENCE "detectNet.GetClassColor() failed to parse arguments");
		return NULL;
	}
		
	if( classIdx < 0 )
	{
		PyErr_SetString(PyExc_Exception, LOG_PY_INFERENCE "detectNet requested class index is out of bounds");
		return NULL;
	}
	
	float* color = self->net->GetClassColor(classIdx);

	PyObject* red   = PyFloat_FromDouble(color[0]);
	PyObject* green = PyFloat_FromDouble(color[1]);
	PyObject* blue  = PyFloat_FromDouble(color[2]);
	PyObject* alpha = PyFloat_FromDouble(color[3]);

	PyObject* tuple = PyTuple_Pack(4, red, green, blue, alpha);

	Py_DECREF(red);
	Py_DECREF(green);
	Py_DECREF(blue);
	Py_DECREF(alpha);

	return tuple;
}
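
From Python that would then give you the color back as an (r, g, b, a) tuple - roughly like this (a quick sketch, assuming the binding gets registered with METH_VARARGS the same way as SetClassColor):

# sketch: reading back a class color via the hypothetical GetClassColor() binding
r, g, b, a = net.GetClassColor(1)    # class index 1
print('class 1 color: ({:.0f}, {:.0f}, {:.0f}), alpha {:.0f}'.format(r, g, b, a))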

Thanks a lot for the help @dusty_nv! I’ll give it a try!

Hi @dusty_nv, since we added those to PyDetectNet.cpp, how do I actually use net.SetClassColor() in my Python script?

After you run make && sudo make install from your jetson-inference/build directory, you should then ‘magically’ be able to call net.SetClassColor(class_idx, alpha)
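
For example, something like this (a quick sketch - the actual class index depends on the order of the classes in your labels.txt, so index 1 here is just a placeholder):

# sketch: the binding only needs the class index and the new alpha,
# since the C++ side keeps the existing RGB and only replaces the opacity
net.SetClassColor(1, 255.0)   # class index 1 -> fully-opaque bounding box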

@dusty_nv, I’m sorry, I’m super slow. Is it something like this?

for detection in detections:
	class_num = detection.ClassID
	class_name = net.GetClassDesc(detection.ClassID)
	print(class_num)

	if class_name == "cat":
		print('not this')
	if class_num == 1:
		net.SetClassColor(class_num, '255.0f')

Hey no problem at all - you actually want to set the colors before the camera loop (like right after you load the network)

net = jetson.inference.detectNet(...)

for i in range(net.GetNumClasses()):
   class_name = net.GetClassDesc(i)

   if class_name.lower() == 'dog':
      net.SetClassColor(i, 255)

Hi @dusty_nv, like this, correct? I'm getting Segmentation fault (core dumped).

import jetson.inference
import jetson.utils
import getpass

currentUser = getpass.getuser()

net = jetson.inference.detectNet(argv=['--model=/home/'+currentUser+'/jetson-inference/python/training/detection/ssd/models/AP/ssd-mobilenet.onnx', '--labels=/home/'+currentUser+'/jetson-inference/python/training/detection/ssd/models/AP/labels.txt', '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes', '--threshold=0.70'])

for i in range(net.GetNumClasses()):
    class_name = net.GetClassDesc(i)

    if class_name.lower() == 'dog':
        net.SetClassColor(i, 255)

camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")
display = jetson.utils.glDisplay()

while display.IsOpen():
    img, width, height = camera.CaptureRGBA()
    detections = net.Detect(img, width, height)
    display.RenderOnce(img, width, height)
    display.SetTitle("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))

error:

detectNet -- loading detection network model from:
-- prototxt NULL
-- model /home/rodimirv/jetson-inference/python/training/detection/ssd/models/AP/ssd-mobilenet.onnx
-- input_blob 'input_0'
-- output_cvg 'scores'
-- output_bbox 'boxes'
-- mean_pixel 0.000000
-- mean_binary NULL
-- class_labels /home/rodimirv/jetson-inference/python/training/detection/ssd/models/AP/labels.txt
-- threshold 0.700000
-- batch_size 1

[TRT] TensorRT version 7.1.3
[TRT] loading NVIDIA plugins…
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] detected model format - ONNX (extension '.onnx')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file /home/rodimirv/jetson-inference/python/training/detection/ssd/models/AP/ssd-mobilenet.onnx.1.1.7103.GPU.FP16.engine
[TRT] loading network plan from engine cache… /home/rodimirv/jetson-inference/python/training/detection/ssd/models/AP/ssd-mobilenet.onnx.1.1.7103.GPU.FP16.engine
[TRT] device GPU, loaded /home/rodimirv/jetson-inference/python/training/detection/ssd/models/AP/ssd-mobilenet.onnx
[TRT] Deserialize required 3201733 microseconds.
[TRT]
[TRT] CUDA engine context initialized on device GPU:
[TRT] -- layers 104
[TRT] -- maxBatchSize 1
[TRT] -- workspace 0
[TRT] -- deviceMemory 19780608
[TRT] -- bindings 3
[TRT] binding 0
-- index 0
-- name 'input_0'
-- type FP32
-- in/out INPUT
-- # dims 4
-- dim #0 1 (SPATIAL)
-- dim #1 3 (SPATIAL)
-- dim #2 300 (SPATIAL)
-- dim #3 300 (SPATIAL)
[TRT] binding 1
-- index 1
-- name 'scores'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1 (SPATIAL)
-- dim #1 3000 (SPATIAL)
-- dim #2 3 (SPATIAL)
[TRT] binding 2
-- index 2
-- name 'boxes'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1 (SPATIAL)
-- dim #1 3000 (SPATIAL)
-- dim #2 4 (SPATIAL)
[TRT]
[TRT] binding to input 0 input_0 binding index: 0
[TRT] binding to input 0 input_0 dims (b=1 c=3 h=300 w=300) size=1080000
[TRT] binding to output 0 scores binding index: 1
[TRT] binding to output 0 scores dims (b=1 c=3000 h=3 w=1) size=36000
[TRT] binding to output 1 boxes binding index: 2
[TRT] binding to output 1 boxes dims (b=1 c=3000 h=4 w=1) size=48000
[TRT]
[TRT] device GPU, /home/rodimirv/jetson-inference/python/training/detection/ssd/models/AP/ssd-mobilenet.onnx initialized.
[TRT] detectNet -- number object classes: 3
[TRT] detectNet -- maximum bounding boxes: 3000
[TRT] detectNet -- loaded 3 class info entries
[TRT] detectNet -- number of object classes: 3
Segmentation fault (core dumped)

Ah, I think I missed something in your SetClassColor() binding - can you change it to:

PyObject* PyDetectNet_SetClassColor( PyDetectNet_Object* self, PyObject* args )
{
	if( !self || !self->net )
	{
		PyErr_SetString(PyExc_Exception, LOG_PY_INFERENCE "detectNet invalid object instance");
		return NULL;
	}
	
	float opacity = 255.0f;
	int class_id_custom = -1;
        
	if( !PyArg_ParseTuple(args, "if", &class_id_custom, &opacity) )
	{
		PyErr_SetString(PyExc_Exception, LOG_PY_INFERENCE "detectNet.SetClassColor() failed to parse arguments");
		return NULL;
	}

	if( class_id_custom < 0 )
	{
		PyErr_SetString(PyExc_Exception, LOG_PY_INFERENCE "detectNet requested class index is out of bounds");
		return NULL;
	}
		
	float* custom_color = self->net->GetClassColor(class_id_custom);
	self->net->SetClassColor(class_id_custom, custom_color[0], custom_color[1], custom_color[2], opacity);
	Py_RETURN_NONE;
}

The self->net->GetClassColor(class_id_custom) call was happening before class_id_custom was parsed from the arguments, so it needed to be moved lower in the code. Also remember to re-run make && sudo make install after you change it.


Hi @dusty_nv, amazing! That works! Thanks for solving this within the day! Great support, dusty!

Hi Rodimir, no problem, glad you got it working!