Building a Real-time Redaction App Using NVIDIA DeepStream, Part 1: Training

Originally published at:

Some of the biggest challenges in deploying an AI-based application are the accuracy of the model and being able to extract insights in real time. There’s a trade-off between accuracy and inference throughput. Making the model more accurate makes the model larger which reduces the inference throughput. This post series addresses both challenges. In part…

There is a typo:
docker run -it --gpus all --rm --ipc=host -v $DATA_DIR:/data -v $WORKING_DIR:/src -w /src

The correct container registry path uses nvidia, not nvidian:
docker run -it --gpus all --rm --ipc=host -v $DATA_DIR:/data -v $WORKING_DIR:/src -w /src

The typo has been fixed on the blog.
Thank you Gary for pointing that out.


I am trying to train as documented in this blog, but I am getting an error due to the optimizer modification you suggested:
"optimizer = Adam(model.parameters(), lr=lr, weight_decay=0.0004, amsgrad=True)"

Here is the error:
optimizer = Adam(model.parameters(), lr=lr, weight_decay=0.0004, amsgrad=True)
NameError: name 'Adam' is not defined

To fix this, the blog should document replacing:
from torch.optim import SGD
with:
from torch.optim import Adam
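For anyone hitting the same NameError, a minimal sketch of the corrected setup is below. The `torch.nn.Linear` model is a hypothetical placeholder standing in for the blog's network; the `lr` value is also illustrative, only the import and optimizer line match the fix above.

```python
import torch
from torch.optim import Adam  # replaces: from torch.optim import SGD

# Placeholder model for illustration; the blog trains a larger detection network.
model = torch.nn.Linear(10, 2)
lr = 1e-3  # illustrative learning rate

# With Adam imported, this line no longer raises NameError.
optimizer = Adam(model.parameters(), lr=lr, weight_decay=0.0004, amsgrad=True)
print(type(optimizer).__name__)
```

Since `Adam` is referenced by name in the training script, it must be imported by name; importing only `SGD` leaves `Adam` undefined at the call site.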


Thank you, Florin. Will get that fixed.

Can the model be open sourced?
Many of us don't have compute to train on Open Images.

Unfortunately, we can't open source the model due to licensing restrictions on the images. However, you could consider training your own model using a GPU instance from any major cloud service provider.