Looking for example code on deploying CNN models that run inference on a live camera feed

I have a frozen model (.pb/.uff) that I would like to deploy onto the Drive PX to run inference on the live camera feed from an AR0231 camera attached to the Drive PX’s A camera group. Unfortunately, I cannot find any documentation on camera inference on the Drive PX and would appreciate any help.

Hi,

Since the Drive PX2 is based on the same Tegra SoC as the TX2, I think GstInference can be of some help to you. It is an open-source project from RidgeRun that provides a framework for integrating CNN models into GStreamer, so you can run inference directly on a GStreamer camera stream.
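As a rough illustration of what that looks like from C, here is a minimal sketch that builds and runs such a pipeline with gst_parse_launch. The GStreamer calls are standard, but the pipeline string itself is an assumption: the camera source (“nvcamerasrc”), the inference element (“tinyyolov2”), and its model-location property depend on your platform and GstInference version, so adjust them to match your setup.

```c
/* Minimal sketch: run a GstInference element on a live camera feed.
 * The pipeline string is an assumption -- the camera source and
 * inference element names depend on your platform and GstInference
 * version; adjust the model path accordingly. */
#include <gst/gst.h>

int main (int argc, char *argv[])
{
  GstElement *pipeline;
  GMainLoop *loop;
  GError *error = NULL;

  gst_init (&argc, &argv);

  pipeline = gst_parse_launch (
      "nvcamerasrc ! videoconvert ! "
      "tinyyolov2 model-location=graph_tinyyolov2.pb ! "
      "videoconvert ! autovideosink", &error);
  if (error != NULL) {
    g_printerr ("Failed to build pipeline: %s\n", error->message);
    g_error_free (error);
    return -1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  loop = g_main_loop_new (NULL, FALSE);
  g_main_loop_run (loop);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  g_main_loop_unref (loop);
  return 0;
}
```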

Right now only a few models (Inception, TinyYOLO, FaceNet, ResNet, and MobileNet) and two backends (TensorFlow and NCSDK) are supported, but adding support for a new model is easy: it only involves implementing your model’s pre-processing and post-processing in the placeholder functions we provide.
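To give an idea of what filling in those placeholders involves, here is a hedged sketch in C. The signatures below are hypothetical (the real prototypes are defined in the GstInference sources); only the two pieces of model-specific logic matter: normalizing the input frame into the tensor layout your model expects, and parsing the raw output tensor into predictions.

```c
/* Hypothetical placeholder signatures -- the real prototypes live in
 * the GstInference sources. Only the normalization and parsing logic
 * is model-specific. */
#include <stddef.h>

/* Pre-process: convert an 8-bit RGB frame into the float tensor the
 * model expects. Many models want pixel values scaled to [-1, 1]. */
static void
my_model_preprocess (const unsigned char *frame, float *tensor,
    size_t width, size_t height, size_t channels)
{
  size_t i;
  size_t size = width * height * channels;

  for (i = 0; i < size; i++) {
    /* Assumed normalization: map [0, 255] to [-1.0, 1.0]. */
    tensor[i] = (frame[i] / 255.0f) * 2.0f - 1.0f;
  }
}

/* Post-process: interpret the raw output tensor. For a classifier
 * this is typically an argmax over the class probabilities. */
static int
my_model_postprocess (const float *output, size_t num_classes,
    float *best_prob)
{
  size_t i;
  int best = 0;

  *best_prob = output[0];
  for (i = 1; i < num_classes; i++) {
    if (output[i] > *best_prob) {
      *best_prob = output[i];
      best = (int) i;
    }
  }
  return best;  /* index of the most probable class */
}
```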

If you want to know more about GstInference, please check our repo and wiki page:
[url]https://github.com/RidgeRun/gst-inference[/url]
[url]https://developer.ridgerun.com/wiki/index.php?title=GstInference[/url]

Hi @miguel.taylor,
Can you share more on how to retrain MobileNet V2 300x300 for detection using DeepStream?
If you have done so, can you let me know your inference speed?

And, how many RTSP cameras can you connect using dGPU for inferencing with DeepStream?

Thanks,

Can you share more on how to retrain MobileNet V2 300x300 for detection using DeepStream?

We haven’t used MobileNet with DeepStream, only with GstInference. But it should be straightforward to use it with DeepStream by training the Caffe implementation and pointing DeepStream at the resulting model files to generate a TensorRT engine.
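As a rough sketch of that route (untested on our side, since we haven’t run MobileNet under DeepStream): you would point an nvinfer configuration at the Caffe prototxt and caffemodel, and DeepStream builds and caches the TensorRT engine on first run. The key names below follow the DeepStream sample configs; the paths and values are placeholders, so verify everything against the nvinfer documentation for your release.

```
# Hypothetical nvinfer config for a Caffe-trained MobileNet V2 SSD.
# Paths and values are placeholders; see the DeepStream sample
# configs for the exact options supported by your release.
[property]
gpu-id=0
net-scale-factor=0.0078431372
model-file=mobilenet_v2_ssd.caffemodel
proto-file=mobilenet_v2_ssd.prototxt
labelfile-path=labels.txt
batch-size=1
network-mode=2
num-detected-classes=91
```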

If you have done so, can you let me know your inference speed?
And, how many RTSP cameras can you connect using dGPU for inferencing with DeepStream?

As I said, we haven’t tested MobileNet with DeepStream.
