Q&A for the webinar “Create Intelligent Places Using NVIDIA Pre-trained Vision Models and DeepStream SDK”

Thank you for attending last week’s webinar, “Create Intelligent Places Using NVIDIA Pre-trained Vision Models and DeepStream SDK”, presented by Monika Jhuria. We hope you found it informative. We received a lot of great questions at the end and weren’t able to respond to all of them. We have consolidated the follow-up questions and are responding to the DeepStream-related topics in the following post.

  1. Could you explain more about the config file and its settings in Docker? I have tried to run the example with a connection to the cloud on a Jetson TX2, but without success.

Please post your configuration settings on the DeepStream developer forums so we can help debug the issue.

  2. Does DeepStream 5.0 have any dependency on TensorFlow versions?

DeepStream doesn’t have any dependency on TensorFlow versions. Please check the dependency matrix here: https://developer.nvidia.com/deepstream-getting-started

  3. Do you provide Python samples for the code from this webinar?

The webinar code is in C/C++. You can create similar applications in Python as well.

  4. Can the analytics plugin information currently be accessed in Python?

Yes.
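
For illustration, here is a minimal pad-probe sketch modeled on the deepstream-nvdsanalytics sample from the deepstream_python_apps repository. The pipeline setup is not shown, and the `nvanalytics` element variable in the final comment is an assumed name from your own application:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

import pyds


def analytics_probe(pad, info, u_data):
    """Read per-frame nvdsanalytics metadata (ROI counts, line-crossing counts)."""
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # Frame-level analytics results are attached as user metadata.
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.nvds_get_user_meta_type(
                "NVIDIA.DSANALYTICSFRAME.USER_META"
            ):
                analytics = pyds.NvDsAnalyticsFrameMeta.cast(user_meta.user_meta_data)
                print("Objects in ROIs:", analytics.objInROIcnt)
                print("Cumulative line-crossing counts:", analytics.objLCCumCnt)
            try:
                l_user = l_user.next
            except StopIteration:
                break

        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK


# The probe would be registered on the nvdsanalytics element's source pad, e.g.:
# nvanalytics.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, analytics_probe, 0)
```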

  5. Is this demo running on NVIDIA Jetson Xavier hardware?

The demo is running on an NVIDIA Jetson Xavier NX.

  6. Is there a tutorial for beginners on how to get one of these models and use DeepStream to run them?

Please check out the DLI course on DeepStream: Courses – NVIDIA

  7. What is the recommended way to access the video generated by “smart record”?

Please check our developer blog (https://developer.nvidia.com/blog/building-iva-apps-using-deepstream-5-0-updated-for-ga/) and documentation (NVIDIA DeepStream SDK Developer Guide — DeepStream 6.1.1 Release documentation) for more information on smart record.
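
As a rough sketch, smart record is configured per source in the deepstream-app configuration, and the recorded clips land as files in the directory you specify there. The values below are illustrative; please verify the exact key names against the documentation for your DeepStream version:

```
[source0]
enable=1
# type=4 selects an RTSP source, which smart record requires
type=4
uri=rtsp://<camera-address>
# Enable smart record, triggered by cloud-to-device messages
smart-record=1
# Recorded clips are written to this directory with the given file prefix
smart-rec-dir-path=/home/nvidia/recordings
smart-rec-file-prefix=cam0
# 0 = MP4 container
smart-rec-container=0
# Clip length in seconds when no explicit duration is sent
smart-rec-default-duration=10
```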

  8. Is it possible to use TensorFlow models in DeepStream using the ONNX file format (converted using tf2onnx)?

Yes. You can also use the TensorFlow model directly in DeepStream with the Triton Inference Server.
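
For example, an ONNX model exported with tf2onnx can be referenced directly from a Gst-nvinfer configuration file; TensorRT builds an engine from it on first run. A minimal sketch, where the file names, class count, and precision are illustrative:

```
[property]
gpu-id=0
# ONNX model converted with tf2onnx
onnx-file=model.onnx
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=4
gie-unique-id=1
# Depending on the model's output layers, a custom output parser may be required
```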

  9. Can I write an object counting plugin for multiple selected ROIs in the input frame?

You can add multiple ROIs per stream. Modify the analytics configuration file to add the ROI labels. Check our documentation: https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream%20Plugins%20Development%20Guide/deepstream_plugin_details.html#wwpID0E0NB0HA
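
As an illustration, a stream’s ROI-filtering group in the nvdsanalytics configuration file can carry several roi-&lt;label&gt; entries. The labels and polygon coordinates below are made up:

```
[property]
enable=1
# Resolution that the ROI coordinates below refer to
config-width=1920
config-height=1080
osd-mode=2

[roi-filtering-stream-0]
enable=1
# One roi-<label> entry per region; values are x1;y1;x2;y2;... polygon points
roi-Entrance=100;200;600;200;600;800;100;800
roi-Checkout=1200;300;1800;300;1800;900;1200;900
# 0 = count objects inside the ROIs, 1 = outside
inverse-roi=0
class-id=-1
```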

  10. Are there any examples of using a custom TensorFlow/PyTorch model in DeepStream?

Yes, you can use these models with the Triton Inference Server in DeepStream. Check out our developer tutorial: https://developer.nvidia.com/blog/training-custom-pretrained-models-using-tlt/

You can watch the recording and download the presentation slides from the original webinar link: https://info.nvidia.com/iva-occupancy-webinar-reg-page.html?ondemandrgt=yes

For responses to the follow-up questions related to the Transfer Learning Toolkit and pre-trained models, please visit the TLT forum.

If you have more questions, please feel free to post them in the forum so that we can further assist you.
