"Developing and Testing TensorFlow-based Project with GStreamer and Hardware Video Encoding on Jetson Nano"

I’ve created a Docker container for my TensorFlow-based project that relies on Python 3.8 or later and requires GStreamer with hardware video encoding and decoding. The container size is 15GB, and it currently processes 5 full-HD video streams in parallel with dGPU support. However, I need to process at least one video channel using the Jetson Nano. Is it possible to develop and test the code with the Jetson Nano alone? Do I need to buy an extra SSD or other hardware along with it?


The Nano needs an L4T-based container, which includes the GPU driver for the integrated GPU.
In general, an image built for a desktop GPU will not work on Jetson.

Since the Nano has limited storage, you can use an external drive to expand it.
Here is a related topic for your reference: Boot from external drive
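If you mount or boot from an external SSD, you can also point Docker's storage at it so large images (such as a 15GB project container) don't fill the SD card. A minimal sketch of `/etc/docker/daemon.json`, assuming the drive is mounted at `/mnt/ssd` (a hypothetical mount point; restart the Docker service after editing):

```json
{
  "data-root": "/mnt/ssd/docker",
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```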

To run a TensorFlow model with GStreamer, it’s recommended to check our DeepStream SDK, which already supports hardware video encoding/decoding.
You can use Triton Inference Server to deploy a TensorFlow model.
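Outside of DeepStream, the Jetson hardware codecs are also exposed as GStreamer elements (`nvv4l2decoder`, `nvv4l2h264enc`). As a sketch, a transcode pipeline using both could look like the one below; `input.mp4` is an assumed H.264 file name, and the exact element set depends on your L4T release:

```shell
# Hardware decode + re-encode pipeline sketch for Jetson.
# nvv4l2decoder uses the hardware decoder; nvv4l2h264enc the hardware encoder.
PIPELINE="filesrc location=input.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=output.mp4"
# On the device, run it with:
#   gst-launch-1.0 $PIPELINE
echo "$PIPELINE"
```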
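For illustration, Triton serves a TensorFlow SavedModel from a model repository with a `config.pbtxt` like the sketch below; the model name, tensor names, and dimensions are placeholders you would replace with your model's actual signature:

```
name: "my_tf_model"               # placeholder model name
platform: "tensorflow_savedmodel"
max_batch_size: 1
input [
  {
    name: "input_1"               # placeholder input tensor name
    data_type: TYPE_FP32
    dims: [ 224, 224, 3 ]
  }
]
output [
  {
    name: "predictions"           # placeholder output tensor name
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```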

For the Nano, you will need to use DeepStream 6.0.1.

I was using the nvcr.io/nvidia/l4t-tensorflow:r35.1.0-tf2.9-py3 container to run TensorFlow, but hit this error:

InternalError: cudaGetDevice() failed. Status: initialization error when running tensorflow
What could be the issue?


Please match the OS version between the container and your native setup.
r35.1.0 doesn’t support the Nano. You will need a container built on r32, which corresponds to JetPack 4.6.x.
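For example, an r32-based image could be pulled as sketched below. The exact tag is an assumption based on the NGC l4t-tensorflow listing; check the NGC catalog for the tags currently published before pulling:

```shell
# Jetson Nano tops out at JetPack 4.6.x (L4T r32.7), so pick an r32-based tag.
# Tag below is an assumption -- verify it on the NGC catalog first.
IMG="nvcr.io/nvidia/l4t-tensorflow:r32.7.1-tf2.7-py3"
# On the Nano itself, run:
#   sudo docker run -it --rm --runtime nvidia "$IMG"
echo "$IMG"
```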


