I am struggling to find a way to use PyPy with the jetson.inference and jetson.utils modules

Greetings,

I have been working on a solution that uses detectNet to classify objects when given a trigger. Unfortunately, using just Python 3 is too slow. From sending a TCP/IP message from my PLC to the Jetson, capturing an image, processing the image, and returning the result, it takes around 150-250 ms.

I need to be below 100ms.

I have tried and tried to optimize the code as best I can but I cannot seem to get any meaningful gains.

So I was about to begin rewriting it in C++, until a colleague of mine showed me PyPy. I was able to install PyPy in my container, but when I run my code now, it throws an error:

File "server.py", line 4, in <module>
    import jetson.inference
ImportError: No module named jetson

I have searched and searched for a way to add the module to PyPy, and I know they have a jetson-inference emulator, so I'm sure it's possible.

I know it's a long shot, but would anyone here have any idea how to make PyPy see the jetson.inference and jetson.utils modules?

Thanks

Hi @Kizz, the jetson.inference and jetson.utils Python modules are actually implemented as C extension modules that use the Python C API. I am not sure if/how this works with PyPy (I have not heard of PyPy before). My understanding is that these extension modules are built against CPython:

https://github.com/dusty-nv/jetson-inference/tree/master/python/bindings

Since the original detectnet.py runs much faster than this, it would seem that it’s the other networking code that was added that’s slowing it down. You may want to look at using Python threads or multiprocess to put the networking in a different thread or process, so it doesn’t slow down the camera capture + inferencing loop.
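The split suggested above can be sketched with the standard library alone: a networking thread feeds triggers into a queue, and the main loop stays free to do capture + inference. This is a minimal sketch with stand-in functions (`network_worker`, `run_inference`, and the queue names are hypothetical placeholders, not part of jetson-inference):

```python
import queue
import threading
import time

# Triggers arriving from the network land in a queue, so the
# capture + inference loop never blocks on socket I/O.
trigger_queue = queue.Queue()
result_queue = queue.Queue()

def network_worker(num_triggers):
    # Stand-in for the socket recv loop: in a real server this would
    # block on sock.recv() and push each PLC trigger onto the queue.
    for i in range(num_triggers):
        trigger_queue.put({"trigger_id": i})
    trigger_queue.put(None)  # sentinel: no more triggers

def run_inference(trigger):
    # Stand-in for camera capture + net.Detect(); replace with real calls.
    time.sleep(0.001)
    return {"trigger_id": trigger["trigger_id"], "detections": []}

# Networking runs in its own thread; inference stays in the main thread.
t = threading.Thread(target=network_worker, args=(3,), daemon=True)
t.start()

while True:
    trigger = trigger_queue.get()
    if trigger is None:
        break
    result_queue.put(run_inference(trigger))

t.join()
print(result_queue.qsize())  # 3 results processed
```

Because the GIL is released during blocking socket calls, a plain thread is usually enough for the networking side even in CPython.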

I’ve determined that the network code takes 14-25 ms (receive + send back), the camera capture and save takes around 30-40 ms, and the model processing the image takes 105-185 ms.
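Stage timings like these can be collected with simple `perf_counter` brackets around each step. A minimal sketch, where the stage functions are placeholder lambdas standing in for the real receive/capture/inference calls:

```python
import time

timings = {}

def timed(name, fn, *args, **kwargs):
    # Wrap one pipeline stage and record its wall-clock duration in ms.
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    timings[name] = (time.perf_counter() - start) * 1000.0
    return result

# Stand-ins for the real stages (network recv, camera capture, net.Detect).
receive = lambda: "trigger"
capture = lambda: "image"
infer = lambda img: []

trigger = timed("receive", receive)
image = timed("capture", capture)
detections = timed("inference", infer, image)

for stage, ms in timings.items():
    print(f"{stage}: {ms:.2f} ms")
```

Measuring each stage separately like this makes it easy to confirm which part actually dominates the 150-250 ms budget.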

I am using multiprocessing to run the model. I will look into whether or not PyPy can handle C extension modules; if not, it looks like I'll be moving on to C++.

Thanks for the help.

Reading the PyPy FAQ, it states that they do support them, but C extension modules run slower than pure Python or CFFI modules.

I’m trying to pip install the jetson module, but it says it can’t be found. Is the module called libjetson-inference?

It was able to find jetson.utils but failed with the following error on install:

Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
    ModuleNotFoundError: No module named 'setuptools'

It was unable to find jetson.inference

If these are not the correct modules, what are the correct modules to install that will give me access to inference and utils?

Are you using a custom detection model? Because even on Nano with the 90-class SSD-Mobilenet model, it only takes like 40ms for the model inference (on Xavier NX it’s much faster). So I wonder if something else is going on?

jetson.inference and jetson.utils aren’t on pip/PyPI. They are installed when you build the jetson-inference repo from GitHub. During the build, it compiles the Python C extension modules and installs them. You are then able to import them from Python (I have only tested this with CPython).
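For reference, the build-from-source steps look roughly like this (following the jetson-inference README; exact flags and prompts may differ between releases):

```shell
# Clone and build jetson-inference; the build compiles and installs the
# Python bindings (jetson.inference / jetson.utils) for the system CPython.
git clone --recursive https://github.com/dusty-nv/jetson-inference
cd jetson-inference
mkdir build
cd build
cmake ../
make -j$(nproc)
sudo make install
sudo ldconfig
```

After `sudo make install`, the modules should be importable from the system Python, but not from a separately installed PyPy, since the bindings are compiled against CPython.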

I’m using the base detectnet model in order to test and optimize the code. I’m using it to analyze individual images and not video though.

OK - for processing individual images with maximum performance, you may want to run sudo jetson_clocks beforehand. This is because if the processing is sporadic, the CPU/GPU/memory clocks might be reduced if the workload is idle while it’s waiting for new images. Running jetson_clocks keeps the clocks at their maximum (as defined by the current nvpmodel).

I didn’t see any real improvement using this command.

On another note, when I call output.Render() it forces a window to open, even though I’m only rendering one frame. Is there a way to save a result without rendering it to the desktop? I searched the docs for hours and couldn’t find a solution.

Is it better to use Render() to save the image, or to capture the image first and process it without rendering? My only issue with capturing it first is that there is no way to review the results without saving via rendering.

Hi @Kizz, yes, you can pass the --headless flag to the videoOutput object when you create it, like in this script:

https://github.com/dusty-nv/jetson-inference/blob/bed39a4ed9f6477ac7337295bf6713a46caebaec/python/examples/imagenet.py#L59

or simply like this:

output = jetson.utils.videoOutput(opt.output_URI, argv=sys.argv+['--headless'])
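Putting the pieces of this thread together, a single-image headless run might look like the sketch below. The model name, threshold, and file names are placeholders; this assumes the jetson-inference Python API (detectNet, loadImage, videoOutput) built as described earlier, and it only runs on a Jetson with the bindings installed:

```python
import sys
import jetson.inference
import jetson.utils

# Detect on a single still image and write the annotated result to a
# file instead of opening a display window ("input.jpg" / "result.jpg"
# and the network name are placeholders).
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
img = jetson.utils.loadImage("input.jpg")
detections = net.Detect(img)  # overlays boxes/labels on img by default
output = jetson.utils.videoOutput("result.jpg", argv=sys.argv + ['--headless'])
output.Render(img)  # writes result.jpg; no window opens with --headless
```

Because the output URI is a .jpg file rather than a display, Render() saves the annotated frame, which answers the question about reviewing results without rendering to the desktop.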
