I can't build an engine with TensorRT on my Nano, but I can on my Ubuntu 16 server?

I can use TensorRT on my Ubuntu 16.04 server with the command:
python detect_objects.py images/1.jpg
But it fails on my Jetson Nano. The error is below:
‘NoneType’ object has no attribute ‘serialize’
I copied the uff_ssd files from my Nano to my server, compiled them, and then tested with the same command as above; the result was successful.
So the code in my Nano's TensorRT is right.

Drats, I remember getting this error as well… what did I do?
Hmm, try adding swap?

Hi,

'NoneType' object has no attribute 'serialize'

This error may occur when there is a CUDA version mismatch between the DL framework and TensorRT.
Could you first check which CUDA version you used for the model?

Thanks.
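For context, the crash itself comes from build_cuda_engine() returning None when the engine build fails; the script then calls .serialize() on that None. Below is a minimal sketch of the build step that surfaces the failure explicitly. The input/output names, shapes, paths, and workspace size are assumptions that depend on how the .uff was generated, and the SSD plugin registration is omitted.

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(uff_path):
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         trt.UffParser() as parser:
        builder.max_batch_size = 1
        builder.max_workspace_size = 1 << 28  # modest workspace; the Nano shares RAM with the CPU
        # Placeholder input/output names -- use the names from your own conversion.
        parser.register_input("Input", (3, 300, 300))
        parser.register_output("NMS")
        if not parser.parse(uff_path, network):
            raise RuntimeError("UFF parsing failed -- check the .uff file and plugin setup")
        engine = builder.build_cuda_engine(network)
        if engine is None:
            # This is the state that later crashes with
            # "'NoneType' object has no attribute 'serialize'".
            raise RuntimeError("Engine build returned None -- check CUDA/TensorRT versions and free memory")
        return engine

engine = build_engine("model.uff")  # path is a placeholder
with open("model.engine", "wb") as f:
    f.write(engine.serialize())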

My CUDA version is 10.0.
The uff_ssd sample's detect_objects.py could not be used directly.
I used sampleUffSSD to convert the ‘.pb’ file to ‘.uff’.
Then I added the path of the ‘.uff’ file in detect_objects.py.
Finally, it worked with the command ‘python3 detect_objects.py images/1.jpg’.
But the inference time is nearly 200 ms.
I trained the model on my own dataset, and the network is mobilenet_ssd_v2. I don't know why the inference time is so much longer than in the official doc.
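One thing worth checking for the 200 ms figure is whether the engine is built in FP32; the Nano gains a lot from FP16, which the legacy builder exposes as a flag. A minimal sketch of the relevant builder settings, assuming the pre-7.x TensorRT Python API that uff_ssd uses (to be merged into wherever detect_objects.py builds its engine):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with trt.Builder(TRT_LOGGER) as builder:
    builder.max_batch_size = 1
    builder.max_workspace_size = 1 << 28  # keep the workspace small on the Nano's shared memory
    # FP16 is usually the biggest single win on the Nano's GPU.
    # Verify detection accuracy afterwards, since some models are sensitive to it.
    if builder.platform_has_fast_fp16:
        builder.fp16_mode = True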

Hi,

If your model is mobilenet_ssd_v2, it’s recommended to check our sample first:
https://github.com/AastaNV/TRT_object_detection

We also converted mobilenet_ssd_v2 into a TensorRT engine, and the inference time is 46 ms without image read/write.
Thanks.
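Since the 46 ms figure excludes image read/write, it is worth timing only the TensorRT call when comparing on the Nano. A rough timing sketch; my_trt_infer is a placeholder for whatever inference function your script exposes:

import time
import numpy as np

def time_inference(infer_fn, batch, warmup=10, iterations=100):
    """Average latency of infer_fn(batch) in milliseconds, ignoring image I/O."""
    for _ in range(warmup):              # let GPU clocks and caches settle
        infer_fn(batch)
    start = time.perf_counter()
    for _ in range(iterations):
        infer_fn(batch)
    return (time.perf_counter() - start) / iterations * 1000.0

# Dummy SSD-sized input (assumed 300x300); replace my_trt_infer with your own call.
dummy = np.random.rand(1, 3, 300, 300).astype(np.float32)
# print("average inference: %.1f ms" % time_inference(my_trt_infer, dummy))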