Hi, as I already discussed in this thread, I have a problem when trying to run inference with a TensorRT model on a Jetson Nano.
More specifically, the problem seems to be related to the custom plugin which I need to add to the model. The engine serializes with no problems, but when I try to use it for inference I get this assertion error:
Also, the model deserializes correctly on all the platforms that I had the opportunity to test on (RTX 2080 Ti, T4, V100) except the Jetson Nano,
so it looks like it is a very platform-specific problem.
I discussed this in the TensorRT forum, but as far as they know there is no platform-specific setting that I have to use when running on a Nano;
further info on the discussion is at the link at the beginning of this post.
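For reference, the deserialization path I'm using is essentially the following (a minimal Python sketch; the engine file name is a placeholder, and I'm assuming the standard TensorRT Python API with the built-in plugins initialized before deserializing):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

# Register the built-in TensorRT plugins (including the NMS plugin)
# before deserializing, otherwise the runtime cannot resolve them.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

# "model.engine" is a placeholder for the serialized engine file.
with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
```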
Hi,
thanks for your reply;
I tried your suggestion and updated to the GA version of JetPack, but the problem persists;
Originally I thought the problem was in deserializing the custom plugin, but after some tests with a dummy custom plugin I found out that it serializes/deserializes correctly;
the element that appears to be problematic is the NMSPlugin, since I receive this error at deserialization time.
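As a sanity check on the Nano, I can also confirm whether the NMS plugin creator is actually registered at runtime; here is a small sketch of that check (assuming the stock creator name "NMS_TRT", version "1"; adjust if your build registers it differently):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

# List every plugin creator the registry knows about on this platform.
registry = trt.get_plugin_registry()
for creator in registry.plugin_creator_list:
    print(creator.name, creator.plugin_version)

# "NMS_TRT" / "1" is the name/version the stock NMS plugin is
# usually registered under; this is an assumption, not confirmed here.
nms_creator = registry.get_plugin_creator("NMS_TRT", "1")
print("NMS creator registered:", nms_creator is not None)
```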