TensorRT Jetson Tx2 Code not working for 1080ti

Hi,

So I ran the image classification benchmarks for Jetson Tx2 and it worked very well.
https://github.com/NVIDIA-Jetson/tf_to_trt_image_classification

Now I would like to compare the 1080 Ti's inference speed against the Jetson TX2's.

I have successfully compiled the frozen graphs into UFF, but when I run inference I get the error:
Assertion `size >= bsize && "Mismatch between allocated memory size and expected size of serialized engine."' failed.

Running mobilenet_v1_0p5_160

testConfig:
imagePath: data/images/gordon_setter.jpg
planPath: data/plans/mobilenet_v1_0p5_160.plan
inputNodeName: input
inputHeight: 160
inputWidth: 160
outputNodeName: MobilenetV1/Logits/SpatialSqueeze
numOutputCategories: 1001
preprocessFnName: preprocess_inception
numRuns: 50
dataType: half
maxBatchSize: 1
workspaceSize: 1048576
useMappedMemory: 0
statsPath: data/test_output_trt.txt

test_trt: cudnnEngine.cpp:640: bool nvinfer1::cudnn::Engine::deserialize(const void*, std::size_t, nvinfer1::IPluginFactory*): Assertion `size >= bsize && "Mismatch between allocated memory size and expected size of serialized engine."' failed.
Running mobilenet_v1_0p25_128

testConfig:
imagePath: data/images/gordon_setter.jpg
planPath: data/plans/mobilenet_v1_0p25_128.plan
inputNodeName: input
inputHeight: 128
inputWidth: 128
outputNodeName: MobilenetV1/Logits/SpatialSqueeze
numOutputCategories: 1001
preprocessFnName: preprocess_inception
numRuns: 50
dataType: half
maxBatchSize: 1
workspaceSize: 1048576
useMappedMemory: 0
statsPath: data/test_output_trt.txt

test_trt: cudnnEngine.cpp:640: bool nvinfer1::cudnn::Engine::deserialize(const void*, std::size_t, nvinfer1::IPluginFactory*): Assertion `size >= bsize && "Mismatch between allocated memory size and expected size of serialized engine."' failed.

Any help or guidance to what I should try would be greatly appreciated :)

Hi,

A possible cause is incompatible packages: the UFF and TensorFlow packages for Jetson and for PC are different. In addition, a serialized TensorRT engine (.plan file) is specific to the GPU and TensorRT version it was built with, so a plan generated on the TX2 cannot be deserialized on a 1080 Ti; you need to regenerate the plans on the desktop machine.

Please install the desktop UFF package from our website:
https://developer.nvidia.com/nvidia-tensorrt-download
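
After installing the desktop packages, regenerate the .plan files on the 1080 Ti itself rather than copying the ones built on the TX2. As a sketch, assuming the argument order of the repo's scripts/convert_plan.py (frozen graph, plan path, input name, height, width, output name, max batch size, max workspace size, data type — please verify against the README in your copy), the failing model could be rebuilt like this:

```shell
# Rebuild the serialized engine on the desktop GPU, reusing the values
# from the testConfig above (input node, 160x160, batch 1, half precision).
# Argument order is assumed from the repo's convert_plan.py usage examples.
python scripts/convert_plan.py \
  data/frozen_graphs/mobilenet_v1_0p5_160.pb \
  data/plans/mobilenet_v1_0p5_160.plan \
  input 160 160 \
  MobilenetV1/Logits/SpatialSqueeze \
  1 1048576 half
```

Repeat the same step for each model (e.g. mobilenet_v1_0p25_128 with 128x128 input) before running test_trt again.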

Thanks.