sampleINT8: INT8-to-INT8 GEMM error

I have a problem similar to the one described in this thread when I run /usr/src/tensorrt/bin/sample_int8:

https://devtalk.nvidia.com/default/topic/1011194/tensorrt2-and-int8-precision-inference-inside-docker-gemm-error/#

I found that the program stops at context.enqueue() in the doInference function, and I get this error:

ERROR LAUNCHING INT8-to-INT8 GEMM: 8