TensorRT 4.0.1.6 produces errors on a GTX 1080 Ti

I ran the sample tests, with the following results:
googlenet

$~/opt/TensorRT-4.0.1.6/bin$ ./sample_googlenet 
Building and running a GPU inference engine for GoogleNet, N=4...
*** Error in `./sample_googlenet': free(): invalid next size (fast): 0x0000000024ad2e90 ***

sample_int8

$./sample_int8 mnist 

FP32 run:400 batches of size 100 starting at 100
*** Error in `./sample_int8': free(): invalid next size (fast): 0x00000000082edfc0 ***

sample_char_rnn

$ ./sample_char_rnn
ERROR: cudnnRNNBaseLayer.cpp (308) - Cuda Error in RNNDescriptorState: 3
ERROR: cudnnRNNBaseLayer.cpp (308) - Cuda Error in RNNDescriptorState: 3
sample_char_rnn: sampleCharRNN.cpp:409: void APIToModel(std::map<std::__cxx11::basic_string<char>, nvinfer1::Weights>&, nvinfer1::IHostMemory**): Assertion `engine != nullptr' failed.
Aborted (core dumped)

trtexec

$ ./trtexec --deploy=/home/franksai/opt/TensorRT-4.0.1.6/data/mnist/mnist.prototxt --output=prob           
deploy: /home/franksai/opt/TensorRT-4.0.1.6/data/mnist/mnist.prototxt
output: prob
Input "data": 1x28x28
Output "prob": 10x1x1
*** Error in `./trtexec_debug': free(): invalid next size (fast): 0x0000000009169160 ***


My environment

Device: GTX 1080 Ti
OS: Ubuntu 16.04
CUDA: 8.0
TensorRT: TensorRT-4.0.1.6
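The repeated `free(): invalid next size (fast)` aborts point to heap corruption inside the process, which with TensorRT samples is often a symptom of the binary picking up a CUDA/cuDNN runtime that does not match what the TensorRT release was built against. As a first diagnostic step, it may help to check which shared libraries each failing sample actually resolves at load time. Below is a minimal sketch (the `check_trt_links` helper name and the sample path are my own, taken from the logs above, not from any official tooling):

```shell
#!/bin/sh
# Print the CUDA/cuDNN/TensorRT libraries a sample binary links against,
# so a version mismatch with the installed TensorRT release is visible.
check_trt_links() {
    bin="$1"
    if [ -f "$bin" ]; then
        # ldd resolves the shared libraries the dynamic loader would use
        ldd "$bin" | grep -E 'cudnn|cudart|nvinfer'
    else
        echo "binary not found: $bin"
    fi
}

# Assumed sample location, as shown in the logs above
check_trt_links ~/opt/TensorRT-4.0.1.6/bin/sample_googlenet
```

If the resolved `libcudart`/`libcudnn` versions differ from the ones the TensorRT 4.0.1.6 package documents as supported, adjusting `LD_LIBRARY_PATH` to the matching versions would be the next thing to try.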

Hi,

Have you solved this problem? I'm running into the same situation...