&&&& RUNNING TensorRT.sample_onnx_mnist # ./sample_onnx_mnist
[01/26/2021-13:20:18] [I] Building and running a GPU inference engine for Onnx MNIST
Input filename: …/data/mnist/mnist.onnx
ONNX IR version: 0.0.3
Opset version: 8
Producer name: CNTK
Producer version: 2.5.1
Model version: 1
[01/26/2021-13:20:22] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/26/2021-13:20:39] [I] [TRT] Detected 1 inputs and 1 output network tensors.
Could not find 8.pgm in data directories:
The sample itself builds and runs the engine fine, but then fails because it cannot find `8.pgm` (an MNIST test image) in any data directory — and the log doesn't even list which directories it searched. Where is this file supposed to come from, and how do I obtain or generate the PGM images the sample expects?