I'm now trying to run this code in C++. I added an optimization profile and I'm getting this error:
terminate called after throwing an instance of ‘std::bad_alloc’
what(): std::bad_alloc
Aborted
I'll send the script in a private message. Can you help, please?
Environment
TensorRT Version: 7.2.3.4
GPU Type: GeForce GTX 1060 6 GB
Nvidia Driver Version: 440.33.01
CUDA Version: 10.2
CUDNN Version: 7.1
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 2.3.1
Hi,
Request you to share the ONNX model and the script if not shared already so that we can assist you better.
In the meantime, you can try a few things:
1) Validate your model with the snippet below:
check_model.py
import onnx

filename = "your_model.onnx"  # replace with the path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
2) Try running your model with the trtexec command: https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
If you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!
This looks like a memory-allocation issue. Please make sure enough memory is available; your usage may be right at the limit, since std::bad_alloc is exactly the error thrown when an allocation fails. Try running again while monitoring memory usage with ps or top, and let us know if you still face this issue.
Hi @spolisetty !
Thank you for the support! I followed your advice, but I have enough memory and the issue is still present. I've also attached screenshots showing the available memory.
Thank you for sharing the steps and files for the issue repro. At the step “sudo make clean && sudo make VERBOSE=TRUE” we are facing issues related to the make config. Are you able to run this step successfully?
Please let us know if you made any changes afterwards.
Yeah, I am able to run this step. What issue are you seeing?
You may be having trouble with OpenCV; you need OpenCV installed to build the project.
Please share the issue trace.
I went through the changes you made in sampleOnnxMNIST.cpp.
Modifying this sample may not be a good idea; it can lead to errors. Based on our understanding, you're trying to build an inference script using the TensorRT C++ API. We recommend writing a separate script, and also making sure TensorRT is installed correctly.
Thank you for help.
I tried the script described here and ran into some trouble. I now need to define an optimization profile and an explicit batch, but I get this error:
trt_sample.cpp:147:26: error: ‘NetworkDefinitionCreationFlag’ has not been declared
static_cast(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH))};
Can you assist with this? The code in the links you provided is different, and the library usage differs as well.
I’ll send you script and ONNX via DM.