I’m following this guide: https://devblogs.nvidia.com/speed-up-inference-tensorrt/
My goal is to run the TensorRT C++ sample code on a Jetson Xavier as a proof of concept. I'm hitting this assertion: INTERNAL_ERROR: Assertion failed: eglCreateStreamKHR != nullptr
I'm running the following command from the guide linked above:
./simpleOnnx_1 resnet50v2/resnet50v2.onnx resnet50v2/test_data_set_0/input_0.pb
Looking at the source code for this sample, I can see that the assertion fires inside IBuilder::buildCudaEngine():
ICudaEngine* createCudaEngine(string const& onnxModelPath, int batchSize)
{
    unique_ptr<IBuilder, Destroy<IBuilder>> builder{createInferBuilder(gLogger)};
    unique_ptr<INetworkDefinition, Destroy<INetworkDefinition>> network{builder->createNetwork()};
    unique_ptr<nvonnxparser::IParser, Destroy<nvonnxparser::IParser>> parser{nvonnxparser::createParser(*network, gLogger)};

    if (!parser->parseFromFile(onnxModelPath.c_str(), static_cast<int>(ILogger::Severity::kINFO)))
    {
        cout << "ERROR: could not parse input engine." << endl;
        return nullptr;
    }

    return builder->buildCudaEngine(*network); // Build and return TensorRT engine.
}
Here's the tail of the console output:
<snip>
INFO: Fusing (Unnamed Layer* 164) [Convolution] with (Unnamed Layer* 166) [Activation]
INFO: Fusing (Unnamed Layer* 167) [Convolution] with (Unnamed Layer* 168) [ElementWise]
INFO: Fusing (Unnamed Layer* 169) [Scale] with (Unnamed Layer* 170) [Activation]
INFO: Fusing (Unnamed Layer* 172) [Shuffle] with (Unnamed Layer* 173) [Shuffle]
INFO: After vertical fusions: 75 layers
INFO: After swap: 75 layers
INFO: After final dead-layer removal: 75 layers
INFO: After tensor merging: 75 layers
INFO: After concat removal: 75 layers
INFO: Graph construction and optimization completed in 0.0620393 seconds.
INTERNAL_ERROR: Assertion failed: eglCreateStreamKHR != nullptr
dla/eglUtils.cpp:56
Aborting...
Aborted
I have installed the following packages on this system:
apt-get install -y cuda-toolkit-10-0 libgomp1 libfreeimage-dev libopenmpi-dev openmpi-bin
dpkg -i libcudnn7_7.3.1.20-1+cuda10.0_arm64.deb
dpkg -i libcudnn7-dev_7.3.1.20-1+cuda10.0_arm64.deb
dpkg -i libnvinfer5_5.0.3-1+cuda10.0_arm64.deb
dpkg -i libnvinfer-dev_5.0.3-1+cuda10.0_arm64.deb
dpkg -i libnvinfer-samples_5.0.3-1+cuda10.0_all.deb
dpkg -i libgie-dev_5.0.3-1+cuda10.0_all.deb
dpkg -i tensorrt_5.0.3.2-1+cuda10.0_arm64.deb
dpkg -i libopencv_3.3.1_arm64.deb
dpkg -i libopencv-dev_3.3.1_arm64.deb
dpkg -i libopencv-python_3.3.1_arm64.deb
What is causing this assertion, and is there a way to work around it? Thanks.