[GIE]cudnnScaleLayer.cpp (63) - Cuda Error in execute: 9

When I execute a TensorRT engine after calling a library that uses CUDA, the following CUDA error occurs:
“[GIE]cudnnScaleLayer.cpp (63) - Cuda Error in execute: 9”

libraryA.functionA(); // this function uses CUDA

context->execute(1, inference_buffers.data()); // execute the TensorRT engine
// "[GIE]cudnnScaleLayer.cpp (63) - Cuda Error in execute: 9" is shown

I’m using the CMake modules in /usr/local/driveworks-0.6/samples/cmake, and I think they cause the problem.

When I use the default CMake modules in /usr/local/share/cmake-3.10/Modules/ instead of /usr/local/driveworks-0.6/samples/cmake, everything works fine.

Please let me know what causes this problem.

Thanks in advance.

Hi,

CUDA error 9 is cudaErrorInvalidConfiguration, i.e. an invalid kernel launch configuration.
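One way to narrow down where error 9 first appears is to query the CUDA error state right after the library call and again before the TensorRT execute. A minimal sketch using the CUDA runtime API (libraryA and the buffers are the names from your post; this is an illustration, not DriveWorks code):

```cpp
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Abort with a readable message on any CUDA runtime error.
#define CUDA_CHECK(call)                                                   \
    do {                                                                   \
        cudaError_t err_ = (call);                                         \
        if (err_ != cudaSuccess) {                                         \
            std::fprintf(stderr, "CUDA error %d (%s) at %s:%d\n",          \
                         static_cast<int>(err_), cudaGetErrorString(err_), \
                         __FILE__, __LINE__);                              \
            std::exit(EXIT_FAILURE);                                       \
        }                                                                  \
    } while (0)

void runInference()
{
    // libraryA.functionA();             // the call that uses CUDA

    CUDA_CHECK(cudaGetLastError());      // pick up a sticky error left behind
    CUDA_CHECK(cudaDeviceSynchronize()); // surface asynchronous launch failures

    int device = -1;
    CUDA_CHECK(cudaGetDevice(&device));  // confirm the library did not switch GPUs

    // context->execute(1, inference_buffers.data());
}
```

If the first CUDA_CHECK already reports error 9, the problem originates inside functionA rather than in TensorRT.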

DriveWorks needs to be cross-compiled on an x86 host.
Do you run the cmake command on an x86 desktop?

Thanks.

Yes, I build with CMake 3.10 on an Ubuntu 16.04 64-bit Linux machine.

The output of uname -a is:
4.15.0-32-generic #35~16.04.1-Ubuntu SMP Fri Aug 10 21:54:34 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

This error happens in both the host and the target build.
I suspect the issue comes from using “/usr/local/driveworks-0.6/samples/cmake”, since for the host build the issue does not occur if I use the default CMake modules in “/usr/local/share/cmake-3.10/Modules/”.
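For reference, the difference between the two builds can be made explicit in CMakeLists.txt: CMake searches CMAKE_MODULE_PATH before its built-in Modules directory, so the DriveWorks sample modules shadow the stock ones. A minimal sketch of the two configurations I'm comparing (paths as in this thread):

```cmake
# Failing host build: the DriveWorks sample modules shadow the stock ones,
# because CMAKE_MODULE_PATH is consulted before the built-in Modules directory.
list(APPEND CMAKE_MODULE_PATH "/usr/local/driveworks-0.6/samples/cmake")

# Working host build: leave CMAKE_MODULE_PATH empty so find_package()
# falls back to /usr/local/share/cmake-3.10/Modules/.
# set(CMAKE_MODULE_PATH "")

find_package(CUDA REQUIRED)
```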

Thanks.

Hi,

DriveWorks requires cmake version >= 3.3.

You can find this information in the NVIDIA_DriveWorks_DevGuide.pdf document:

Host System Prerequisites

• cmake version >= 3.3

Note:
By default, Ubuntu 14.04 installs cmake version 2.8. If you are using that Ubuntu version, you must upgrade cmake to 3.3 or later.

Thanks.

Hi,

I used 3.10 (ten, not one point zero).

Thanks.

Hi,

Sorry for my misunderstanding.

Do you have more CMake logs you can share with us?
One possibility is that the FindCUDA and FindTensorRT modules are up to date in the default cmake folder but outdated in the DriveWorks one.
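To see which modules are actually being picked up, you could print the search path and the resolved toolkit from your CMakeLists.txt. A small diagnostic sketch (CUDA_VERSION and CUDA_TOOLKIT_ROOT_DIR are variables set by the stock FindCUDA module):

```cmake
message(STATUS "CMAKE_MODULE_PATH = ${CMAKE_MODULE_PATH}")
find_package(CUDA REQUIRED)
message(STATUS "CUDA_VERSION = ${CUDA_VERSION}")
message(STATUS "CUDA_TOOLKIT_ROOT_DIR = ${CUDA_TOOLKIT_ROOT_DIR}")
```

Running cmake with --trace-expand also shows exactly which FindCUDA.cmake file gets loaded.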

Thanks.