cudaErrorInvalidDeviceFunction: invalid device function

Hi. I’m using an Orin NX 16GB.

When I tried to run the TTS example from PaddleSpeech, the following error was thrown.

>>> from paddlespeech.cli.tts import TTSExecutor
/home/nvidia/.local/lib/python3.8/site-packages/setuptools/sandbox.py:13: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
  import pkg_resources
/home/nvidia/.local/lib/python3.8/site-packages/pkg_resources/__init__.py:2871: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('mpl_toolkits')`.
Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
  declare_namespace(pkg)
/home/nvidia/.local/lib/python3.8/site-packages/pkg_resources/__init__.py:2871: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google')`.
Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
  declare_namespace(pkg)
/home/nvidia/.local/lib/python3.8/site-packages/librosa/core/constantq.py:1059: DeprecationWarning: `np.complex` is a deprecated alias for the builtin `complex`. To silence this warning, use `complex` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.complex128` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  dtype=np.complex,
/home/nvidia/.local/lib/python3.8/site-packages/_distutils_hack/__init__.py:33: UserWarning: Setuptools is replacing distutils.
  warnings.warn("Setuptools is replacing distutils.")
>>> tts_executor = TTSExecutor()
>>> wav_file = tts_executor(
...     text="热烈欢迎您在 Discussions 中提交问题,并在 Issues 中指出发现的 bug。此外,我们非常希望您参与到 Paddle Speech 的开发中!",
...     output='output.wav',
...     am='fastspeech2_mix',
...     voc='hifigan_csmsc',
...     lang='mix',
...     spk_id=174)
[2023-08-11 10:02:31,659] [    INFO] - Already cached /home/nvidia/.paddlenlp/models/bert-base-chinese/bert-base-chinese-vocab.txt
[2023-08-11 10:02:31,673] [    INFO] - tokenizer config file saved in /home/nvidia/.paddlenlp/models/bert-base-chinese/tokenizer_config.json
[2023-08-11 10:02:31,673] [    INFO] - Special tokens file saved in /home/nvidia/.paddlenlp/models/bert-base-chinese/special_tokens_map.json
W0811 10:02:34.054011 2760973 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 8.7, Driver API Version: 11.4, Runtime API Version: 11.4
W0811 10:02:34.065198 2760973 gpu_resources.cc:91] device: 0, cuDNN Version: 8.6.
Building prefix dict from the default dictionary ...
[2023-08-11 10:02:37] [DEBUG] [__init__.py:113] Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
[2023-08-11 10:02:37] [DEBUG] [__init__.py:132] Loading model from cache /tmp/jieba.cache
Loading model cost 0.956 seconds.
[2023-08-11 10:02:38] [DEBUG] [__init__.py:164] Loading model cost 0.956 seconds.
Prefix dict has been built successfully.
[2023-08-11 10:02:38] [DEBUG] [__init__.py:166] Prefix dict has been built successfully.
terminate called after throwing an instance of 'thrust::system::system_error'
  what():  after determining tmp storage requirements for inclusive_scan: cudaErrorInvalidDeviceFunction: invalid device function
Aborted (core dumped)

Could you help me analyse the cause of this error? Thanks in advance.

I doubt this is the right forum to ask questions about PaddleSpeech; they have their own forum. Since you are using an Orin, you could also ask on the Orin NX forum.

One possible source of an "invalid device function" error in CUDA is running code that was not compiled for your GPU architecture; your log reports compute capability 8.7, so a library built without kernels (or compatible PTX) for that architecture would fail this way. For example, you might be using a version of cuDNN that was not built for your GPU. But there are other possibilities as well; I don't have an exhaustive list.
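
If you want to narrow down whether the installed PaddlePaddle build matches your GPU, one quick check is to compare what the wheel was compiled against with what the device reports, then run Paddle's built-in self test. This is a minimal sketch, assuming a recent PaddlePaddle (2.3 or later) where paddle.version.cuda(), paddle.version.cudnn(), paddle.device.cuda.get_device_capability(), and paddle.utils.run_check() are available:

>>> import paddle
>>> paddle.version.cuda()     # CUDA version this wheel was compiled against
>>> paddle.version.cudnn()    # cuDNN version this wheel was compiled against
>>> paddle.device.cuda.get_device_capability()   # should report (8, 7) on Orin NX
>>> paddle.utils.run_check()  # runs a small GPU workload as a self test

If the capability check reports (8, 7) but run_check() fails with the same kind of "invalid device function" error, that would be consistent with a build that lacks kernels (or compatible PTX) for compute capability 8.7, in which case a PaddlePaddle build targeting Jetson/JetPack would be worth looking into.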

I won’t be able to respond to further questions on this topic. Good luck!