[E] 10: Could not find any implementation for node

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Tesla T4
• DeepStream Version: 6.4
• JetPack Version (valid for Jetson only)
• TensorRT Version: 8.6.1.6-1+cuda12.0
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs): question

Hi, I’m encountering this error when attempting to convert my Dino_fan_large ONNX model to a TensorRT (FP32) engine. I used this script for the conversion.

[05/31/2024-11:11:54] [TRT] [E] 10: Could not find any implementation for node {ForeignNode[/model/backbone/backbone.0/Constant_output_0.../model/backbone/backbone.0/body/blocks.0/mlp/mlp_v/dwconv/Transpose + /model/backbone/backbone.0/body/blocks.0/mlp/mlp_v/dwconv/Reshape]}.
[05/31/2024-11:11:55] [TRT] [E] 10: [optimizer.cpp::computeCosts::3869] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[/model/backbone/backbone.0/Constant_output_0.../model/backbone/backbone.0/body/blocks.0/mlp/mlp_v/dwconv/Transpose + /model/backbone/backbone.0/body/blocks.0/mlp/mlp_v/dwconv/Reshape]}.)
__enter__
Error executing job with overrides: []
Traceback (most recent call last):
  File "/opt/nvidia/deepstream/deepstream-6.4/test/tao_deploy/nvidia_tao_deploy/cv/common/decorators.py", line 63, in _func
    raise e
  File "/opt/nvidia/deepstream/deepstream-6.4/test/tao_deploy/nvidia_tao_deploy/cv/common/decorators.py", line 47, in _func
    runner(cfg, **kwargs)
  File "/opt/nvidia/deepstream/deepstream-6.4/test/tao_deploy/nvidia_tao_deploy//cv/dino/scripts/gen_trt_engine.py", line 106, in main
    builder.create_engine(
  File "/opt/nvidia/deepstream/deepstream-6.4/test/tao_deploy/nvidia_tao_deploy/cv/deformable_detr/engine_builder.py", line 173, in create_engine
    with self.builder.build_engine(self.network, self.config) as engine, \
AttributeError: __enter__

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
Segmentation fault
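Note that the trailing AttributeError: __enter__ looks like a downstream symptom rather than the root cause: TensorRT's build_engine returns None when the build fails, and entering None in a with statement raises AttributeError: __enter__ on the Python version in this log (newer Pythons raise TypeError instead). A minimal sketch of that failure mode, with a hypothetical FakeBuilder standing in for the real TensorRT builder:

```python
class FakeBuilder:
    """Hypothetical stand-in for a TensorRT builder whose build failed."""

    def build_engine(self, network, config):
        # TensorRT's build_engine returns None when the build fails,
        # e.g. after "Could not find any implementation for node".
        return None


def build_or_raise(builder, network, config):
    """Guard against a failed build before using the engine as a context manager."""
    engine = builder.build_engine(network, config)
    if engine is None:
        # Without this check, `with engine:` on None raises
        # AttributeError: __enter__ (older Pythons) or TypeError (newer ones).
        raise RuntimeError("TensorRT engine build failed; check the builder error log")
    return engine
```

So the real problem is the "Could not find any implementation for node" error above; the AttributeError is just what happens when the script tries to use the engine that was never built.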

This is the gen_trt_engine.yaml file I used for the conversion.

encryption_key: "???"
results_dir: "./"
dataset:
  num_classes: 91
  batch_size: 4
  augmentation:
    input_std: [0.229, 0.224, 0.225]
model:
  backbone: fan_large
  num_feature_levels: 4
  dec_layers: 6
  enc_layers: 6
  num_queries: 900
  dropout_ratio: 0.0
  dim_feedforward: 2048
gen_trt_engine:
  gpu_id: 0
  onnx_file: "/opt/nvidia/deepstream/deepstream-6.4/test/dino_model_v1.onnx"
  trt_engine: "/opt/nvidia/deepstream/deepstream-6.4/test/dino_model_v3.engine"
  input_channel: 3
  input_width: 960
  input_height: 544
  tensorrt:
    data_type: fp32
    workspace_size: 1024
    min_batch_size: 1
    opt_batch_size: 10
    max_batch_size: 10
    # calibration:
    #   cal_image_dir:
    #     - "???"
    #   cal_cache_file: "???"
    #   cal_batch_size: 10
    #   cal_batches: 1000

What could be causing this error and how can I fix it?

You can use tao deploy dino gen_trt_engine to generate the engine. Refer to DINO with TAO Deploy - NVIDIA Docs.
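A minimal sketch of that invocation, assuming TAO Deploy is installed and the spec path below matches where your gen_trt_engine.yaml actually lives (adjust to your setup):

```shell
# Use the supported TAO Deploy entrypoint for DINO instead of a custom script.
# The -e flag points at the same gen_trt_engine.yaml spec shown above.
SPEC=/opt/nvidia/deepstream/deepstream-6.4/test/gen_trt_engine.yaml
tao deploy dino gen_trt_engine -e "$SPEC"
```

The onnx_file, trt_engine, and tensorrt sections of the spec are picked up from the file, so no extra command-line arguments are needed for an FP32 build.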

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.