Cannot use model-analyzer on ONNX classification model with dynamic input

• RTX 3060
• DeepStream Version 6.4 - Docker
• Cannot run model-analyzer on a model

I am currently profiling several models with model-analyzer.

I can’t manage to do this for one of my models, and I’d like more information about the error encountered.

Here is the error message:

[Model Analyzer] Initializing GPUDevice handles
[Model Analyzer] Using GPU 0 NVIDIA GeForce RTX 3060 Laptop GPU with UUID GPU-87703e76-5ffe-5cde-d056-3c70fa64251a
[Model Analyzer] Starting a Triton Server using docker
[Model Analyzer] Loaded checkpoint from file /workspace/checkpoints/2.ckpt
[Model Analyzer] GPU devices match checkpoint - skipping server metric acquisition
[Model Analyzer] Starting a Triton Server using docker
[Model Analyzer] 
[Model Analyzer] Starting automatic brute search
[Model Analyzer] 
[Model Analyzer] Creating model config: age_config_default
[Model Analyzer] 
[Model Analyzer] Profiling age_config_default: client batch size=1, concurrency=1
[Model Analyzer] Running perf_analyzer failed with exit status 99:
error: Failed to init manager inputs: input input contains dynamic shape, provide shapes to send along with the request


[Model Analyzer] Saved checkpoint to /workspace/checkpoints/3.ckpt
[Model Analyzer] Creating model config: age_config_0
[Model Analyzer]   Setting instance_group to [{'count': 1, 'kind': 'KIND_GPU'}]
[Model Analyzer] 
[Model Analyzer] Profiling age_config_0: client batch size=1, concurrency=1
[Model Analyzer] Running perf_analyzer failed with exit status 99:
error: Failed to init manager inputs: input input contains dynamic shape, provide shapes to send along with the request


[Model Analyzer] No changes made to analyzer data, no checkpoint saved.
Traceback (most recent call last):
  File "/usr/local/bin/model-analyzer", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/model_analyzer/entrypoint.py", line 278, in main
    analyzer.profile(
  File "/usr/local/lib/python3.10/dist-packages/model_analyzer/analyzer.py", line 124, in profile
    self._profile_models()
  File "/usr/local/lib/python3.10/dist-packages/model_analyzer/analyzer.py", line 242, in _profile_models
    self._model_manager.run_models(models=[model])
  File "/usr/local/lib/python3.10/dist-packages/model_analyzer/model_manager.py", line 145, in run_models
    self._stop_ma_if_no_valid_measurement_threshold_reached()
  File "/usr/local/lib/python3.10/dist-packages/model_analyzer/model_manager.py", line 239, in _stop_ma_if_no_valid_measurement_threshold_reached
    raise TritonModelAnalyzerException(
model_analyzer.model_analyzer_exceptions.TritonModelAnalyzerException: The first 2 attempts to acquire measurements have failed. Please examine the Tritonserver/PA error logs to determine what has gone wrong.
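For context, exit status 99 is perf_analyzer itself refusing to run: when an input has a dynamic dimension, it cannot generate request data unless a concrete shape is supplied. A direct invocation against a running Triton instance would look like this sketch (the tensor name `input` comes from the error above; the 3,224,224 dims are placeholders for the model's real input size):

```shell
# Sketch only -- assumes tritonserver is already serving the model repository,
# and that the dynamic "input" tensor is CHW with placeholder dims 3,224,224.
perf_analyzer -m age --shape input:3,224,224 --concurrency-range 1
```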

Here are the steps to reproduce:

Clone that repo and go to that directory:

Start the Triton SDK container:
docker run -it --gpus all -v /var/run/docker.sock:/var/run/docker.sock -v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start --net=host nvcr.io/nvidia/tritonserver:24.04-py3-sdk

Add this folder to the model repository:
age.zip (21.3 MB)

Run model analysis:
model-analyzer profile --model-repository /YOUR_PATH/examples/quick-start/ --profile-models age --triton-launch-mode=docker --output-model-repository-path /opt/output_dir --export-path profile_results
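Since perf_analyzer needs a concrete shape for the dynamic `input` tensor (per the error log), Model Analyzer can forward one via per-model `perf_analyzer_flags` in a YAML profile config. A minimal sketch, with the tensor name taken from the error and placeholder dims:

```yaml
# config.yaml -- sketch; replace the dims with the model's actual input shape
model_repository: /YOUR_PATH/examples/quick-start/
profile_models:
  age:
    perf_analyzer_flags:
      shape: input:3,224,224
```

This would be passed with `model-analyzer profile -f config.yaml ...` instead of (or alongside) the bare command-line flags above.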

Thanks

This seems to be a Triton Server related issue, while your topic title is about DeepStream. Please raise the issue at Issues · triton-inference-server/server · GitHub