ComputeAUC in config_validation.json doesn't work

Hi, I would like to compute two metrics in my config_validation.json: (1) the AUC with ComputeAUC and (2) the multiclass average with ComputeMulticlassAverage, using the following snippet:
[{"name": "ComputeAUC",
  "args": {
    "name": "Average_AUC",
    "field": "model",
    "label_field": "label",
    "auc_average": "macro",
    "report_path": "{MMAR_EVAL_OUTPUT_PATH}"}},
 {"name": "ComputeMulticlassAverage",
  "args": {
    "name": "MulticlassAverage",
    "field": "model",
    "label_field": "label",
    "report_path": "{MMAR_EVAL_OUTPUT_PATH}"}}],

Unfortunately, the following error occurs when running:
Traceback (most recent call last):
  File "/usr/lib/python3.6/", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/", line 85, in _run_code
    exec(code, run_globals)
  File "apps/", line 36, in
  File "apps/", line 28, in main
  File "utils/", line 555, in evaluate_mmar
  File "workflows/evaluators/", line 393, in evaluate
  File "components/metrics/", line 117, in generate_report
  File "libs/metrics/", line 68, in generate_report
  File "libs/metrics/", line 54, in get
IndexError: index 1 is out of bounds for axis 1 with size 1
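The error seems to come from indexing a column that does not exist. Here is a minimal numpy reproduction of the mechanism as I understand it (my own guess, not the actual library code):

```python
import numpy as np

# If the metric expects per-class columns but the array only has one
# column, asking for column index 1 raises the same IndexError.
predictions = np.array([[0.7], [0.2], [0.9]])  # shape (3, 1)

try:
    predictions[:, 1]  # axis 1 only has size 1
except IndexError as e:
    print(e)  # index 1 is out of bounds for axis 1 with size 1
```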

When I remove ComputeAUC and keep only ComputeMulticlassAverage, it works fine!
Could you also explain how best to debug these issues myself instead of relying heavily on this forum? For example, how can I debug nvmidl.apps.evaluate?
Thank you!

It seems you have missed the class_index parameter in ComputeAUC, as shown here.
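For example, the ComputeAUC entry might look like this with class_index added (the value 1 is only illustrative; use the index of the class whose AUC you want):

```json
{"name": "ComputeAUC",
 "args": {
   "name": "Average_AUC",
   "field": "model",
   "label_field": "label",
   "class_index": 1,
   "auc_average": "macro",
   "report_path": "{MMAR_EVAL_OUTPUT_PATH}"}}
```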
I think you are on the right track debugging by isolating the issue.
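On the debugging question: one general-purpose approach (not Clara-specific; a sketch assuming a standard Python 3 install) is to run the entry module yourself and drop into pdb at the crash site instead of just reading the printed traceback:

```python
import pdb
import runpy
import sys

def run_module_with_post_mortem(module_name):
    """Run a module as if via `python -m`; on failure, open pdb at the crash."""
    try:
        runpy.run_module(module_name, run_name="__main__", alter_sys=True)
    except Exception:
        # Drops you into the frame that raised, e.g. inside libs/metrics,
        # where you can inspect local variables and array shapes.
        pdb.post_mortem(sys.exc_info()[2])

# e.g. run_module_with_post_mortem("nvmidl.apps.evaluate")
```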

Hi, I understand from NGC's clara_xray_classification_chest_amp train.json that "class_index" should be the index into the one-hot encoding (e.g. class_index = 2 for the label [0, 0, 1, 0]). What would that value be when working with an N-hot encoding?
I see that "ComputeMulticlassAverage" from "ai4med.components.metrics" has "label_index". Am I correct in saying that "ComputeAUC" can only work with one-hot encoding, and "ComputeMulticlassAverage" with N-hot encoding?
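To make the distinction concrete, here is a small numpy sketch (my own illustration, not ai4med code): with one-hot labels, a single column index picks out the class to score, while with N-hot labels each column is an independent binary problem that can be scored separately:

```python
import numpy as np

def binary_auc(y_true, y_score):
    """AUC for one binary column via the Mann-Whitney rank statistic."""
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)  # rank 1 = lowest score
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

scores = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.4, 0.6]])

# One-hot labels: exactly one 1 per row -> pick one column via class_index.
one_hot = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])
class_index = 1
print(binary_auc(one_hot[:, class_index], scores[:, class_index]))  # 1.0

# N-hot labels: rows may contain several 1s -> score each column on its own.
n_hot = np.array([[1, 1], [0, 1], [1, 0], [0, 0]])
per_class = [binary_auc(n_hot[:, k], scores[:, k]) for k in range(2)]
print(per_class)  # [1.0, 0.5]
```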