Inconsistent output from nemoretriever-table-structure-v1 docker image compared to API

I was trying to benchmark the nemoretriever-table-structure-v1 model using the Docker image. However, I got very conflicting results, even with the demo images provided on the model's page.

Two issues were obvious: the quality of the detected rows and columns (and their confidence scores), and, oddly, the absence of a "cell" key in the response.

For example, here's the demo output (API):

But this was the output I got from the Docker image (blue = row, green = column):

I was running the Docker image on an RTX 3060, but I wouldn't expect hardware alone to produce such wildly varying results for the same input image.

I would appreciate some help figuring out what might be causing this, and how to match the API output (or get close to it) while using the Docker image.

EDIT: I used the Hugging Face model together with its post-processing script, and that worked fine. I am guessing the Docker image skips those post-processing steps. It would be nice if the docs mentioned this, since following the steps under the "Deploy" tab produces output that is nearly useless compared to the cloud API.
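For anyone hitting the same issue: the missing "cell" entries are typically derived in post-processing by intersecting the detected row boxes with the detected column boxes. A minimal sketch of that idea follows; the `(x1, y1, x2, y2)` box format and the function name are my assumptions for illustration, not the model's actual output schema or script.

```python
def cells_from_rows_cols(rows, cols):
    """Derive cell boxes by intersecting each row box with each column box.

    Boxes are (x1, y1, x2, y2) with x2 > x1 and y2 > y1. Returns one cell
    per row/column pair whose intersection is non-empty.
    """
    cells = []
    for rx1, ry1, rx2, ry2 in rows:
        for cx1, cy1, cx2, cy2 in cols:
            # Intersection rectangle of the row and column boxes
            x1, y1 = max(rx1, cx1), max(ry1, cy1)
            x2, y2 = min(rx2, cx2), min(ry2, cy2)
            if x1 < x2 and y1 < y2:  # keep only non-degenerate overlaps
                cells.append((x1, y1, x2, y2))
    return cells

# Two rows crossed with two columns yield four cells
rows = [(0, 0, 100, 20), (0, 20, 100, 40)]
cols = [(0, 0, 50, 40), (50, 0, 100, 40)]
print(len(cells_from_rows_cols(rows, cols)))  # 4
```

The real post-processing also handles confidence thresholds and overlapping detections, but this is the core reason why raw row/column detections alone, without that step, never contain a "cell" key.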

Hi MbatuhanC,

Thanks for posting on the forums!

Glad you were able to find a solution.

Best,

Aharpster