Clarification on how the ReIdentificationNet model handles embeddings

How are the vector embeddings generated for an object stored by the model, so that the same object can be identified when it appears again in a scene?

I could not find anything about how storage is handled, whether it uses a vector database or something else.

You can try running the ReID notebook to get started: tao_tutorials/notebooks/tao_launcher_starter_kit/re_identification_net at d7ff531031564d58d63e586cd8da554e11c252e7 · NVIDIA/tao_tutorials · GitHub
ReIdentificationNet takes cropped images of a person from different perspectives as network input and outputs the embedding features for that person. More info can be found in ReIdentificationNet - NVIDIA Docs or in the source code at tao_pytorch_backend/nvidia_tao_pytorch/cv/re_identification at 99e0a38a0d3ac00997c41c7e6ea6f02c6586bf4f · NVIDIA/tao_pytorch_backend · GitHub.
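As a rough sketch of that input/output relationship (assuming a ReIdentificationNet model exported to ONNX; the file names, 256x128 input size, and ImageNet normalization values below are assumptions for illustration, not taken from the docs):

```python
# Minimal sketch: run an exported ReIdentificationNet ONNX model on one cropped
# person image and get its embedding vector. Model path, input size (256x128),
# and normalization values are assumptions, not the documented configuration.
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("reidentificationnet.onnx")  # hypothetical export
input_name = session.get_inputs()[0].name

# Preprocess a single person crop to NCHW float32.
crop = Image.open("person_crop.jpg").convert("RGB").resize((128, 256))  # W x H
x = np.asarray(crop, dtype=np.float32) / 255.0
x = (x - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]  # ImageNet stats (assumed)
x = x.transpose(2, 0, 1)[None].astype(np.float32)

embedding = session.run(None, {input_name: x})[0][0]  # one embedding vector per crop
print(embedding.shape)
```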

This is just for an initial investigation. Can I get some insight into how and where the embeddings are stored and retrieved when inference happens?

You can take a look at ReIdentificationNet - NVIDIA Docs.

I looked there and found a mention of a JSON file. Is this relevant?

Yes, the embedding features are stored in the output JSON file.
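For reference, a minimal sketch of reading the embeddings back from that output JSON (the exact key names such as `img_path` and `embedding` are assumptions; inspect the file your inference run actually produces):

```python
# Minimal sketch: load embeddings from the inference output JSON.
# The key names ("img_path", "embedding") are assumptions; confirm them
# against your actual output file.
import json
import numpy as np

with open("inference_output.json") as f:
    records = json.load(f)

paths = [r["img_path"] for r in records]
embeddings = np.array([r["embedding"] for r in records], dtype=np.float32)
print(embeddings.shape)  # (num_crops, embedding_dim)
```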

So, this file will serve as a vector store, right?

Just to clarify.

Yes. You can also use TAO Toolkit Triton Apps to run end-to-end inference. The CMC curve is also printed. Refer to tao-toolkit-triton-apps/scripts/re_id_e2e_inference/plot_e2e_inference.py at main · NVIDIA-AI-IOT/tao-toolkit-triton-apps · GitHub as well.
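If you do treat that JSON file as a simple vector store, the re-identification step is just a nearest-neighbour search over the stored embeddings. A minimal sketch using cosine similarity (all names are hypothetical; the gallery arrays would come from the output JSON and the query embedding from a fresh forward pass):

```python
# Minimal sketch: use the stored embeddings as a flat "vector store" and
# re-identify a query crop by cosine similarity against the gallery.
import numpy as np

def normalize(v):
    return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-12)

def top_matches(query_embedding, gallery_embeddings, gallery_paths, k=5):
    # Cosine similarity of the query against every gallery embedding.
    sims = normalize(gallery_embeddings) @ normalize(query_embedding)
    order = np.argsort(-sims)[:k]
    return [(gallery_paths[i], float(sims[i])) for i in order]

# Example with random placeholder data:
gallery_embeddings = np.random.randn(100, 256).astype(np.float32)
gallery_paths = [f"crop_{i}.jpg" for i in range(100)]
query_embedding = np.random.randn(256).astype(np.float32)
print(top_matches(query_embedding, gallery_embeddings, gallery_paths))
```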
