How to build an interactive UI on top of an NVIDIA TAO-trained model that is in ".hdf5" format

My aim is to use a yolo_v4 object detection model that was trained in the TAO framework and saved in ".hdf5" format. It gives good results when running inference inside a Jupyter notebook. How can I build an interactive UI where I input an image of my choice, and the model performs object detection and returns the image with bounding boxes, class names, and confidence scores?

Are there any dependencies on the TAO framework? Do I have to use the NVIDIA platform only, or can I use any platform? Basically, how do I export my model from the TAO framework and integrate it into an interactive UI outside TAO? Can I use it directly in ".hdf5" format, or do I have to convert it to another format?

In summary, I need guidance on converting the model to a compatible format and building an interactive UI to deploy it outside the TAO framework.

Yes, you can. The topic becomes "deploy a Keras model in an interactive UI". You can find guides by searching for that.
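As a starting point, here is a minimal sketch of what loading the ".hdf5" checkpoint outside TAO could look like. Note the hedges: TAO YOLOv4 checkpoints may contain custom layers, so a plain `tf.keras.models.load_model` call can fail outside the TAO container, and the path `model.hdf5` plus the 416x416 input size below are assumptions, not values from this thread.

```python
# Sketch only: loading a TAO-exported ".hdf5" checkpoint with plain Keras.
# The model path and input size are placeholders (assumptions).
import numpy as np

def preprocess(image, size=(416, 416)):
    """Normalize an HxWx3 uint8 image to an NCHW float32 batch.

    Resizing to `size` is omitted here for brevity; a real pipeline
    must resize/letterbox the image to the model's input resolution.
    """
    x = image.astype(np.float32) / 255.0   # scale pixels to [0, 1]
    x = np.transpose(x, (2, 0, 1))         # HWC -> CHW
    return np.expand_dims(x, axis=0)       # add batch dim -> NCHW

if __name__ == "__main__":
    # Loading the checkpoint (commented out: requires TensorFlow and may
    # need custom_objects for TAO-specific layers):
    # import tensorflow as tf
    # model = tf.keras.models.load_model("model.hdf5", compile=False)
    dummy = np.zeros((416, 416, 3), dtype=np.uint8)
    print(preprocess(dummy).shape)
```

If the direct Keras load fails on custom layers, that is usually the signal to export the model to ONNX instead, as discussed below in the thread.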

Yes, you can run it outside the NVIDIA platform.

You can run it in ONNX format. The topic becomes "deploy an ONNX model in an interactive UI". You can also find guides by searching for that.
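To make that concrete, a hedged sketch of an interactive UI around an ONNX export is below. The file name `yolov4.onnx`, the use of Gradio, and the decode step are all assumptions for illustration; a real TAO YOLOv4 export has its own model-specific output decoding. Only the confidence-filtering helper is fully implemented here.

```python
# Sketch: filtering detections and wiring an ONNX model into a simple UI.
# "yolov4.onnx" and the decode step are placeholders, not from this thread.
def filter_detections(boxes, scores, labels, conf_threshold=0.5):
    """Keep only detections whose confidence score meets the threshold."""
    keep = [i for i, s in enumerate(scores) if s >= conf_threshold]
    return ([boxes[i] for i in keep],
            [scores[i] for i in keep],
            [labels[i] for i in keep])

def detect(image):
    # Inference itself (commented out: requires `pip install onnxruntime`):
    # import onnxruntime as ort
    # sess = ort.InferenceSession("yolov4.onnx")
    # outputs = sess.run(None, {sess.get_inputs()[0].name: image})
    # boxes, scores, labels = decode(outputs)  # model-specific postprocess
    # return draw_boxes(image, *filter_detections(boxes, scores, labels))
    ...

# A minimal web UI (commented out: requires `pip install gradio`):
# import gradio as gr
# gr.Interface(fn=detect,
#              inputs=gr.Image(type="numpy"),
#              outputs=gr.Image(type="numpy")).launch()
```

With this pattern, the UI has no dependency on TAO at runtime: only the exported ".onnx" file, an ONNX runtime, and the UI library are needed.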

I am using the model in ".onnx" format, and I am getting this error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/yolo_v4/scripts/inference.py", line 252, in <module>
    main()
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/common/utils.py", line 717, in return_func
    raise e
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/common/utils.py", line 705, in return_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/yolo_v4/scripts/inference.py", line 248, in main
    raise e
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/yolo_v4/scripts/inference.py", line 236, in main
    inference(args)
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/yolo_v4/scripts/inference.py", line 126, in inference
    os.makedirs(arguments.results_dir)
  File "/usr/lib/python3.8/os.py", line 213, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/usr/lib/python3.8/os.py", line 213, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/usr/lib/python3.8/os.py", line 213, in makedirs
    makedirs(head, exist_ok=exist_ok)
  [Previous line repeated 5 more times]
  File "/usr/lib/python3.8/os.py", line 223, in makedirs
    mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/home/ubuntu'
Execution status: FAIL

Subprocess error:
2024-06-14 18:28:09,737 [TAO Toolkit] [INFO] root 160: Registry: ['nvcr.io']
2024-06-14 18:28:09,804 [TAO Toolkit] [INFO] nvidia_tao_cli.components.instance_handler.local_instance 361: Running command in container: nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
2024-06-14 18:28:09,823 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 301: Printing tty value True
2024-06-14 18:28:20,257 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 363: Stopping container.

Inference completed successfully!

Output image not found.

Failed to generate inference result.

Could you check the access permissions? Please check ~/.tao_mounts.json:

  1. Check the path mapping.
  2. Or try to remove
    "DockerOptions": {
      "user": "1000:1000"
    }
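For reference, a ~/.tao_mounts.json typically looks like the sketch below. The mount paths here are placeholders (assumptions, not taken from this thread); the point is that the `source` directory must be writable by the user set in `DockerOptions`, or that `user` entry removed so the container runs with its default user.

```json
{
    "Mounts": [
        {
            "source": "/home/<local-user>/tao-experiments",
            "destination": "/workspace/tao-experiments"
        }
    ],
    "DockerOptions": {
        "user": "1000:1000"
    }
}
```

The `PermissionError` on `/home/ubuntu` above suggests the in-container user cannot create the configured `results_dir`, which is exactly what a wrong mapping or user setting here would produce.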

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.