I am new to this environment. I want to train and test a model using NVIDIA DeepStream for OCR (seven-segment value recognition in digital metres), but it is hard for me to find the correct path. I want to train the model in Google Colab and infer on the Windows platform, because a GPU is not available locally. I would also like some clarity regarding selecting the model and the steps to train it in Google Colab. Could anyone guide me on the right path?
I found https://developer.nvidia.com/blog/create-custom-character-detection-and-recognition-models-with-nvidia-tao-part-1/
helpful, but I am a little bit confused about the structure of the dataset.
In the original doc:
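My reading of the expected layout is a flat folder of cropped images plus a ground-truth list with one "<image name> <label>" pair per line (this format is my understanding of what dataset_convert expects; the folder and file names below match the variables used later):

$DATA_DIR/train/
    0001.jpg
    0002.jpg
    ...
    gt_new.txt

where gt_new.txt would look something like:

0001.jpg 0523.7
0002.jpg 18842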
Thanks for replying.
For seven-segment recognition, which one is best: LPRNet or OCRNet?
Also, in Google Colab, can I directly execute the commands from the link above, or is any setup required for the TAO Toolkit?
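What I did for setup so far is just installing the TAO launcher via pip (assuming nvidia-tao is the right package for the launcher):

!pip3 install nvidia-tao

After that I ran the dataset conversion cell from the notebook: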
# Convert the raw train dataset to lmdb
print("Converting the training set to LMDB.")
!tao model ocrnet dataset_convert -e $SPECS_DIR/experiment.yaml \
dataset_convert.input_img_dir=$DATA_DIR/train \
dataset_convert.gt_file=$DATA_DIR/train/gt_new.txt \
dataset_convert.results_dir=$DATA_DIR/train/lmdb
Converting the training set to LMDB.
Traceback (most recent call last):
File "/usr/local/bin/tao", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/dist-packages/nvidia_tao_cli/entrypoint/tao_launcher.py", line 134, in main
instance.launch_command(
File "/usr/local/lib/python3.10/dist-packages/nvidia_tao_cli/components/instance_handler/local_instance.py", line 356, in launch_command
docker_logged_in(required_registry=task_map[task].docker_registry)
File "/usr/local/lib/python3.10/dist-packages/nvidia_tao_cli/components/instance_handler/utils.py", line 151, in docker_logged_in
data = load_config_file(docker_config)
File "/usr/local/lib/python3.10/dist-packages/nvidia_tao_cli/components/instance_handler/utils.py", line 84, in load_config_file
assert os.path.exists(config_path), (
AssertionError: Config path must be a valid unix path. No file found at: /root/.docker/config.json. Did you run docker login?
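The assertion at the end says it: the launcher drives Docker and could not find /root/.docker/config.json. The usual fix (assuming you have an NGC account and API key) is to log in to the NGC registry first:

$ docker login nvcr.io
Username: $oauthtoken
Password: <your NGC API key>

Note that a stock Colab runtime has no Docker daemon, so even after logging in, the launcher route may not work there.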
For drawing bounding boxes on images, you can use OCDNet. OCDNet will detect all the characters in the image. With this approach, you will train both OCDNet and OCRNet, then run inference as mentioned in the blog.
For the second approach, if you have (or will generate) a dataset with the coordinates of the display panel in digital metres, you can instead use a detection network (such as YOLOv4) to detect where the display panel is, which looks like the yellow bbox you shared. After that, you can run LPRNet or OCRNet to recognize the digital values.
Really, the scenario I am going through is difficult for me …
Can you please guide me along the second approach to detect the seven-segment display in digital metres?
I mean, what steps do I have to follow?
I have 200 images of digital metre displays.
Detecting the seven-segment display in digital metres is an object detection task. You can use the YOLOv4 network.
You can label your 200 images with, for example, labelme or another tool.
YOLOv4 expects the KITTI label format mentioned in Data Annotation Format - NVIDIA Docs. For example, see the sample label line below.
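A minimal sketch of one KITTI-style label line, assuming a single class named panel (TAO detection training only reads the class name and the four bbox values xmin, ymin, xmax, ymax; the remaining fields can stay zero):

panel 0.00 0 0.00 712.40 143.00 810.73 307.92 0.00 0.00 0.00 0.00 0.00 0.00 0.00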
You can use the tao launcher mentioned in the notebook.
Or you can run the container directly with docker run:
$ docker run --runtime=nvidia -it --rm nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash
Then, inside the docker container, run the commands without the tao prefix, e.g. # yolo_v4 train xxx
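A sketch of what the full command inside the container could look like (the spec file, results directory, and encryption key below are placeholders of mine, not values from the blog):

# yolo_v4 train -e /workspace/specs/yolo_v4_train.txt \
                -r /workspace/results/yolo_v4 \
                -k <your_model_key> \
                --gpus 1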
I have trained YOLOv5 on my custom data in Google Colab, without using NVIDIA TAO, through the Ultralytics YOLOv5 notebook. I am getting bounding boxes around the seven-segment values.
Now I want to recognize the numbers inside the bounding boxes using the TAO Toolkit in Colab. Note: I have the best.pt and last.pt files in hand.
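My rough plan, as a sketch (--save-crop is the yolov5 detect.py flag that saves each detected box as its own image under runs/detect/exp/crops/<class_name>/; the OCRNet inference parameter names are my guess from the TAO 5.0 notebook, so please correct me if they are wrong):

# 1. In the yolov5 repo: run detection and save one crop per bounding box
!python detect.py --weights best.pt --source /content/meter_images --save-crop --conf 0.4

# 2. Run TAO OCRNet inference on the cropped digit images
!tao model ocrnet inference -e $SPECS_DIR/experiment.yaml \
    inference.checkpoint=$RESULTS_DIR/train/best_accuracy.pth \
    inference.inference_dataset_dir=/content/yolov5/runs/detect/exp/crops/<class_name>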
Thank you