Using a Jetson Nano, I trained my own object-detection model on top of a pretrained model, but it fails with the error below:

manpreet@manpreet-desktop:~/jetson-inference/python/training/detection/ssd$ python3 train_ssd.py --dataset-type=voc --data=jetson-inference/myTrain2 --model-dir=mymodel2 --batch-size=2 --num-workers=1 --epochs=1
2022-06-08 18:15:13 - Using CUDA…
2022-06-08 18:15:13 - Namespace(balance_data=False, base_net=None, base_net_lr=0.001, batch_size=2, checkpoint_folder='mymodel2', dataset_type='voc', datasets=['jetson-inference/myTrain2'], debug_steps=10, extra_layers_lr=None, freeze_base_net=False, freeze_net=False, gamma=0.1, lr=0.01, mb2_width_mult=1.0, milestones='80,100', momentum=0.9, net='mb1-ssd', num_epochs=1, num_workers=1, pretrained_ssd='models/mobilenet-v1-ssd-mp-0_675.pth', resume=None, scheduler='cosine', t_max=100, use_cuda=True, validation_epochs=1, weight_decay=0.0005)
2022-06-08 18:15:13 - Prepare training datasets.
Segmentation fault (core dumped)

Please verify it and kindly suggest a fix.

Hi,

Welcome to the NVIDIA Developer forums. This needs to be posted in the Jetson Nano category for support to have visibility, I am moving it over for you.

Hi @jino.joy.m, are you able to run the following command, or do you get the same segmentation fault?

$ python3 -c 'import numpy'

If so, can you try this fix? https://github.com/numpy/numpy/issues/18131#issuecomment-756140369
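For reference, the workaround discussed in that numpy issue thread is, as far as I recall (treat this as an assumption and check the linked comment), to force OpenBLAS onto a generic ARMv8 kernel before importing numpy:

```shell
# Assumed workaround from the linked numpy issue: force OpenBLAS to use
# a generic ARMv8 kernel instead of autodetecting the Jetson's CPU.
export OPENBLAS_CORETYPE=ARMV8
python3 -c 'import numpy; print(numpy.__version__)'
```

Adding the `export` line to `~/.bashrc` makes it persistent across sessions.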

Yes I’m able to run the command

Hmm okay. Are you able to run the other train_ssd.py training examples from the jetson-inference tutorial, or does this error just occur with your myTrain2 dataset?

Are you able to try the jetson-inference container to see if you get this error while running train_ssd.py from within the container?

Thank you. It actually worked when run within the container and detected the items.

But I faced another issue when exporting the model to ONNX.
The output is as follows:
root@manpreet-desktop:/jetson-inference/python/training/detection/ssd# python3 onnx_export.py --model-dir=models/items2
Namespace(batch_size=1, height=300, input='', labels='labels.txt', model_dir='models/items2', net='ssd-mobilenet', output='', width=300)
running on device cuda:0
found best checkpoint with loss 10000.000000 ()
creating network: ssd-mobilenet
num classes: 4
loading checkpoint: models/items2/
Traceback (most recent call last):
  File "onnx_export.py", line 86, in <module>
    net.load(args.input)
  File "/jetson-inference/python/training/detection/ssd/vision/ssd/ssd.py", line 135, in load
    self.load_state_dict(torch.load(model, map_location=lambda storage, loc: storage))
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 594, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 230, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 211, in __init__
    super(_open_file, self).__init__(open(name, mode))
IsADirectoryError: [Errno 21] Is a directory: 'models/items2/'

This is a different model, not the earlier one. The earlier one actually worked.

Hi @jino.joy.m, can you do ls models/items2 ? Does it have .pth files in it?

root@manpreet-desktop:/jetson-inference/python/training/detection/ssd# ls models/items2
labels.txt mb1-ssd-Epoch-0-Loss-nan.pth

This is the result.

Is there any problem? Can you suggest a solution? Kindly please reply, sir.

OK, since you trained for only one epoch and it produced a model with an invalid loss (NaN), you need to manually specify the --input and --output arguments to onnx_export.py (https://github.com/dusty-nv/pytorch-ssd/blob/3f9ba554e33260c8c493a927d7c4fdaa3f388e72/onnx_export.py#L20):

python3 onnx_export.py --input=models/items2/mb1-ssd-Epoch-0-Loss-nan.pth --output=models/items2/ssd-mobilenet.onnx

However, since the model’s loss is NaN, it means that it wasn’t trained to convergence and may not work well for detecting objects.

root@manpreet-desktop:/jetson-inference/python/training/detection/ssd# python3 onnx_export.py --input=models/items2/mb1-ssd-Epoch-0-Loss-nan.pth --output=models/items2/ssd-mobilenet.onnx
Namespace(batch_size=1, height=300, input='models/items2/mb1-ssd-Epoch-0-Loss-nan.pth', labels='labels.txt', model_dir='', net='ssd-mobilenet', output='models/items2/ssd-mobilenet.onnx', width=300)
running on device cuda:0
Traceback (most recent call last):
  File "onnx_export.py", line 62, in <module>
    class_names = [name.strip() for name in open(args.labels).readlines()]
FileNotFoundError: [Errno 2] No such file or directory: 'labels.txt'

This was the result. Is there any remedy?

Sorry, please also specify the --labels option like so:

$ python3 onnx_export.py --input=models/items2/mb1-ssd-Epoch-0-Loss-nan.pth --labels=models/items2/labels.txt --output=models/items2/ssd-mobilenet.onnx

Thank you, sir, I got this. Could you suggest a method to find the center coordinates of the bounding boxes in the above detection program (or any detection program)? Kindly please reply.

Each detection result in the list of detections returned from net.Detect() has the following members, including the center coordinate:

Detection = <type 'jetson.inference.detectNet.Detection'>
Object Detection Result
 
----------------------------------------------------------------------
Data descriptors defined here:
 
Area
    Area of bounding box
 
Bottom
    Bottom bounding box coordinate
 
Center
    Center (x,y) coordinate of bounding box
 
ClassID
    Class index of the detected object
 
Confidence
    Confidence value of the detected object
 
Height
    Height of bounding box
 
Instance
    Instance index of the detected object
 
Left
    Left bounding box coordinate
 
Right
    Right bounding box coordinate
 
Top
    Top bounding box coordinate
 
Width
     Width of bounding box

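In other words, for each `d` in the list returned by `net.Detect()`, `d.Center` gives the (x, y) midpoint of the box, which is the same as averaging the Left/Right and Top/Bottom edges. A minimal runnable sketch (using a plain stand-in class so it runs anywhere; on the Jetson the real Detection objects already expose these fields):

```python
# Sketch: the Center member is the midpoint of the bounding-box edges.
# "Box" here is a stand-in for jetson.inference.detectNet.Detection.
from dataclasses import dataclass

@dataclass
class Box:
    Left: float
    Top: float
    Right: float
    Bottom: float

    @property
    def Center(self):
        # midpoint of the horizontal and vertical edges
        return ((self.Left + self.Right) / 2.0, (self.Top + self.Bottom) / 2.0)

box = Box(Left=100, Top=50, Right=300, Bottom=250)
print(box.Center)  # (200.0, 150.0)
```

On the device itself, the equivalent is simply `cx, cy = detection.Center` inside the loop over `net.Detect(img)`.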
Thank you, sir, I got it. Sir, can you tell me a method to delay the frame for 2 or 3 seconds after an object is detected by a bounding box in the above program?

Do you mean store the frames of the detected objects? You can see this detectnet-snap.py example that saves the images of the detected objects: https://github.com/dusty-nv/jetson-inference/blob/master/python/examples/detectnet-snap.py
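If the goal is literally to hold for a few seconds after a detection, one simple approach is a `time.sleep()` inside the usual capture/detect/render loop. Here is a runnable sketch using stand-in callables (on the Jetson, `camera.Capture`, `net.Detect`, and `display.Render` from jetson.utils / jetson.inference would be passed in; those names are assumptions about the reader's setup):

```python
# Sketch: pause the loop for a few seconds whenever something is detected.
import time

def run_with_pause(capture, detect, render, pause_s=3.0, max_frames=10, sleep=time.sleep):
    """Capture/detect/render loop that sleeps pause_s seconds after any detection."""
    pauses = 0
    for _ in range(max_frames):
        img = capture()            # grab a frame
        detections = detect(img)   # run the detector
        render(img)                # show the frame
        if detections:             # an object was detected:
            sleep(pause_s)         # hold for pause_s seconds
            pauses += 1
    return pauses

# Stub demo: "detect" fires on every other frame; sleep is stubbed out
# so the demo finishes instantly.
frames = iter(range(10))
count = run_with_pause(
    capture=lambda: next(frames),
    detect=lambda img: ["obj"] if img % 2 == 0 else [],
    render=lambda img: None,
    sleep=lambda s: None,
)
print(count)  # 5
```

Note that sleeping blocks the whole loop, so the camera feed will freeze during the pause; that matches the "delay the frame" behavior asked about.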

Sir,
I have to run a servo using the Jetson Nano, so I installed the Adafruit ServoKit library. But when running the program, the error below appears. Could you please suggest a solution? (Note the attribute is lowercase `servo` in adafruit_servokit.)
from adafruit_servokit import ServoKit
myKit = ServoKit(channels=16)
myKit.servo[0].angle = 90

manpreet@manpreet-desktop:~/Desktop/pyPro$ /usr/bin/python3 "/home/manpreet/Desktop/pyPro/jino joy/Servo.py"
Traceback (most recent call last):
  File "/home/manpreet/Desktop/pyPro/jino joy/Servo.py", line 1, in <module>
    from adafruit_servokit import ServoKit
  File "/usr/local/lib/python3.6/dist-packages/adafruit_servokit.py", line 35, in <module>
    import board
  File "/usr/local/lib/python3.6/dist-packages/board.py", line 39, in <module>
    import adafruit_platformdetect.constants.boards as ap_board
  File "/usr/local/lib/python3.6/dist-packages/adafruit_platformdetect/__init__.py", line 10, in <module>
    from .board import Board
  File "/usr/local/lib/python3.6/dist-packages/adafruit_platformdetect/board.py", line 24
    from __future__ import annotations
    ^
SyntaxError: future feature annotations is not defined

Kindly please reply.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.