Hi,
I am trying for the first time to run the object detection program from the jetson-inference library (coding your own object detection program) with an NVIDIA Jetson Nano and a Logitech C270 HD Webcam, and I am really struggling to make it work. I have several issues:
First, when I try to execute “docker/run.sh” in the jetson-inference folder, I get the following error:
“docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: error adding seccomp filter rule for syscall clone3: permission denied: unknown.”
Does anyone know how to fix this?
A second problem I am having is when I try to run the Python code shown in [jetson-inference/detectnet-example-2.md at master · dusty-nv/jetson-inference · GitHub]. When I do, everything seems to work except the camera display; at startup it says “nvbuf_utils: Could not get EGL display connection”. I already have the newest version of GStreamer, if that has something to do with it.
The docker error is usually caused by an OS incompatibility between the container and the native Jetson.
Could you share which JetPack version you are using?
If you are not using JetPack 4.6.2, would you mind giving it a try?
For the EGL issue, please try setting the DISPLAY environment variable to see if it works.
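For example, one way to try it from inside the Python script itself, before jetson.utils opens a display (':0' is an assumption here, the actual value depends on your X server setup):

import os

# point at the Jetson's local X server before any display/EGL context is created
os.environ.setdefault('DISPLAY', ':0')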
Hi @edgarvivas2003, this issue happens if your containerd package got upgraded (i.e. during an apt-upgrade) and is unrelated to the jetson-inference container itself. Can you try one of these suggestions to fix your docker runtime?
Hi @dusty_nv , I will definitely try the docker suggestions.
I have my laptop connected to the Jetson Nano and displaying it using NoMachine, so that I can see the actual Jetson Nano screen.
OK, the issue is that jetson-inference/jetson-utils uses OpenGL+CUDA interoperability for displaying the video on-screen, so its GUI doesn't work over tunneled connections or headless remote desktop.
Hi @dusty_nv ,
Everything ended up working thanks to you. Now I would like to train my own model. I already collected the data, but I cannot train it. I checked, and the error is that I do not have test.txt and val.txt in the ImageSets folder. How can I get them?
Hi @edgarvivas2003, I would just copy your train.txt to test.txt and val.txt. Normally you would want to have independent train/val/test sets, but just for a personal model it’s fine.
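If it helps, here is a minimal sketch of that copy in Python, assuming your dataset follows the Pascal VOC layout (the dataset path is a placeholder, adjust it to yours):

import shutil

imagesets = 'data/my-dataset/ImageSets/Main'   # placeholder path, adjust to your dataset
for split in ('test.txt', 'val.txt'):
    shutil.copyfile(f'{imagesets}/train.txt', f'{imagesets}/{split}')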
I am trying to make an articulated camera that automatically points at specific objects, so I would need to get something like the coordinates from the display.
OK gotcha - the detectNet.Detect() function returns a list of detectNet.Detection objects:
Detection = <type 'jetson.inference.detectNet.Detection'>
Object Detection Result
----------------------------------------------------------------------
Data descriptors defined here:
    Area       - Area of bounding box
    Bottom     - Bottom bounding box coordinate
    Center     - Center (x,y) coordinate of bounding box
    ClassID    - Class index of the detected object
    Confidence - Confidence value of the detected object
    Height     - Height of bounding box
    Instance   - Instance index of the detected object
    Left       - Left bounding box coordinate
    Right      - Right bounding box coordinate
    Top        - Top bounding box coordinate
    Width      - Width of bounding box
You can iterate through them like this:
detections = net.Detect(img)
for detection in detections:
    print(f'detected object -- ClassID={detection.ClassID} Left={detection.Left} Right={detection.Right} Top={detection.Top} Bottom={detection.Bottom}')
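Since your goal is an articulated camera that points at objects, here is a minimal sketch of using detection.Center for that (the stream URIs and the pan/tilt math are placeholder assumptions, not part of jetson-inference):

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet('ssd-mobilenet-v2', threshold=0.5)
camera = jetson.utils.videoSource('/dev/video0')    # e.g. the C270 over V4L2
display = jetson.utils.videoOutput('display://0')

while display.IsStreaming():
    img = camera.Capture()
    for detection in net.Detect(img):
        # pixel offset of the detection from the image center --
        # feed this into your pan/tilt controller (placeholder logic)
        cx, cy = detection.Center
        error_x = cx - img.width / 2
        error_y = cy - img.height / 2
        print(f'ClassID={detection.ClassID} offset=({error_x:.0f}, {error_y:.0f}) px')
    display.Render(img)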
Hi @dusty_nv , I have another question. Would it be possible to run two types of inference simultaneously, like object detection and pose estimation for example?
Yes, you would just create a Python script that loads both of those networks at the beginning, and then calls one after the other in the camera processing loop. You may want to make a copy of the input image or turn off the overlay function (i.e. for detectNet) so the poseNet operates on the original image.
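A minimal sketch of that pattern, assuming the standard jetson-inference Python bindings (the model names, threshold, and stream URIs are just examples):

import jetson.inference
import jetson.utils

det_net  = jetson.inference.detectNet('ssd-mobilenet-v2', threshold=0.5)
pose_net = jetson.inference.poseNet('resnet18-body')

camera  = jetson.utils.videoSource('/dev/video0')
display = jetson.utils.videoOutput('display://0')

while display.IsStreaming():
    img = camera.Capture()
    # run detection with its overlay disabled, so poseNet still sees
    # the original pixels rather than the drawn boxes
    detections = det_net.Detect(img, overlay='none')
    poses = pose_net.Process(img)    # poseNet draws its own overlay by default
    display.Render(img)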
Hi @dusty_nv , I am facing a new issue. I had a few problems with the training script. Basically, I deleted some pictures and their respective annotations, but the model is still trained on them. Is there any way to untrain it?
And if I want to add more objects, do I have to delete the saved ones and start over, or does it automatically train on the new ones?
You would just re-train the model with the training script, then use the new model. If you use the same model dir, delete the .engine file from it so it gets re-generated when you run detectnet/detectnet.py again.
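For example, a small sketch of clearing that cached engine (the model directory is a placeholder, adjust it to yours):

import glob, os

# remove the cached TensorRT engine so it gets re-built from the re-trained model
for engine in glob.glob('models/my-model/*.engine'):
    os.remove(engine)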
You would just re-train on the whole dataset again, like you would if you had deleted some images. Technically you could create a separate dataset out of just your new images and use your previous model as the starting point, but I think it would be best to re-train over the entire dataset. You could try passing the --resume or --pretrained-ssd arguments to train_ssd.py to load your previous model as the starting point (or just re-train from the beginning, which is what I normally do).