Problems with Jetson-inference

I am trying for the first time to run the object detection example from the jetson-inference library (“Coding Your Own Object Detection Program”) on an NVIDIA Jetson Nano with a Logitech C270 HD webcam, and I am really struggling to make it work. I have several issues.
First, when I try to execute “docker/” in the jetson-inference folder, I get the following error:
“docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: error adding seccomp filter rule for syscall clone3: permission denied: unknown”

Does anyone know how to fix this?

A second problem occurs when I try to run the Python code shown in [jetson-inference/ at master · dusty-nv/jetson-inference · GitHub]. Everything seems to work except the camera display; at startup it prints “nvbuf_utils: Could not get EGL display connection”. I already have the newest version of GStreamer, if that has anything to do with it.

How could I solve this too?

Any help would be really appreciated.

Thank you.


The docker error is usually caused by an OS incompatibility between the container and the native Jetson image.
Could you share which JetPack version you are using?
If you are not on JetPack 4.6.2, would you mind giving it a try?

For the EGL issue, please try setting the DISPLAY environment variable to see if that works:

$ export DISPLAY=:0  # or :1


Hi @edgarvivas2003, this issue happens if your containerd package got upgraded (i.e. during an apt-upgrade) and is unrelated to the jetson-inference container itself. Can you try one of these suggestions to fix your docker runtime?

Do you have a display actually connected to your Jetson, or are you connected over SSH?

Hi @dusty_nv , I will definitely try the docker suggestions.
I have my laptop connected to the Jetson Nano via NoMachine, so that I can see the actual Jetson Nano screen.

Thank you for your help!

OK, the issue is that jetson-inference/jetson-utils uses OpenGL+CUDA interoperability to display the video on-screen, so its GUI doesn’t work over tunneled connections or headless remote desktop.

To view the video feed from a PC, I recommend sending it over RTP.
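In case it helps, here is a rough sketch of what that looks like in Python, along the lines of the detectnet.py sample — the network name, camera device, and the RTP destination (replace <remote-ip> with your PC’s address) are all placeholders you would adjust for your setup, and on the PC side you would view the stream with a GStreamer- or SDP-capable player:

import jetson.inference
import jetson.utils

# load a pretrained detection model and open the C270 as a V4L2 device
net    = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
source = jetson.utils.videoSource("/dev/video0")
output = jetson.utils.videoOutput("rtp://<remote-ip>:1234")  # stream to the PC instead of a local display

while source.IsStreaming():
    img = source.Capture()
    net.Detect(img)        # draws the detection overlay into img
    output.Render(img)     # encodes and transmits the frame over RTP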

Hi @dusty_nv ,
Everything is working now thanks to you. Now I would like to train my own model. I already collected the data, but I cannot train it. I checked, and the error is that I do not have test.txt and val.txt in the ImageSets folder. How could I get them?

Thank you for your help.

Hi @edgarvivas2003, I would just copy your train.txt to test.txt and val.txt. Normally you would want to have independent train/val/test sets, but just for a personal model it’s fine.
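A small helper to do that copy, assuming the Pascal VOC-style ImageSets/Main layout that the training script expects (adjust the path if your dataset differs — the function name here is just for illustration):

import shutil
from pathlib import Path

def clone_split_files(imagesets_dir):
    """Copy train.txt to val.txt and test.txt so the trainer finds all three splits.
    Existing val.txt/test.txt files are left untouched."""
    imagesets_dir = Path(imagesets_dir)
    train_file = imagesets_dir / "train.txt"
    for name in ("val.txt", "test.txt"):
        target = imagesets_dir / name
        if not target.exists():   # don't clobber a split you already made
            shutil.copy(train_file, target)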

Hi @dusty_nv , is there any way we can implement the data shown on the display to another code?

Thank you.

Hi @edgarvivas2003, I’m sorry I don’t understand your question - can you explain what you are trying to do?

You can add jetson-inference to your application or add your code to the jetson-inference samples.

I am trying to make an articulated camera that points automatically to some specific objects, so I would need to get something like the coordinates from the display.

OK gotcha - the detectNet.Detect() function returns a list of detectNet.Detection objects:

Detection = <type 'jetson.inference.detectNet.Detection'>
Object Detection Result

Data descriptors defined here:

    Area
        Area of bounding box
    Bottom
        Bottom bounding box coordinate
    Center
        Center (x,y) coordinate of bounding box
    ClassID
        Class index of the detected object
    Confidence
        Confidence value of the detected object
    Height
        Height of bounding box
    Instance
        Instance index of the detected object
    Left
        Left bounding box coordinate
    Right
        Right bounding box coordinate
    Top
        Top bounding box coordinate
    Width
        Width of bounding box

You can iterate through them like this:

detections = net.Detect(img)

for detection in detections:
    print(f'detected object -- ClassID={detection.ClassID}  Left={detection.Left}  Right={detection.Right}  Top={detection.Top}  Bottom={detection.Bottom}')
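For pointing the articulated camera, one simple approach is to turn a detection’s center into a normalized aiming error that you feed to your servo controller. A hypothetical helper (not part of jetson-inference itself — the function name and the [-1, 1] convention are my own):

def pointing_error(det_center, img_width, img_height):
    """Convert a detection's center (x, y) in pixels into a normalized
    (pan, tilt) error in [-1, 1], where (0, 0) means the object is
    already centered in the frame."""
    cx, cy = det_center
    pan  = (cx - img_width  / 2) / (img_width  / 2)   # negative = object left of center
    tilt = (cy - img_height / 2) / (img_height / 2)   # negative = object above center
    return pan, tilt

Inside the loop you would call it as pointing_error(detection.Center, img.width, img.height) and drive the pan/tilt motors until both values are near zero.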

Ok, thank you @dusty_nv , if I have any other doubts I will definitely ask you. You have helped me a lot.

No problem at all, happy to help! Good luck!

Hi @dusty_nv , I have another question. Would it be possible to run two types of inference simultaneously, like object detection and pose estimation for example?

Yes, you would just create a Python script that loads both networks at the beginning, then calls one after the other in the camera processing loop. You may want to make a copy of the input image or turn off the overlay (i.e. in detectNet) so that poseNet operates on the original image.
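A sketch of that loop, using the overlay-disabling route so poseNet sees the original pixels — the model names here are the stock pretrained ones, so substitute your own as needed:

import jetson.inference
import jetson.utils

det_net  = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
pose_net = jetson.inference.poseNet("resnet18-body", threshold=0.15)

source  = jetson.utils.videoSource("/dev/video0")
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = source.Capture()

    # run detection with its overlay suppressed, so the pixels stay unmodified
    detections = det_net.Detect(img, overlay="none")

    # poseNet then processes (and draws on) the original image
    poses = pose_net.Process(img)

    display.Render(img)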

Hi @dusty_nv , I am facing a new issue. I had a few problems with the training script: I deleted some pictures and their respective annotations, but the model is still trained on them. Is there any way to untrain it?

And if I want to add more objects, do I have to delete the saved ones and start over, or does it automatically train for the new ones?

Thank you in advance.


You would just re-train the model with the training script, then use the new model. If you use the same model dir, delete the .engine file from it so it gets re-generated when you run detectnet/ again.

You would just re-train on the whole dataset, as you would after deleting some images. Technically you could create a separate dataset out of just your new images and use your previous model as the starting point, but I think it would be best to re-train over the entire dataset. You could try using the --resume or --pretrained-ssd arguments to load your previous model to start training from (or just re-train from the beginning, which is what I normally do).

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.