Camera issues with detectnet on jetson nano within docker container

I’m encountering an issue when running the detectnet script inside a Docker container on my Jetson Nano, as the final step of object detection on a custom dataset using MobileNet-SSD via jetson-inference.
The script seems to run, but my camera does not open. Additionally, when I use the camera-capture command, I get an error related to RGBA.
Previously, camera-capture worked fine while I was labelling images. Outside Docker, other scripts that involve the camera also work fine.
I am a new Jetson Nano user; please help me, @dusty_nv.

Environment:

  • Jetson Nano
  • JetPack version: 4.6
  • Camera: Logitech USB camera

Error Message while running camera-capture:
camera-capture: failed to capture RGBA image from camera
[gstreamer] gstCamera::Capture() – a timeout occurred waiting for the next image buffer.

Hi @sai.nithin124, are you able to run video-viewer /dev/video0 from inside and outside container ok? (presuming that /dev/video0 is your V4L2 device)
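If you’re not sure which V4L2 node the camera is enumerated on, v4l2-ctl can list the devices. This is a sketch assuming the v4l-utils package is installed on the host (it isn’t present by default on all JetPack images):

```shell
# Install the V4L2 utilities if needed
sudo apt-get install v4l-utils

# List V4L2 capture devices and their /dev/video* nodes
v4l2-ctl --list-devices

# Show the formats and resolutions the camera supports on /dev/video0
v4l2-ctl -d /dev/video0 --list-formats-ext
```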

You might want to exit and restart the container if your USB camera was recently unplugged, because the container needs to be started after the device is plugged in.
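A minimal sketch of that restart sequence, assuming you launch the container with jetson-inference’s docker/run.sh (which passes through the /dev/video* devices that exist at startup):

```shell
# From inside the container: exit it
exit

# On the host: confirm the camera is enumerated before relaunching
ls /dev/video*

# Relaunch the container; docker/run.sh mounts the V4L2 devices it finds
cd jetson-inference
docker/run.sh
```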

If all else fails, you can run just the PyTorch training (train.py/train_ssd.py) inside container if need be.

Thank you @dusty_nv. I have tried running video-viewer /dev/video0 from inside docker, and it worked well.

So I ran this command: detectnet --model=models/products/ssd-mobilenet.onnx --labels=models/products/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes /dev/video0. It took almost 50 seconds for the camera’s live video to start, and my detection accuracy is very poor.
I trained the model on a small dataset of 50 labeled images (collected with camera-capture) of various sizes of Maggie and Yippie packets, for just one epoch, and the results are not accurate. Are there specific strategies or configurations that could improve detection accuracy?

I encountered an error while attempting to capture more images using camera-capture for additional training data. The error message is as follows:

Thank you in advance for your time and assistance.

Sorry for the delay, @sai.nithin124 - are you able to use your camera again if you stop the container, run sudo systemctl restart nvargus-daemon.service, and then restart the container?

It’s normal for a model to take some minutes to load the first time you run it with detectNet - it’s being optimized with TensorRT. On subsequent runs of that model, it will load faster, because the optimized TensorRT engine is saved to disk.
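To illustrate the caching behavior: after the first (slow) run, a serialized engine file appears next to the ONNX model. The exact filename encodes the TensorRT version and precision, so treat the name shown below as an example only:

```shell
# After the first run, the optimized engine is cached alongside the model
ls models/products/
# e.g. ssd-mobilenet.onnx plus a generated engine file such as
# ssd-mobilenet.onnx.1.1.8201.GPU.FP16.engine (name varies by TensorRT version)

# Subsequent runs load the cached engine and start much faster
detectnet --model=models/products/ssd-mobilenet.onnx --labels=models/products/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes /dev/video0
```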

If you trained your model for just one epoch, you need to train it for more - like 30 epochs. And collect more data too.
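As a sketch, a longer training run with train_ssd.py might look like the following. The dataset path, model directory, and batch size here are assumptions - adjust them to your dataset location and the Nano’s available memory:

```shell
# Train for 30 epochs on the camera-capture dataset (VOC format)
python3 train_ssd.py --dataset-type=voc --data=data/products \
                     --model-dir=models/products --batch-size=4 --epochs=30

# Re-export the trained checkpoint to ONNX for use with detectnet
python3 onnx_export.py --model-dir=models/products
```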

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.