Hello everyone,
I’m using the DeepStream docker container and I want to run inference with one of the TAO purpose-built (pretrained) models.
So I run DeepStream with the following command:
docker run --device /dev/video0 --gpus '"device=0"' -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.0 nvcr.io/nvidia/deepstream:6.0-devel
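(On the host, before starting the container, I also allow local clients to connect to the X server so the container can open the display window later; assuming a local X session:)
xhost +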
After entering the container, I go to the following directory:
cd samples/configs/tao_pretrained_models
After following the README file step by step, I run the following command as a sanity check:
deepstream-app -c deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt
It runs the model on the sample video file, and everything is fine up to this point.
The problem is that I want to try this model (or any model in general) with my camera/webcam.
I changed the [source0] section of the config file (deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt) as follows:
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0
gpu-id=0
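As a side note, a plain GStreamer pipeline can confirm whether the camera actually supports the mode requested in the config above (a minimal test, assuming the camera is /dev/video0 and can deliver raw 720p at 30 fps):
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,width=1280,height=720,framerate=30/1 ! videoconvert ! fakesink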
But I get this error:

After running the following command to check which formats my webcam supports:
v4l2-ctl --list-formats-ext
this is the result:
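(In case v4l2-ctl is missing inside the container: it ships in the v4l-utils package and can be installed with:)
apt-get update && apt-get install -y v4l-utils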