jetson-inference and colorspaces

Hi @dusty_nv

I am using jetson-inference and have set it up to use my Logitech C920, and I notice that the GStreamer settings for imagenet-camera look like this (I have only changed the default size at this stage):

v4l2src device=/dev/video1 ! video/x-raw, width=(int)1920, height=(int)1080, format=RGB ! videoconvert ! video/x-raw, format=RGB ! videoconvert ! appsink name=mysink
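
As a sanity check, the pixel formats the C920 actually exposes to v4l2src can be listed with v4l2-ctl (assuming the v4l-utils package is installed):

v4l2-ctl --device=/dev/video1 --list-formats-ext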

However, I am using ROS image_view with my C920 to acquire the photographs that I want to train a GoogLeNet network on, and image_view appears to acquire images using the following camera settings:

v4l2src device=/dev/video1 ! video/x-raw-rgb,framerate=30/1 ! ffmpegcolorspace
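
For reference, video/x-raw-rgb and ffmpegcolorspace are GStreamer 0.10 syntax: in GStreamer 1.0 the caps are written video/x-raw, format=RGB, and ffmpegcolorspace was renamed videoconvert. A roughly equivalent 1.0 pipeline, assuming the same device and framerate, would be:

v4l2src device=/dev/video1 ! video/x-raw, format=RGB, framerate=30/1 ! videoconvert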

Then, when I train, I am squashing the training images to 256x256 and leaving them in color.
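
As a quick check, ImageMagick's identify can report the geometry and colorspace of the source images before DIGITS squashes them (assuming ImageMagick is installed):

identify -format "%f %wx%h %[colorspace]\n" *.jpg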

Is there any reason this difference in image settings would prevent imagenet-camera from working? If so, is it possible to make imagenet-camera work in the ffmpegcolorspace? Or should I acquire the training images with the same GStreamer settings that imagenet-camera uses?

Thanks!

Hi,

Sorry, we are not familiar with ffmpegcolorspace.

We recommend checking the data representation produced by ffmpegcolorspace against that of videoconvert.
If there is no obvious difference, imagenet-camera should work fine.
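
One way to compare, assuming a GStreamer 1.0 install, is to dump a single raw RGB frame to disk and inspect the bytes, for example:

gst-launch-1.0 v4l2src device=/dev/video1 num-buffers=1 ! videoconvert ! video/x-raw, format=RGB ! filesink location=frame.rgb

The GStreamer 0.10 equivalent would use gst-launch-0.10 with ffmpegcolorspace and video/x-raw-rgb in place of videoconvert and the 1.0 caps.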

Thanks.

Hi

Looking at the dusty_nv GitHub instructions for training a new network on images, the only stipulation is that the images should be JPG, PNG, TIFF, or BMP, so I guess it does not matter which colorspace GStreamer uses. I will persevere with my approach and, fingers crossed, it will work!

Thanks!

That does not seem to have been correct. I have a trained model that reached 98% validation accuracy, but after downloading the Caffe model and running it against the same C920 that was used to take the images, imagenet-camera cannot identify anything correctly. I have even tried testing some of the actual training images with imagenet-console, and those are not recognized either.

Apart from specifying the correct /dev/video index, am I meant to change anything in imagenet-camera or imagenet-console when not using images captured with the onboard TX2 camera?
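
For context, I am loading the model with flags along the lines of the custom-model example in the jetson-inference docs (a sketch: $NET stands for the directory holding the downloaded model, the snapshot filename is a placeholder, and the blob names come from the deploy.prototxt):

./imagenet-console test_image.jpg output.jpg \
--prototxt=$NET/deploy.prototxt \
--model=$NET/snapshot_iter_NNNN.caffemodel \
--labels=$NET/labels.txt \
--input_blob=data \
--output_blob=softmax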

Alternatively, if you cannot answer that: what is the recommended way of acquiring images for training a new network when working with the C920 webcam and training via NVIDIA DIGITS? The jetson-inference instructions state that this camera has been tested with jetson-inference.

Follow further progress in the new topic:
[url]https://devtalk.nvidia.com/default/topic/1051852/jetson-tx2/please-advise-recommended-way-of-capturing-images-with-logitech-c920-camera-for-jetson-inference/[/url]