jetson-inference

Hello
I am following this tutorial: GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
When I run ./detectnet-camera on my NVIDIA Xavier with a Logitech C930e, I get this error:

detectnet-camera: failed to capture frame
detectnet-camera: failed to convert from NV12 to RGBA
detectNet::Detect( 0x(nil), 1280, 720 ) -> invalid parameters
[cuda] cudaNormalizeRGBA((float4*)imgRGBA, make_float2(0.0f, 255.0f), (float4*)imgRGBA, make_float2(0.0f, 1.0f), camera->GetWidth(), camera->GetHeight())
[cuda] invalid device pointer (error 17) (hex 0x11)
[cuda] /home/nvidia/jetson-inference/detectnet-camera/detectnet-camera.cpp:247
Help please!

Hi mohamedabdallah.enicar,

Please change the DEFAULT_CAMERA define at the top of detectnet-camera.cpp to reflect the /dev/video V4L2 device of your USB camera and recompile.
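In case it helps, this is roughly what that change looks like near the top of detectnet-camera.cpp (the exact line and comment vary by repo version, and the index 0 below is only an assumption for a camera that enumerates as /dev/video0 -- check with ls /dev/video*):

// detectnet-camera.cpp (near the top) -- approximate, varies by version
// -1 selects the onboard MIPI CSI camera; Xavier has no onboard camera, so use
// the V4L2 index of the USB webcam instead (assumed here to be /dev/video0).
#define DEFAULT_CAMERA 0    // was: -1

After editing, rebuild with make from your build directory so the new value is compiled into the binary, then run ./detectnet-camera again.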

See also: https://devtalk.nvidia.com/default/topic/1030672/jetson-tx2/change-camera-on-jetson-tx2/post/5243917/#5243917

Thanks

Hello NVIDIA, I tried transfer learning with PyTorch, following the tutorial given by the developer (here is the link: jetson-inference/pytorch-cat-dog.md at master · dusty-nv/jetson-inference · GitHub).

I faced a problem. The developer has given some advice and a solution, which I have tried, but I still face the same error. Could NVIDIA help me?

Here is the error:
jetson.inference.__init__.py
jetson.inference -- initializing Python 2.7 bindings...
jetson.inference -- registering module types...
jetson.inference -- done registering module types
jetson.inference -- done Python 2.7 binding initialization
jetson.utils.__init__.py
jetson.utils -- initializing Python 2.7 bindings...
jetson.utils -- registering module functions...
jetson.utils -- done registering module functions
jetson.utils -- registering module types...
jetson.utils -- done registering module types
jetson.utils -- done Python 2.7 binding initialization
[image] loaded '/home/krsbi/datasets/cat_dog/test/dog/01.jpg' (500 x 375, 3 channels)
jetson.inference -- PyTensorNet_New()
jetson.inference -- PyImageNet_Init()
jetson.inference -- imageNet loading network using argv command line params
jetson.inference -- imageNet.init() argv[0] = '--model=cat_dog/resnet18.onnx'
jetson.inference -- imageNet.init() argv[1] = '--input_blob=input_0'
jetson.inference -- imageNet.init() argv[2] = '--output_blob=output_0'
jetson.inference -- imageNet.init() argv[3] = '--labels=~/datasets/cat_dog/labels.txt'

imageNet -- loading classification network model from:
          -- prototxt     (null)
          -- model        cat_dog/resnet18.onnx
          -- class_labels ~/datasets/cat_dog/labels.txt
          -- input_blob   'input_0'
          -- output_blob  'output_0'
          -- batch_size   1

[TRT] TensorRT version 5.1.6
[TRT] loading NVIDIA plugins...
[TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[TRT] Plugin Creator registration succeeded - NMS_TRT
[TRT] Plugin Creator registration succeeded - Reorg_TRT
[TRT] Plugin Creator registration succeeded - Region_TRT
[TRT] Plugin Creator registration succeeded - Clip_TRT
[TRT] Plugin Creator registration succeeded - LReLU_TRT
[TRT] Plugin Creator registration succeeded - PriorBox_TRT
[TRT] Plugin Creator registration succeeded - Normalize_TRT
[TRT] Plugin Creator registration succeeded - RPROI_TRT
[TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[TRT] completed loading NVIDIA plugins.
[TRT] detected model format - ONNX (extension '.onnx')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file cat_dog/resnet18.onnx.1.1.GPU.FP16.engine
[TRT] loading network profile from engine cache... cat_dog/resnet18.onnx.1.1.GPU.FP16.engine
[TRT] device GPU, cat_dog/resnet18.onnx loaded
[TRT] device GPU, CUDA engine context initialized with 2 bindings
[TRT] binding -- index 0
      -- name 'input_0'
      -- type FP32
      -- in/out INPUT
      -- # dims 3
      -- dim #0 3 (CHANNEL)
      -- dim #1 224 (SPATIAL)
      -- dim #2 224 (SPATIAL)
[TRT] binding -- index 1
      -- name 'output_0'
      -- type FP32
      -- in/out OUTPUT
      -- # dims 1
[TRT] warning -- unknown nvinfer1::DimensionType (127)
      -- dim #0 2 (UNKNOWN)
[TRT] binding to input 0 input_0 binding index: 0
[TRT] binding to input 0 input_0 dims (b=1 c=3 h=224 w=224) size=602112
[TRT] binding to output 0 output_0 binding index: 1
[TRT] binding to output 0 output_0 dims (b=1 c=2 h=1 w=1) size=8
device GPU, cat_dog/resnet18.onnx initialized.
[TRT] cat_dog/resnet18.onnx loaded
imageNet -- failed to find ~/datasets/cat_dog/labels.txt
imageNet -- failed to load synset class descriptions (0 / 0 of 2)
[TRT] imageNet -- failed to initialize.
jetson.inference -- imageNet failed to load built-in network 'googlenet'
PyTensorNet_Dealloc()
Traceback (most recent call last):
  File "imagenet-console.py", line 53, in <module>
    net = jetson.inference.imageNet(opt.network, argv)
Exception: jetson.inference -- imageNet failed to load network
jetson.utils -- freeing CUDA mapped memory

Hi m.billson16, the relevant error from your log is the following:

imageNet -- failed to find ~/datasets/cat_dog/labels.txt
imageNet -- failed to load synset class descriptions (0 / 0 of 2)

It means that it couldn't find/load your labels.txt file. Can you check that the path given to the --labels argument is correct?

If the path is correct, you might want to try specifying the full path instead of using the ~ shortcut (i.e. /home/<user>/datasets/cat_dog/labels.txt).
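
For example, here is a minimal sketch of loading the re-trained model with an absolute labels path. The /home/krsbi paths are taken from your log (adjust them if your files live elsewhere), and it assumes the legacy jetson.utils image API from this release:

import jetson.inference
import jetson.utils

# Build the argv list with an absolute path to labels.txt -- the loader does not
# expand the shell's '~' shortcut. Paths below come from the log in this thread.
argv = [
    "--model=cat_dog/resnet18.onnx",
    "--input_blob=input_0",
    "--output_blob=output_0",
    "--labels=/home/krsbi/datasets/cat_dog/labels.txt",
]

# '--model' in argv overrides the built-in network name given as the first argument
net = jetson.inference.imageNet("googlenet", argv)

# classify one of the test images to confirm the model and labels loaded correctly
img, width, height = jetson.utils.loadImageRGBA("/home/krsbi/datasets/cat_dog/test/dog/01.jpg")
class_idx, confidence = net.Classify(img, width, height)
print("classified as '{:s}' (confidence {:.2f}%)".format(net.GetClassDesc(class_idx), confidence * 100))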

Thank you dusty_nv for the advice and assistance. It works well now. Thank you so much.