Error in the live execution of Thumbs task in free DLI course (sample programs)

I have JetPack 4.6 installed on my 2 GB Jetson Nano, and I have interfaced a Raspberry Pi V2 CSI camera.

The issue I am facing right now is with the live execution of the Thumbs task in the free DLI course (sample programs).

The Nano works fine while collecting thumbs-up and thumbs-down samples; in fact, it trains the neural network without any problems.

But during live execution for prediction, it is unable to determine whether I am holding my thumb up or down.

I have been stuck on this for months. I even ran the same sample on my friend’s Nano, but I couldn’t find a remedy.

I would appreciate any helpful responses.


Would you mind sharing the name of the DLI course you took?


The free NVIDIA DLI course, Getting Started with AI on Jetson Nano.

I have been following all of its steps, from installing JetPack all the way to setting up the camera.


When you run the detection, is the environment similar to the one used in the training stage?
For example, the lighting conditions, camera type, etc.

Since this example is trained with only ~60 images, it may not be able to recognize objects in a different setting.
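One way to see whether the model is genuinely wrong or just unsure (which is typical when a ~60-image model meets different lighting) is to look at the softmax probabilities instead of only the argmax label. This is a minimal sketch, not the course's actual notebook code; the label names and the 0.7 threshold are assumptions for illustration:

```python
import numpy as np

def softmax(logits):
    """Convert raw model outputs (logits) to probabilities."""
    z = logits - np.max(logits)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(logits, labels=("thumbs_up", "thumbs_down"), threshold=0.7):
    """Return the predicted label, or 'uncertain' when the top
    probability is below the threshold -- a hint that the live
    setting differs too much from the training data."""
    probs = softmax(np.asarray(logits, dtype=float))
    i = int(np.argmax(probs))
    if probs[i] < threshold:
        return "uncertain", float(probs[i])
    return labels[i], float(probs[i])

# A confident prediction (large margin between logits):
print(classify([4.0, 0.5]))   # -> thumbs_up
# A borderline prediction, e.g. under unfamiliar lighting:
print(classify([0.6, 0.4]))   # -> uncertain
```

If the live loop mostly reports "uncertain", collecting more samples in the deployment environment usually helps more than changing the code.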


We tried executing the detection in an environment similar to the training stage, since it wasn’t recognizing any images during live execution, but it was in vain.

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.


Would you mind sharing your training and testing examples, as well as the model, with us?
We would like to check this further to see if anything is going wrong.
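Once the dataset and model are shared, a quick diagnostic is to run the model offline over the saved images and compare predictions with labels: if offline accuracy is high but live accuracy is poor, the camera feed or preprocessing is the likely culprit. A minimal sketch, where `predict` is a hypothetical stand-in for whatever model call the notebook uses:

```python
def evaluate(samples, predict):
    """Fraction of saved samples the model labels correctly.

    samples : list of (input, true_label) pairs
    predict : any callable mapping an input to a label -- here a
              hypothetical stand-in for the trained model's call.
    """
    correct = sum(1 for x, y in samples if predict(x) == y)
    return correct / len(samples)

# Toy example with a dummy "model" that labels by sign:
dummy_samples = [(+1, "thumbs_up"), (-1, "thumbs_down"), (+2, "thumbs_up")]
dummy_predict = lambda x: "thumbs_up" if x > 0 else "thumbs_down"
print(evaluate(dummy_samples, dummy_predict))  # -> 1.0
```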


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.