Exception: jetson.utils -- failed to create videoSource device

Hi @karishmathumu, due to the incomplete output from v4l2-ctl (there are no output formats listed), I think there is an underlying issue with your USB camera and/or its driver. Are you able to try a different USB camera?
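If you do try another camera (or want to re-check this one), you can list the formats the driver reports with, for example:

$ v4l2-ctl --device=/dev/video0 --list-formats-ext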

I see. Not yet, i will try to get another one and see if that works and will keep you informed.

Thank you very much

Regards,
Karishma

Hai Sir,

I have tried this:

sudo apt install ffmpeg
ffplay /dev/video0

And the result the webcam is showing up the live feed on a window.

But the same errors remain with the video and live image detection.

$ ./imagenet.py /dev/video0 # V4L2 camera
$ ./imagenet.py /dev/video0 output.mp4

P.S.: A strange thing now is that a window looks like it is about to open for less than a second and then suddenly shuts, showing this error. Before, without any window popping up, the error would appear directly.

Regards,
Karishma

Ah okay, I see that you are trying to view the GUI window remotely from Windows - that won’t work with jetson-utils because it uses OpenGL/CUDA interoperability and doesn’t support X11 tunneling. You would need to attach a physical display to your Jetson to view it, or use RTP to stream the output video from your Jetson to your PC and view it there: https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-streaming.md#rtp

Hai Sir,

I tried to follow this tutorial: jetson-inference/aux-streaming.md at master · dusty-nv/jetson-inference · GitHub.

But I am having issues like the following.

a) for these two commands,
$ video-viewer --input-codec=h264 rtp://@:1234 # receive on localhost port 1234
$ video-viewer --input-codec=h264 rtp://224.0.0.0:1234 # subscribe to multicast group
$ video-viewer --input-codec=h264 rtp://192.168.171.101:1234
$ video-viewer --input-codec=h264 rtp://192.168.171.2:1234

the following is happening

b) video-viewer --bitrate=1000000 csi://0 rtp://:1234
video-viewer --bitrate=1000000 csi://0 rtp://192.168.171.2:1234

then tried the
$ gst-launch-1.0 -v udpsrc port=1234 \
  caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! \
  rtph264depay ! decodebin ! videoconvert ! autovideosink

also, the VLC player,

I am not sure what to do. I am extremely sorry for bothering you time and again. It is just that this is all completely new to me, and I am trying to learn how to use the Jetson.

  1. How do I know if my port is 1234 or something else?
  2. For the video streaming, transporting, recording, and playback, do I have to run terminals in parallel on the main Jetson as well as on my laptop SSHed into the Jetson?
  3. Or do I have to run parallel terminals (one SSHed into the Jetson and one without SSH) on my PC itself to get it right?

Thank you very much.

Regards,
Karishma

Hai Sir,

  1. Can ./imagenet (imagenet.cpp) /dev/video0 be used for live detection when implemented outside the jetson-inference directory, with the same contents as the imagenet.cpp file along with a CMakeLists.txt file?
    P.S.: My idea is to apply the my-recognition code or the imagenet.cpp code to the live camera feed, instead of only calling up single images like polar.jpg, etc.

  2. Is it possible to implement the image detection and object detection in C, instead of C++ or Python?

  3. When I follow the steps and the code in this thread, Jetson Nano Object Detection C/C++ Example - #7 by dusty_nv,

I am running into multiple issues. Would you mind helping me figure them out, please?

I did the

cmake .
make
make VERBOSE=1

the test file at this point had the code of this thread: Jetson Nano Object Detection C/C++ Example - #9 by dusty_nv

so I removed the line: printf(" depth: %u (bpp)\n \n", camera->GetPixelDepth());

and then, from ~/test:

$ ./test --width=640 --height=480

This is the error I am facing.

Later, I changed the test file to this:

#include "jetson-utils/videoSource.h"   // needed for videoSource used below
#include "jetson-utils/videoOutput.h"   // needed for videoOutput used below
#include "jetson-utils/gstCamera.h"
#include "jetson-utils/glDisplay.h"
#include "jetson-utils/cudaFont.h"
#include "jetson-utils/commandLine.h"
#include "jetson-utils/logging.h"

#include "jetson-inference/imageNet.h"
#include <jetson-inference/detectNet.h>

#include <signal.h>


bool signal_recieved = false;

void sig_handler(int signo)
{
    if( signo == SIGINT )
    {
        LogVerbose("received SIGINT\n");
        signal_recieved = true;
    }
}

int usage()
{
    printf("usage: imagenet [--help] [--network=NETWORK] ...\n");
    printf("                input_URI [output_URI]\n\n");
    printf("Classify a video/image stream using an image recognition DNN.\n");
    printf("See below for additional arguments that may not be shown above.\n\n");
    printf("positional arguments:\n");
    printf("    input_URI       resource URI of input stream  (see videoSource below)\n");
    printf("    output_URI      resource URI of output stream (see videoOutput below)\n\n");
    printf("%s", imageNet::Usage());
    printf("%s", videoSource::Usage());
    printf("%s", videoOutput::Usage());
    printf("%s", Log::Usage());
    return 0;
}

int main( int argc, char** argv )
{
    /*
     * parse command line
     */
    commandLine cmdLine(argc, argv);

    if( cmdLine.GetFlag("help") )
        return usage();

    /*
     * attach signal handler
     */
    if( signal(SIGINT, sig_handler) == SIG_ERR )
        LogError("can't catch SIGINT\n");

    /*
     * create input stream
     */
    videoSource* input = videoSource::Create(cmdLine, ARG_POSITION(0));

    if( !input )
    {
        LogError("imagenet: failed to create input stream\n");
        return 1;
    }

    /*
     * create output stream
     */
    videoOutput* output = videoOutput::Create(cmdLine, ARG_POSITION(1));

    if( !output )
        LogError("imagenet: failed to create output stream\n");

    /*
     * create font for image overlay
     */
    cudaFont* font = cudaFont::Create();

    if( !font )
    {
        LogError("imagenet: failed to load font for overlay\n");
        return 1;
    }

    /*
     * create recognition network
     */
    imageNet* net = imageNet::Create(cmdLine);

    if( !net )
    {
        LogError("imagenet: failed to initialize imageNet\n");
        return 1;
    }

    /*
     * processing loop
     */
    while( !signal_recieved )
    {
        // capture next image
        uchar3* image = NULL;

        if( !input->Capture(&image, 1000) )
        {
            // check for EOS
            if( !input->IsStreaming() )
                break;

            LogError("imagenet: failed to capture next frame\n");
            continue;
        }

        // classify image
        float confidence = 0.0f;
        const int img_class = net->Classify(image, input->GetWidth(), input->GetHeight(), &confidence);

        if( img_class >= 0 )
        {
            LogVerbose("imagenet: %2.5f%% class #%i (%s)\n", confidence * 100.0f, img_class, net->GetClassDesc(img_class));

            // overlay class label onto original image
            char str[256];
            sprintf(str, "%05.2f%% %s", confidence * 100.0f, net->GetClassDesc(img_class));
            font->OverlayText(image, input->GetWidth(), input->GetHeight(),
                              str, 5, 5, make_float4(255, 255, 255, 255), make_float4(0, 0, 0, 100));
        }

        // render outputs
        if( output != NULL )
        {
            output->Render(image, input->GetWidth(), input->GetHeight());

            // update status bar
            char str[256];
            sprintf(str, "TensorRT %i.%i.%i | %s | Network %.0f FPS", NV_TENSORRT_MAJOR, NV_TENSORRT_MINOR, NV_TENSORRT_PATCH, net->GetNetworkName(), net->GetNetworkFPS());
            output->SetStatus(str);

            // check if the user quit
            if( !output->IsStreaming() )
                signal_recieved = true;
        }

        // print out timing info
        net->PrintProfilerTimes();
    }

    /*
     * destroy resources
     */
    LogVerbose("imagenet: shutting down...\n");

    SAFE_DELETE(input);
    SAFE_DELETE(output);
    SAFE_DELETE(net);

    LogVerbose("imagenet: shutdown complete.\n");
    return 0;
}

and the CMakeLists.txt file contains the following:

cmake_minimum_required(VERSION 2.8)

include_directories(${PROJECT_INCLUDE_DIR} ${PROJECT_INCLUDE_DIR}/jetson-inference ${PROJECT_INCLUDE_DIR}/jetson-utils)
include_directories(/usr/include/gstreamer-1.0 /usr/lib/aarch64-linux-gnu/gstreamer-1.0/include /usr/include/glib-2.0 /usr/include/libxml2 /usr/lib/aarch64-linux-gnu/glib-2.0/include/ /usr/local/include/jetson-utils)

# declare my-recognition project
project(test)

file(GLOB imagenetCameraSources *.cpp)
file(GLOB imagenetCameraIncludes *.h)

find_package(jetson-utils)
find_package(jetson-inference)
find_package(CUDA)

# add directory for libnvbuf-utils to program
link_directories(/usr/lib/aarch64-linux-gnu/tegra)

cuda_add_executable(test ${imagenetCameraSources})

target_link_libraries(test jetson-inference)

install(TARGETS test DESTINATION bin)

if I run the following through remote SSH:

$ ./test --width=640 --height=480 --threshold=0.2

I am totally lost. Kindly excuse my lack of knowledge in this stream.

Regards,
Karishma

Hi @karishmathumu, these commands are for receiving RTP on your Jetson from an external source, so you don’t need to run these. What you want to do is transmit RTP from your Jetson to your PC, which is what the next b) commands you ran do.

The errors say that you are missing the nvv4l2h264enc element, which is similar to the error you were getting before with the missing nvv4l2decoder element, so it seems like these NVIDIA GStreamer elements still aren't installed on your device.
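You can quickly check for them with, for example:

$ gst-inspect-1.0 nvv4l2h264enc
$ gst-inspect-1.0 nvv4l2decoder

If these report "No such element or plugin", the NVIDIA GStreamer elements aren't available in your environment.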

Also, it reports that it's unable to open your MIPI CSI camera, because you ran it with csi://0. Since I believe you are using a V4L2 camera, run it with /dev/video0 instead of csi://0.

The port is whatever you set it to when you launched the command on Jetson. So if you run imagenet /dev/video0 rtp://192.168.171.2:1234, it will be port 1234.

You would run one SSH terminal into the Jetson that runs imagenet /dev/video0 rtp://192.168.171.2:1234, and then another terminal on your PC that runs VLC or GStreamer to receive/view the RTP stream.
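For example (a sketch using the 192.168.171.2 address and port 1234 from your earlier commands; substitute your PC's actual IP):

On the Jetson (in the SSH terminal):
$ imagenet /dev/video0 rtp://192.168.171.2:1234

On your PC (in a local terminal, not SSHed into the Jetson):
$ gst-launch-1.0 -v udpsrc port=1234 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! decodebin ! videoconvert ! autovideosink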

I think you have the right idea; you just need to specify /dev/video0, because if you don't specify an input source, it uses csi://0 by default. So try running ./test /dev/video0 instead.

Hai Sir,

I would like to explain my hardware setup to give you an idea. I am remotely connected from my PC (Windows OS) to the Jetson Orin (which has a monitor attached) through the MobaXterm application.

My goal is to get all the video streams that work on the actual Jetson monitor onto my PC.
The images and imagenet are working on the actual Jetson, but are not being transported to my PC.
The test file, however, is not working even on the actual Jetson.

I have tried the,

$ gst-inspect-1.0 | grep nvv4l2

but then I ran these,

Download test video (thanks to jell.yfish.us)

$ wget https://nvidia.box.com/shared/static/tlswont1jnyu3ix2tbf7utaekpzcx4rc.mkv -O jellyfish.mkv

C++

$ ./imagenet --network=resnet-18 jellyfish.mkv images/test/jellyfish_resnet18.mkv
(here it keeps running until I press Ctrl+Z)

Python

$ ./imagenet.py --network=resnet-18 jellyfish.mkv images/test/jellyfish_resnet18.mkv
(here it stops on its own after a moment)

This was the result. (I am not sure if it is all an error or not, but the video stream still cannot be transported from the original Jetson to my PC over the remote SSH connection.)

Meanwhile, all these commands give the correct results, as shown in the tutorial, on the actual Jetson monitor (not over SSH).

No Sir.

./test /dev/video0

This is not giving any stream, even on the Jetson with the monitor.

An empty black window opens up showing nothing, and the following errors are displayed in the terminal.

But here, I am not sure which imagenet.cpp code I should use.

or

I am logged in remotely to the Jetson from my PC and ran the following:

sesotec-ai-2@Sesotec-AI-2:~/tk_ws/src/jetson-inference/build$ video-viewer --bitrate=1000000 dev/video0 rtp://192.168.171.2:1234

sesotec-ai-2@Sesotec-AI-2:~/tk_ws/src/jetson-inference/build$ imagenet --bitrate=1000000 dev/video0 rtp://192.168.171.2:1234

sesotec-ai-2@Sesotec-AI-2:~/tk_ws/src/jetson-inference/build$ imagenet dev/video0 rtp://192.168.171.2:1234

while all the commands below result in the video stream on the Jetson monitor:

sesotec-ai-2@Sesotec-AI-2:~/tk_ws/src/jetson-inference/build/aarch64/bin $ wget https://nvidia.box.com/shared/static/tlswont1jnyu3ix2tbf7utaekpzcx4rc.mkv -O jellyfish.mkv

C++

sesotec-ai-2@Sesotec-AI-2:~/tk_ws/src/jetson-inference/build/aarch64/bin $ ./imagenet --network=resnet-18 jellyfish.mkv images/test/jellyfish_resnet18.mkv

Python

sesotec-ai-2@Sesotec-AI-2:~/tk_ws/src/jetson-inference/build/aarch64/bin $ ./imagenet.py --network=resnet-18 jellyfish.mkv images/test/jellyfish_resnet18.mkv

and also for the below commands, I am able to see the video stream on the main Jetson PC

sesotec-ai-2@Sesotec-AI-2:~/tk_ws/src/jetson-inference/build $ imagenet /dev/video0
sesotec-ai-2@Sesotec-AI-2:~/tk_ws/src/jetson-inference/build $ imagenet --bitrate=1000000 /dev/video0
sesotec-ai-2@Sesotec-AI-2:~/tk_ws/src/jetson-inference/build/aarch64/bin $ imagenet --bitrate=1000000 /dev/video0

Videos are streamed, with live image and object detection, on the main Jetson monitor, but when I try to run the same commands from my PC through SSH into the Jetson, none of them work.

Also, I have some other questions,

  1. Is it possible to implement the image detection and object detection in C on the Jetson, instead of in C++ or Python?
  2. Can I add my own dataset images (as per my requirements) to these open-source datasets and train and test on them?

Regards,
Karishma.

No, the jetson-inference library is implemented with C++ and Python APIs.

You can find training tutorials included in the Hello AI World. In theory you could augment other datasets with your own.

I think your issue is that you have a display attached to the Jetson, but are running the commands from the SSH terminal, so it tries to use the display but can’t. Try running the commands from the terminal on the Jetson itself. Or run export DISPLAY=:0 in your SSH terminal first. Or run the program with the --headless flag and it will skip trying to open an OpenGL window on the Jetson.

Try this:

./imagenet --network=resnet-18 --headless jellyfish.mkv rtp://192.168.171.2:1234

Greetings Sir,

I have tried as you said,
1) SSH: this worked with no errors (no display on my PC though)

$ ./imagenet --network=resnet-18 --headless jellyfish.mkv rtp://192.168.171.2:1234

2) SSH: Then I also tried the below (the result was that it ran with no other errors, except for no display on my PC and an OpenGL error: failed to open X11 server connection and failed to create X11 window):

export DISPLAY=:0
./imagenet --network=resnet-18 jellyfish.mkv rtp://192.168.171.2:1234

3) SSH:

export DISPLAY=:0
./imagenet.py /dev/video0

Result: OpenGL X11 server error.

I followed this thread, but it did not help: OpenGL failed to open X11 server connection · Issue #890 · dusty-nv/jetson-inference · GitHub

4) SSH: I then moved on to the object detection part of the tutorial.

$ cd jetson-inference/tools
$ ./download-models.sh

C++

$ ./detectnet --network=ssd-inception-v2 input.jpg output.jpg

Python

$ ./detectnet.py --network=ssd-inception-v2 input.jpg output.jpg



$ ./detectnet /dev/video0 # V4L2 camera
$ ./detectnet /dev/video0 output.mp4 # save to video file

5) SSH with the Jetson monitor disconnected: I disconnected the monitor from the Jetson, ran the following, and the errors below occurred.

$ wget https://nvidia.box.com/shared/static/tlswont1jnyu3ix2tbf7utaekpzcx4rc.mkv -O jellyfish.mkv

C++

$ ./imagenet --network=resnet-18 jellyfish.mkv images/test/jellyfish_resnet18.mkv
$ ./imagenet --network=resnet-18 jellyfish.mkv rtp://192.168.171.2:1234

The above two commands result in the same output, shown below.

6) SSH and Jetson Monitor disconnected:

./imagenet.py /dev/video0
Result:

Sorry, but I don't understand what I am missing.

Regards,
Karishma

What are you running on your PC to view the RTP stream? I recommend using GStreamer as shown here: https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-streaming.md#viewing-rtp-remotely

SSD-Inception-v2 isn't selected by default in the Model Downloader tool; you need to explicitly select it. Or just use the SSD-Mobilenet-v2 model instead. The other error you get is from specifying an invalid image filename to process (input.jpg doesn't exist - change it to an image under the images/ folder).

I think you have X11 tunneling enabled in your SSH connection; please disable it, and then it won't try creating the window and giving you those OpenGL errors.

Hai Sir,

Sorry for not being clear. I apologize.

$ ./imagenet --network=resnet-18 --headless jellyfish.mkv rtp://192.168.171.2:1234

I ran the above command through SSH (without export DISPLAY=:0), as you said in your last response.

Coming to the GStreamer part of that tutorial, I ran the part below in a terminal. I see nothing happening visually when I run these commands. Am I missing something, Sir?

Should I run the commands below in parallel in two terminals like this? (I ran them with the Jetson monitor disconnected, through SSH, and without export DISPLAY=:0.)

Terminal 1:

$ gst-launch-1.0 -v udpsrc port=1234 \
  caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! \
  rtph264depay ! decodebin ! videoconvert ! autovideosink

Terminal 2:

./imagenet --network=resnet-18 --headless jellyfish.mkv rtp://192.168.171.2:1234

The following happened…

  1. I did use SSD-Mobilenet-v2, with an image that is present in the images folder, but it shows the same kind of errors.

Jetson-inference/build/aarch64/bin $./detectnet --network=ssd-mobilenet-v2 desk.jpg output_desk.jpg
and also,
Jetson-inference/data/images $ ./detectnet --network=ssd-mobilenet-v2 desk.jpg output_desk.jpg

  2. I am not sure how to disable the X11 tunneling, Sir. Would you please let me know how to do that?
    I ran the command below without first running export DISPLAY=:0:

./imagenet.py /dev/video0

Regards,
Karishma.

You should run this on your PC (not through SSH) - it should execute on your PC, not on your Jetson. Or you could try using VLC on your PC, but I've found viewing the video on the PC to be more reliable through GStreamer.

Try running it like this:

jetson-inference/build/aarch64/bin$ ./detectnet --network=ssd-mobilenet-v2 images/desk.jpg images/test/output_desk.jpg

It's in the MobaXterm session options - go to your Session settings -> Advanced SSH settings, then make sure that X11-Forwarding is unchecked. Restart the session if needed.

Hai Sir,

Sorry Sir, I have been sick the last week. Will try as you said and will come back to you soon.

Thank you for your patience and time.

Regards,
Karishma

1.

This worked sir. Thank you.

2.

Found the X11 forwarding checkbox Sir.

3.

Unfortunately, I am still struggling with this part sir. Will try again and let you know.

4.

And I am trying something different with the https://github.com/dusty-nv/jetson-inference/blob/master/examples/my-recognition/my-recognition.cpp code, Sir.

I need your help with this. Any kind of insights will be definitely helpful.

I changed this line in the above-mentioned my-recognition.cpp code (edited in Visual Studio Code) to:

const char* imgFilename = "/home/sesotec-ai-2/tk-recognition/polar_bear.jpg";

It worked. Now I have to test the same with my own input, which is a line (with [lines x channels x length] as its size). I am wondering how to replace that line in my-recognition.cpp with this input. Is it even possible?
My buffer can handle lines up to a size of 4096 pixels, 16 channels, and 512 lines.

Best regards,
Karishma

Hi @karishmathumu, instead of loading an image from file, you can use my cudaAllocMapped() function to allocate shared CPU/GPU memory, which you can then fill with your own image data. Then you can pass that CUDA buffer to imageNet::Classify().

For an example of allocating your own image buffers, see here: https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-image.md#image-allocation
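For example, here is a minimal sketch (not a drop-in replacement for my-recognition.cpp; the 640x480 size and the fill step are placeholders you would replace with your own buffer dimensions and data):

#include <cstdio>

#include <jetson-inference/imageNet.h>
#include <jetson-utils/cudaMappedMemory.h>

int main( int argc, char** argv )
{
    // load the classification network (GoogleNet unless --network is specified)
    imageNet* net = imageNet::Create(argc, argv);

    if( !net )
        return 1;

    // allocate a shared CPU/GPU buffer for one 640x480 packed-RGB (uchar3) image
    const int width  = 640;    // placeholder dimensions
    const int height = 480;

    uchar3* image = NULL;

    if( !cudaAllocMapped(&image, width, height) )
        return 1;

    // TODO: copy your own pixel data into image[] here
    // (CPU writes to this mapped buffer are visible to the GPU without an explicit copy)

    // classify the buffer, the same way my-recognition.cpp classifies the loaded file
    float confidence = 0.0f;
    const int img_class = net->Classify(image, width, height, &confidence);

    if( img_class >= 0 )
        printf("classified as #%i (%s) with %f%% confidence\n", img_class, net->GetClassDesc(img_class), confidence * 100.0f);

    cudaFreeHost(image);
    delete net;
    return 0;
}

Whether this maps onto your line buffer depends on its layout - imageNet expects a 2D image in one of the supported RGB/RGBA formats, so 16-channel line data would need to be converted or reduced to 3 channels first.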

Can it be applied to a 1-dimensional line buffer of RGB input?