Hi @karishmathumu, due to the incomplete output from v4l2-ctl (there are no output formats listed), I think there is an underlying issue with your USB camera and/or its driver. Are you able to try a different USB camera?
P.S.: A weird part now is that a window looks like it is about to open for less than a second, then suddenly shuts off showing this error. Before, the error would appear directly, without that window popping up.
Ah okay, I see that you are trying to view the GUI window remotely from Windows - that won’t work with jetson-utils because it uses OpenGL/CUDA interoperability and doesn’t support X11 tunneling. You would need to attach a physical display to your Jetson to view it, or use RTP to stream the output video from your Jetson to your PC and view it there: https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-streaming.md#rtp
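For example, adapted from that page (substitute your PC's IP address for the placeholder), you could transmit your camera feed from the Jetson like this:

$ video-viewer /dev/video0 rtp://&lt;your-pc-ip&gt;:1234   # transmit the V4L2 camera from the Jetson over RTP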
a) For these commands:

$ video-viewer --input-codec=h264 rtp://@:1234                 # receive on localhost port 1234
$ video-viewer --input-codec=h264 rtp://224.0.0.0:1234         # subscribe to multicast group
$ video-viewer --input-codec=h264 rtp://192.168.171.101:1234
$ video-viewer --input-codec=h264 rtp://192.168.171.2:1234
I am not sure what to do. I am extremely sorry for bothering you time and again. It is just that this is all completely new to me, and I am trying to learn how to use the Jetson.
How do I know if my port is 1234 or something else?
For the video streaming, transporting, recording, and playback, do I have to run terminals in parallel on the main Jetson as well as on my laptop SSHed into the Jetson?
Or do I have to run parallel terminals on my PC itself (one SSHed into the Jetson and the other without SSH) to get it right?
Can ./imagenet (imagenet.cpp) /dev/video0 be used for live recognition when built outside the jetson-inference directory, where the contents are the same as in the imagenet.cpp file, along with a CMakeLists.txt file?
P.S.: My idea is to apply the kind of code in my-recognition or imagenet.cpp to the live camera feed, instead of only calling up single images like polar.jpg, etc.
Is it possible to implement the image recognition and object detection in C, instead of C++ and Python?
int usage()
{
	printf("usage: imagenet [--help] [--network=NETWORK] ...\n");
	printf("                input_URI [output_URI]\n\n");
	printf("Classify a video/image stream using an image recognition DNN.\n");
	printf("See below for additional arguments that may not be shown above.\n\n");
	printf("positional arguments:\n");
	printf("    input_URI       resource URI of input stream  (see videoSource below)\n");
	printf("    output_URI      resource URI of output stream (see videoOutput below)\n\n");
	printf("%s", imageNet::Usage());
	printf("%s", videoSource::Usage());
	printf("%s", videoOutput::Usage());
	printf("%s", Log::Usage());

	return 0;
}

int main( int argc, char** argv )
{
	/*
	 * parse command line
	 */
	commandLine cmdLine(argc, argv);
Hi @karishmathumu, these commands are for receiving RTP on your Jetson from an external source, so you don't need to run them. What you want to do is transmit RTP from your Jetson to your PC, which is what the commands under b) that you ran do.
The errors say that you are missing the nvv4l2h264enc element, similar to the error you were getting before with the missing nvv4l2decoder element, so it seems these NVIDIA GStreamer elements still aren't installed on your device.
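If you want to verify which elements are present, gst-inspect-1.0 (part of GStreamer) will print the plugin details if an element exists, or "No such element" if it doesn't:

$ gst-inspect-1.0 nvv4l2h264enc
$ gst-inspect-1.0 nvv4l2decoder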
Also, it reports that it's unable to open your MIPI CSI camera because you ran it with csi://0. Since I believe you are using a V4L2 camera, run it with /dev/video0 instead of csi://0.
The port is whatever you set it to when you launched the command on the Jetson. So if you run imagenet /dev/video0 rtp://192.168.171.2:1234, it will be port 1234.
You would run one SSH terminal into the Jetson that runs imagenet /dev/video0 rtp://192.168.171.2:1234
Then you run another terminal on your PC that runs VLC or GStreamer to receive/view the RTP stream.
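For reference, a typical receiving pipeline on the PC side, adapted from the aux-streaming page (assuming H.264 on port 1234 - match whatever you launched on the Jetson):

$ gst-launch-1.0 -v udpsrc port=1234 \
     caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264" ! \
     rtph264depay ! decodebin ! videoconvert ! autovideosink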
I think you have the right idea; you just need to specify /dev/video0, because if you don't specify an input source, it uses csi://0 by default. So try running ./test /dev/video0 instead.
I would like to explain my hardware setup to give an idea. I am remotely connected from my PC (Windows OS) to the Jetson Orin (which has a monitor) through the MobaXterm application.
My goal is to get all the video streams that are working on the actual Jetson monitor onto my PC.
The images and imagenet are working on the actual Jetson, but are not being imported/transported to my PC.
Meanwhile, the test file is not working on the actual Jetson either.
This is what resulted (I am not sure if it is all an error or not), but the video stream is still not being transported from the original Jetson to my PC over the remote SSH connection.
Whereas all of these commands give the correct results, as shown in the tutorial, on the actual Jetson monitor (not SSHed).
Videos are streamed with live recognition and image and object detection on the main Jetson monitor, but when I try to run all the same commands from my PC through SSH into the Jetson, none of them work.
Also, I have some other questions:
Is it possible to implement the image recognition and object detection in C on the Jetson, instead of C++ and Python?
Can I add my own dataset images (as per my requirements) to these open-source datasets, and train and test on them?
No, the jetson-inference library is implemented with C++ and Python APIs.
You can find training tutorials included in the Hello AI World. In theory you could augment other datasets with your own.
I think your issue is that you have a display attached to the Jetson, but are running the commands from the SSH terminal, so it tries to use the display but can’t. Try running the commands from the terminal on the Jetson itself. Or run export DISPLAY=:0 in your SSH terminal first. Or run the program with the --headless flag and it will skip trying to open an OpenGL window on the Jetson.
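Concretely, the three options look like this (the binary name and addresses are just examples from this thread):

$ export DISPLAY=:0                                          # in your SSH session - the window opens on the Jetson's attached display
$ imagenet /dev/video0                                       # run from a terminal on the Jetson itself
$ imagenet --headless /dev/video0 rtp://192.168.171.2:1234   # skip the OpenGL window and stream the output instead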
2) SSH: Then I also tried the below (the result ran with no errors, other than there being no display on my PC, and an OpenGL error: failed to open X11 server connection, and failed to create X11 window).
SSD-Inception-v2 isn't selected by default in the Model Downloader tool; you need to explicitly select it. Or just use the SSD-Mobilenet-v2 model instead. The other error you get is from specifying an invalid image filename to process (input.jpg doesn't exist; change it to an image under the images/ folder).
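If you need to re-open the Model Downloader to select SSD-Inception-v2, it can be re-run from the tools directory (path as in the Hello AI World setup; adjust to wherever you cloned the repo):

$ cd jetson-inference/tools
$ ./download-models.sh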
I think you have X11 Tunneling enabled in your SSH connection, please disable it and then it won’t try creating the window and giving you those OpenGL errors.
Coming to the part using GStreamer as in that tutorial, I ran the below part in a terminal. I visually see nothing happening when I run these commands. Am I missing something, Sir?
Should I run the below commands in parallel in two terminals like this? (I ran them with the Jetson monitor disconnected, through SSH, and without export DISPLAY=:0.)
I am not sure how to disable the X11 tunneling, Sir. Would you please let me know how to do that?
I ran the below command without first using this command: export DISPLAY=:0
You should run this on your PC (not through SSH); it should execute on your PC, not on your Jetson. You could also try using VLC on your PC, but I've found viewing the video on the PC to be more reliable through GStreamer.
It's in the MobaXterm session options - go to your Session settings -> Advanced SSH Settings, then make sure that X11-Forwarding is unchecked. Restart the session if needed.
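(For reference, if you ever connect with a plain command-line OpenSSH client instead of MobaXterm, X11 forwarding is off by default unless you pass -X or -Y, and can be explicitly disabled with the -x flag: ssh -x user@&lt;jetson-ip&gt;)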
It worked. Now I have to test the same with my own input, which is a line (having [lines x channels x length] as its size). I am wondering how I can replace the image in my-recognition.cpp with this. Is it even possible?
My buffer can handle lines up to a size of 4096 pixels, 16 channels, and 512 lines.
Hi @karishmathumu, instead of loading an image from file, you can use my cudaAllocMapped() function to allocate shared CPU/GPU memory, which you can then fill with your own image data. Then you can pass that CUDA buffer to imageNet::Classify().
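A minimal sketch of that flow is below, assuming a recent jetson-inference/jetson-utils build (it uses the templated cudaAllocMapped() and imageNet::Classify() overloads); the dimensions and the fill step are placeholders for your own line data. Note that the network expects image-formatted input (e.g. uchar3 RGB), so 16-channel line data would first need to be mapped into such a format:

#include <jetson-inference/imageNet.h>
#include <jetson-utils/cudaMappedMemory.h>
#include <stdio.h>

int main( int argc, char** argv )
{
	// placeholder dimensions - substitute your line buffer's size
	const uint32_t width  = 4096;
	const uint32_t height = 512;

	// allocate shared CPU/GPU (zero-copy) memory for an RGB image
	uchar3* image = NULL;

	if( !cudaAllocMapped(&image, width, height) )
		return 1;

	// ... fill 'image' with your own data here - CPU writes are visible to the GPU ...

	// load the recognition network (default model)
	imageNet* net = imageNet::Create();

	if( !net )
		return 1;

	// classify the CUDA buffer directly - no image file involved
	float confidence = 0.0f;
	const int classID = net->Classify(image, width, height, &confidence);

	if( classID >= 0 )
		printf("classified as '%s' (confidence %.4f)\n", net->GetClassDesc(classID), confidence);

	delete net;
	CUDA_FREE_HOST(image);   // releases the mapped allocation (macro from jetson-utils)
	return 0;
}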