I want to compile some work with jetson-inference. How can I build the environment on my PC? (Windows 10, GeForce GTX 1060)
Hi @diegoapp, I have never attempted to build the project on Windows or cross-compile it, so I’m not sure if it’s possible. There are several dependencies like CUDA Toolkit, TensorRT, GStreamer, and Linux-specific code in jetson-utils that you would need to resolve first if you were to attempt it. I would recommend sticking with building it on Jetson, or using the jetson-inference container. In addition to compiling, the build does a number of other setup steps on your Jetson, like installing packages, downloading models, creating symlinks, installing PyTorch/torchvision, etc. (if you are using the container, this is done inside the container)
hi @dusty_nv
Thanks for answering!! I’ve switched to Ubuntu 20.04 LTS and I managed to build it from source successfully and run all the image examples. I only have one problem: when trying to classify videos, it gives me this error:
[gstreamer] initialized gstreamer, version 1.16.2.0
[gstreamer] gstDecoder – creating decoder for videos/jellyfish.mkv
[gstreamer] gstDecoder – discovered video resolution: 1280x720 (framerate 29.970030 Hz)
[gstreamer] gstDecoder – discovered video caps: video/x-h264, level=(string)4.1, profile=(string)high, codec_data=(buffer)01640029ffe1001867640029acd9405005bb0110000027a000094720f183196001000568ebecb22c, stream-format=(string)avc, alignment=(string)au, width=(int)1280, height=(int)720, framerate=(fraction)30000/1001, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true
[gstreamer] gstDecoder – pipeline string:
[gstreamer] filesrc location=videos/jellyfish.mkv ! matroskademux ! queue ! h264parse ! omxh264dec ! video/x-raw ! appsink name=mysink
[gstreamer] gstDecoder – failed to create pipeline
[gstreamer] (no element “omxh264dec”)
[gstreamer] gstDecoder – failed to create decoder for
I can’t find a way to install omxh264dec on an x86-64 machine. Maybe you know a fix or a workaround?
Hi @diegoapp, those omx elements are for Jetson. You would need to replace them with the normal GStreamer elements in gstDecoder.cpp. Or it may just be easier to use OpenCV capture to get the videos.
Hello @dusty_nv
How do I replace them? Do I have to change the code in each model? And if it were like that how would I go about doing that?
Thanks for the help!!!
You would need to change jetson-inference/utils/codec/gstDecoder.cpp to use the normal x86 GStreamer elements. It is not a change for each model, just gstDecoder.cpp. Then you re-run make and sudo make install.
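In case it helps, here is a rough sketch of the kind of pipeline-string change meant (not the exact gstDecoder.cpp code). avdec_h264 and videoconvert are the standard software elements from gstreamer1.0-libav / gstreamer1.0-plugins-base:

// Illustrative sketch only, not the actual gstDecoder.cpp: swap the
// Jetson-only OMX decoder for the avdec_h264 software decoder on x86.
#include <sstream>
#include <string>

std::string buildDecoderPipeline( const std::string& videoPath )
{
    std::ostringstream ss;
    ss << "filesrc location=" << videoPath
       << " ! matroskademux ! queue ! h264parse ! ";
#if defined(__aarch64__)
    ss << "omxh264dec ! ";                    // Jetson hardware decoder
#else
    ss << "avdec_h264 ! videoconvert ! ";     // generic x86 software decoder
#endif
    ss << "video/x-raw ! appsink name=mysink";
    return ss.str();                          // hand this string to gst_parse_launch()
}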
Hello @dusty_nv
That fixed it, but only for one decoding run. Let me explain myself: I changed to the x86 GStreamer elements, tried it, and it worked, but when I tried it a second time these errors showed up:
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] engine.cpp (1022) - Cuda Error in executeInternal: 700 (an illegal memory access was encountered)
[cuda] an illegal memory access was encountered (error 700) (hex 0x2BC)
RingBuffer – failed to allocate zero-copy buffer of 1382415 bytes
[gstreamer] gstDecoder – failed to allocate 4 buffers (1382415 bytes each)
[TRT] FAILED_EXECUTION: std::exception
[TRT] failed to execute TensorRT context on device GPU
[TRT] imageNet::Process() failed
[gstreamer] gstDecoder – failed to retrieve next ringbuffer for writing
[TRT] …/rtExt/cuda/cudaFusedConvActRunner.cpp (95) - Cuda Error in destroyFilterTexture: 700 (an illegal memory access was encountered)
[TRT] INTERNAL_ERROR: std::exception
[TRT] …/rtSafe/safeRuntime.cpp (32) - Cuda Error in free: 700 (an illegal memory access was encountered)
terminate called after throwing an instance of ‘nvinfer1::CudaError’
what(): std::exception
Aborted (core dumped)
I rebooted my PC and everything works fine, but only the first time. Also there is no output video. Maybe it’s because this error still shows up:
[gstreamer] gstEncoder – failed to create pipeline
[gstreamer] (no element “omxh264enc”)
[gstreamer] gstEncoder – failed to create encoder engine
This happens even though I changed the decoders. I’ll attach my gstDecoder.cpp; maybe I changed something I wasn’t supposed to.
gstDecoder.cpp (29.1 KB)
Thanks a lot for helping me!!!
If you want to encode video, you would need to change the encoders too in gstEncoder.cpp.
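For the encoder, a comparable rough sketch (again, not the exact gstEncoder.cpp code) would be to swap omxh264enc for the x264enc software encoder from gstreamer1.0-plugins-ugly:

// Illustrative sketch only, not the actual gstEncoder.cpp code.
#include <sstream>
#include <string>

std::string buildEncoderPipeline( const std::string& outputPath )
{
    std::ostringstream ss;
    ss << "appsrc name=mysource ! videoconvert ! ";
#if defined(__aarch64__)
    ss << "omxh264enc ! ";                  // Jetson hardware encoder
#else
    ss << "x264enc tune=zerolatency ! ";    // generic x86 software encoder
#endif
    ss << "h264parse ! matroskamux ! filesink location=" << outputPath;
    return ss.str();                        // hand this string to gst_parse_launch()
}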
Do you get the memory errors if you just process an image?
Hi @dusty_nv
When processing images, the image gets rendered and everything is fine; only this error shows up, but it still works:
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
But after changing the encoder, video processing doesn’t work even the first time; I get this error:
[cuda] an illegal memory access was encountered (error 700) (hex 0x2BC)
[cuda] jetson-inference/build/x86_64/include/jetson-utils/cudaMappedMemory.h:54
RingBuffer – failed to allocate zero-copy buffer of 1382415 bytes
[gstreamer] gstDecoder – failed to allocate 4 buffers (1382415 bytes each)
[TRT] engine.cpp (1022) - Cuda Error in executeInternal: 700 (an illegal memory access was encountered)
[gstreamer] gstreamer mysink taglist, video-codec=(string)“H.264\ (High\ Profile)”, language-code=(string)en, minimum-bitrate=(uint)20293787, maximum-bitrate=(uint)47758085, bitrate=(uint)35584836;
[gstreamer] gstreamer mysink taglist, video-codec=(string)“H.264\ (High\ Profile)”, language-code=(string)en, minimum-bitrate=(uint)18777064, maximum-bitrate=(uint)47758085, bitrate=(uint)34430981;
[TRT] FAILED_EXECUTION: std::exception
[TRT] failed to execute TensorRT context on device GPU
[TRT] imageNet::Process() failed
Traceback (most recent call last):
File “imagenet.py”, line 68, in
class_id, confidence = net.Classify(img)
Exception: jetson.inference – imageNet.Classify() encountered an error classifying the image
[TRT] …/rtExt/cuda/cudaFusedConvActRunner.cpp (95) - Cuda Error in destroyFilterTexture: 700 (an illegal memory access was encountered)
[TRT] INTERNAL_ERROR: std::exception
[TRT] …/rtSafe/safeRuntime.cpp (32) - Cuda Error in free: 700 (an illegal memory access was encountered)
terminate called after throwing an instance of ‘nvinfer1::CudaError’
what(): std::exception
Aborted (core dumped)
I also get these errors but I don’t think they’re related:
jetson.utils – compiled without NumPy array conversion support (warning)
jetson.utils – if you wish to have support for converting NumPy arrays,
jetson.utils – first run ‘sudo apt-get install python-numpy python3-numpy’
I ran that last command and everything is already up to date.
Edit:
I tried the video processing and got the errors I mentioned before; then I tried an image and that worked. After replying to you, I tried to process video again and now it renders the video, but when it saves the processed video the .mkv file is empty. It might be because of this line, which shows up a lot:
[gstreamer] gstEncoder – pipeline full, skipping frame (1382400 bytes)
Edit 2:
I rebooted and tried to process video again, and I get these errors:
[cuda] an illegal memory access was encountered (error 700) (hex 0x2BC)
[cuda] jetson-inference/build/x86_64/include/jetson-utils/RingBuffer.inl:119
[cuda] /jetson-inference/build/x86_64/include/jetson-utils/cudaMappedMemory.h:51
RingBuffer – failed to allocate zero-copy buffer of 1382415 bytes
[gstreamer] gstDecoder – failed to allocate 4 buffers (1382415 bytes each)
[TRT] …/rtSafe/cuda/caskConvolutionRunner.cpp (408) - Cask Error in checkCaskExecError: 11 (Cask Convolution execution)
[cuda] an illegal memory access was encountered (error 700) (hex 0x2BC)
RingBuffer – failed to allocate zero-copy buffer of 1382415 bytes
[gstreamer] gstDecoder – failed to allocate 4 buffers (1382415 bytes each)
[TRT] FAILED_EXECUTION: std::exception
[TRT] failed to execute TensorRT context on device GPU
[TRT] imageNet::Process() failed
[TRT] …/rtExt/cuda/cudaFusedConvActRunner.cpp (95) - Cuda Error in destroyFilterTexture: 700 (an illegal memory access was encountered)
[TRT] INTERNAL_ERROR: std::exception
[TRT] …/rtSafe/safeRuntime.cpp (32) - Cuda Error in free: 700 (an illegal memory access was encountered)
terminate called after throwing an instance of ‘nvinfer1::CudaError’
what(): std::exception
Aborted (core dumped)
Thanks for helping, cheers.
Hi @diegoapp, unfortunately I am unable to provide much in-depth assistance to get it working on x86, because the project is only supported on Jetson devices.
Are you able to run just the video-viewer program without inference to test the video decode/encode?
Hi @dusty_nv
When I try the video-viewer program in the bin folder, videos aren’t saved either. Something must be wrong in the encoder or in how memory is freed, because I’m getting memory errors:
in glDisplay.cpp:603 --> CUDA(cudaMemcpy(tex_map, img, interopTex->GetSize(), cudaMemcpyDeviceToDevice));
in cudaMappedMemory.h:54 --> if( CUDA_FAILED(cudaHostGetDevicePointer(gpuPtr, *cpuPtr, 0)) )
I’m trying to find what’s wrong. If you have any ideas to try, tell me :)
Edit:
I’ve found that when I run the processing, the encoder prints these options:
gstEncoder video options:
– URI: file:////jetson-inference/build/x86_64/bin/videos/test/jellyfish_resnet18.mkv
- protocol: file
- location: videos/test/jellyfish_resnet18.mkv
- extension: mkv
– deviceType: file
– ioType: output
– codec: h264
– width: 0
– height: 0
– frameRate: 30.000000
– bitRate: 4000000
– numBuffers: 4
– zeroCopy: true
– flipMethod: none
– loop: 0
The width and height are 0; maybe that’s where the problem is.
Cheers
It’s normal for the encoder width/height to be 0 before it picks up the incoming width/height from the first frame it encodes, but the --output-width and --output-height options let you set them manually.
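For example, a test invocation might look something like this (the filenames here are just placeholders):

video-viewer --output-width=1280 --output-height=720 videos/jellyfish.mkv videos/test/output.mkv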
If you also run video-viewer with the --headless flag, it will turn off glDisplay; see if that helps. This repo is set up for memory management on Jetson, where CPU/GPU memory is shared and there do not need to be memory copies between CPU and GPU.
Hello @dusty_nv
Running video-viewer with the --headless flag still doesn’t produce a video output. I get this error in the last few lines:
video-viewer: failed to capture video frame
And this line doesn’t show up in red, but I think it’s the last thing to fix to make it work:
[gstreamer] gstEncoder – pipeline full, skipping frame
In video processing it always says “pipeline full, skipping frame”. I’ve been searching for the error for hours and I’ve reinstalled the whole repo a couple of times. But I think the main problem might be in the flags; I get this error (the lines aren’t red):
[gstreamer] gstreamer mysource ERROR Internal data stream error.
[gstreamer] gstreamer Debugging info: gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:pipeline1/GstAppSrc:mysource:
streaming stopped, reason not-negotiated (-4)
And in this forum I found this answer:
http://gstreamer-devel.966125.n4.nabble.com/Internal-data-flow-error-td4679306.html
not-negotiated usually means a problem with caps/format negotiation somewhere.
Maybe that means something to you, because when I tried changing the flags in gstDecoder.cpp I kept getting an error even after reverting everything back, and I had to reinstall the repo.
Cheers
Typically the dataflow error occurs when something is misconfigured in the pipeline; for example, the elements you swapped in for mine may have different input/output formats (caps). At this point, you may find it easier to just use OpenCV capture on PC. There are also TensorRT samples for PC. The jetson-inference project is built/tested on Jetson and not on PC - sorry about that.
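For reference, a minimal OpenCV capture loop on PC looks roughly like this (the filename is just a placeholder; it bypasses the jetson-utils GStreamer code entirely):

// Minimal sketch of reading video frames with OpenCV on x86, bypassing
// the jetson-utils GStreamer decoder/encoder. Filename is a placeholder.
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("videos/jellyfish.mkv");
    if( !cap.isOpened() )
        return 1;

    cv::Mat frame;
    while( cap.read(frame) )
    {
        // hand the frame off to your inference / display code here
        cv::imshow("frame", frame);
        if( cv::waitKey(1) == 27 )   // press ESC to quit
            break;
    }
    return 0;
}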
Okay, thank you so much @dusty_nv