Deep Reinforcement Learning in Robotics

I am working on the Two Days to a Demo tutorial. I am at the Deep Reinforcement Learning in Robotics section:
https://developer.nvidia.com/embedded/twodaystoademo#collapseFour

I am at “Building from Source”

Here are the steps:
$ sudo apt-get install cmake
$ git clone https://github.com/dusty-nv/jetson-reinforcement
$ cd jetson-reinforcement
$ git submodule update --init
$ mkdir build
$ cd build
$ cmake ../
$ make

An error occurs at the "cmake ../" step.
As a result, PyTorch is not available.

Please see the two attached log files: CMakeError.log and CMakeOutput.log.

I have tried twice, and both attempts failed at the same place. Here is what I did:

  1. Flash the SD card for the Nano
  2. Initial Ubuntu setup
  3. Jump directly to the step above.

Please help me! I want to try out this tutorial.

Thanks!

CMakeError.log (2.8 KB) CMakeOutput.log (45.6 KB)

Hi, I’ve had a few issues too. Usually it turns out some dependency has been updated to a newer version. I think they just reposted an updated dependency list showing which particular versions you need, as some of them had been left out as obsolete in the new rollout. Hope this helps! Feel free to share your fix, as I may run into it myself at some stage, and I’m always curious about others’ experience with the system. I’m new to the platform and haven’t programmed in nearly 15 years, so I might be way off, but good luck and best wishes :)

Thanks for your quick response! Yes, I agree with you. JetPack 4.4 might use a slightly different dependency tree, which causes the error and stops the build of PyTorch, which is part of jetson-reinforcement.

I am actually able to build PyTorch with the normal jetson-inference.

I am thinking about making PyTorch available from jetson-inference, though I don’t know how at this point.

Still, it would be best if the jetson-reinforcement branch worked by itself. Hopefully somebody can help.

Hi @dkcog123, the DQN code from jetson-reinforcement is based on an older version of PyTorch (0.3), and newer versions of PyTorch have breaking, non-backwards-compatible changes. I believe the version of Gazebo has also been updated with changes. I have added a disclaimer and some more recent resources about RL to the top of the readme, sorry about that -

https://github.com/dusty-nv/jetson-reinforcement/blob/master/README.md

PS, in case I haven’t said it in a while: thanks Dusty, you’re an inspiration! Between you and the team at NVIDIA, my business partner and I have started down the path of creating Invercargill’s first maker space. We’ve collected approximately 80% of the tooling to build lab space for 3 trainers and 8 lab stations. We’re setting it up with metal deposition stations, compressed air and vacuum feeds, 3.3, 5, 12, and 20 volt supplies at each station, an almost 1 cubic metre 3D printer with a built-in vacuum-forming bed, and a print farm of smaller-bed, higher-definition 3D printers using recycled plastic from a friend’s e-waste plant. We’re also doing training seminars and fitting each station with a Jetson Nano; my business partner is planning the details of that side, so we play to our strengths 😁! Sorry, I digress, exciting times. But to put a bit of perspective on why I wanted you to know how much the world needs more people like you: you have literally brought technology to “the a*#'hole of the world” (a quote apparently from Mick Jagger). Just something to remind you that you’re helping to make a better world, and we thank you ☺️

@therealleonmusk, thank you, it’s my pleasure. Great to hear about your maker space, that is very exciting! It’s wonderful that you are bringing all that technology into the hands of others.

I do not fully understand what you mean in your readme, which says:

note : this repo supports PyTorch v0.3 and JetPack 3.2. For newer examples, check out:
- openai_ros package
- gym_gazebo2 repo
- Isaac SDK samples

I’ve briefly checked out each of the above, but I cannot fully understand them. They seem independent from “Deep Reinforcement Learning in Robotics.”

Is there any way to execute “Deep Reinforcement Learning in Robotics” on my new Xavier NX?

If not, how can I create a similar capability with the Xavier NX?

Thanks in advance!

Hi @dkcog123, each of those resources is using reinforcement learning with ROS (Robot Operating System), the Gazebo robotics simulator, or the Isaac robotics SDK.

Not with the DQN code in that project as-is, because it was written for PyTorch 0.3, and with PyTorch 0.4 and newer there were breaking API changes in PyTorch such that it doesn’t run. Unfortunately, PyTorch 0.3 can’t be installed on Xavier NX because it is older and uses an older version of cuDNN than Xavier supports. The Gazebo simulator version has also been updated (it is now Gazebo 9).
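As an illustrative sketch (not code from jetson-reinforcement), a few of the 0.3-to-0.4 breaking changes that commonly stop old DQN code look like this:

```python
import torch

# 1) Variable and Tensor were merged in 0.4; tensors track gradients directly.
#    0.3-era code wrapped everything in torch.autograd.Variable.
x = torch.tensor([2.0], requires_grad=True)
loss = (x * 3).sum()

# 2) Reading a Python scalar changed from loss.data[0] (0.3) to loss.item() (0.4+).
print(loss.item())  # prints 6.0

# 3) Inference mode changed from Variable(..., volatile=True) to torch.no_grad().
with torch.no_grad():
    y = x * 3
print(y.requires_grad)  # prints False
```

Code written against the old patterns raises errors or warnings on 0.4 and newer, which is why the project’s DQN examples no longer run unmodified.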

You can try using this upstream PyTorch DQN tutorial, which I believe should work with the newer versions of PyTorch that can be installed on Xavier NX:

https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.h
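As a taste of what that tutorial covers, its training loop samples past transitions from a replay buffer. A minimal sketch of such a buffer (modeled on the tutorial’s ReplayMemory class; the tutorial’s exact code may differ) is:

```python
import random
from collections import deque, namedtuple

# One stored experience step: (s, a, s', r)
Transition = namedtuple("Transition", ("state", "action", "next_state", "reward"))

class ReplayMemory:
    def __init__(self, capacity):
        # deque with maxlen evicts the oldest transition automatically
        self.memory = deque(maxlen=capacity)

    def push(self, *args):
        """Save a transition."""
        self.memory.append(Transition(*args))

    def sample(self, batch_size):
        """Draw a random training batch (copied to a list for random.sample)."""
        return random.sample(list(self.memory), batch_size)

    def __len__(self):
        return len(self.memory)

memory = ReplayMemory(1000)
memory.push([0.0, 0.1], 1, [0.1, 0.2], 1.0)
print(len(memory))  # prints 1
```

Decorrelating updates by sampling uniformly from this buffer, rather than training on consecutive frames, is one of the key ideas behind DQN.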

I’ve hit another roadblock. Please help me. Thanks in advance!

Here is the error message:

(This is run in the l4t-ml:r32.4.2-py3 container)


NoSuchDisplayException Traceback (most recent call last)
in <module>
37 env.reset()
38 plt.figure()
---> 39 plt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).numpy(),
40 interpolation='none')
41 plt.title('Example extracted screen')

in get_screen()
12 # Returned screen requested by gym is 400x600x3, but is sometimes larger
13 # such as 800x1200x3. Transpose it into torch order (CHW).
---> 14 screen = env.render(mode='rgb_array').transpose((2, 0, 1))
15 # Cart is in the lower half, so strip off the top and bottom of the screen
16 _, screen_height, screen_width = screen.shape

/usr/local/lib/python3.6/dist-packages/gym/envs/classic_control/cartpole.py in render(self, mode)
172
173 if self.viewer is None:
---> 174 from gym.envs.classic_control import rendering
175 self.viewer = rendering.Viewer(screen_width, screen_height)
176 l, r, t, b = -cartwidth / 2, cartwidth / 2, cartheight / 2, -cartheight / 2

/usr/local/lib/python3.6/dist-packages/gym/envs/classic_control/rendering.py in <module>
23
24 try:
---> 25 from pyglet.gl import *
26 except ImportError as e:
27 raise ImportError('''

/usr/local/lib/python3.6/dist-packages/pyglet/gl/__init__.py in <module>
242 # trickery is for circular import
243 _pyglet.gl = _sys.modules[name]
---> 244 import pyglet.window

/usr/local/lib/python3.6/dist-packages/pyglet/window/__init__.py in <module>
1878 if not _is_pyglet_doc_run:
1879 pyglet.window = sys.modules[name]
---> 1880 gl._create_shadow_window()

/usr/local/lib/python3.6/dist-packages/pyglet/gl/__init__.py in _create_shadow_window()
218
219 from pyglet.window import Window
---> 220 _shadow_window = Window(width=1, height=1, visible=False)
221 _shadow_window.switch_to()
222

/usr/local/lib/python3.6/dist-packages/pyglet/window/xlib/__init__.py in __init__(self, *args, **kwargs)
163 self._event_handlers[message] = func
164
---> 165 super(XlibWindow, self).__init__(*args, **kwargs)
166
167 global _can_detect_autorepeat

/usr/local/lib/python3.6/dist-packages/pyglet/window/__init__.py in __init__(self, width, height, caption, resizable, style, fullscreen, visible, vsync, display, screen, config, context, mode)
568
569 if not display:
---> 570 display = pyglet.canvas.get_display()
571
572 if not screen:

/usr/local/lib/python3.6/dist-packages/pyglet/canvas/__init__.py in get_display()
92
93 # Otherwise, create a new display and return it.
---> 94 return Display()
95
96

/usr/local/lib/python3.6/dist-packages/pyglet/canvas/xlib.py in __init__(self, name, x_screen)
121 self._display = xlib.XOpenDisplay(name)
122 if not self._display:
---> 123 raise NoSuchDisplayException('Cannot connect to "%s"' % name)
124
125 screen_count = xlib.XScreenCount(self._display)

NoSuchDisplayException: Cannot connect to "None"

It seems like it is not able to find the display inside the container.

Try launching the container like this to enable the display inside the container:

$ sudo xhost +si:localuser:root
$ sudo docker run --runtime nvidia -it --rm --network host -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix/:/tmp/.X11-unix \
    nvcr.io/nvidia/l4t-pytorch:r32.4.2-pth1.5-py3

If that still doesn’t work, you might want to try running this outside of container first.
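A "Cannot connect to None" error generally means no DISPLAY environment variable reached the process. A tiny pre-flight check (the helper name here is made up for illustration, it is not part of gym or pyglet) can surface the problem before the full script runs:

```python
import os

def x_display_available(environ=None):
    """Return the X display name (e.g. ':0') if one is set, else None."""
    # Fall back to the real process environment when none is passed in
    environ = os.environ if environ is None else environ
    return environ.get("DISPLAY") or None

# With a display set, pyglet can connect; without one, it raises
# NoSuchDisplayException like the traceback above.
print(x_display_available({"DISPLAY": ":0"}))  # prints :0
print(x_display_available({}))                 # prints None
```

Running this inside the container, before the DQN script, quickly tells you whether the -e DISPLAY and X11 socket mount took effect.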

Fantastic! Thanks, Dusty!
It worked!

Now I am trying to run outside of the l4t-ml container. I have installed:
– gym (this required installing gfortran first; then it worked)

Then I hit another roadblock.
So far, I have reinstalled libGL using the following command:
– sudo apt-get install --reinstall libgl1-mesa-dri

I don’t see any change; both cases give the same error message below.

Thanks in advance!

$ python3 reinforcement_q_learning.py
MESA-LOADER: failed to open swrast (search paths /usr/lib/aarch64-linux-gnu/dri:$${ORIGIN}/dri:/usr/lib/dri)
libGL error: failed to load driver: swrast
Traceback (most recent call last):
File "reinforcement_q_learning.py", line 285, in <module>
plt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).numpy(),
File "reinforcement_q_learning.py", line 260, in get_screen
screen = env.render(mode='rgb_array').transpose((2, 0, 1))
File "/home/dave/.local/lib/python3.6/site-packages/gym/envs/classic_control/cartpole.py", line 174, in render
from gym.envs.classic_control import rendering
File "/home/dave/.local/lib/python3.6/site-packages/gym/envs/classic_control/rendering.py", line 25, in <module>
from pyglet.gl import *
File "/home/dave/.local/lib/python3.6/site-packages/pyglet/gl/__init__.py", line 244, in <module>
import pyglet.window
File "/home/dave/.local/lib/python3.6/site-packages/pyglet/window/__init__.py", line 1880, in <module>
gl._create_shadow_window()
File "/home/dave/.local/lib/python3.6/site-packages/pyglet/gl/__init__.py", line 220, in _create_shadow_window
_shadow_window = Window(width=1, height=1, visible=False)
File "/home/dave/.local/lib/python3.6/site-packages/pyglet/window/xlib/__init__.py", line 165, in __init__
super(XlibWindow, self).__init__(*args, **kwargs)
File "/home/dave/.local/lib/python3.6/site-packages/pyglet/window/__init__.py", line 591, in __init__
context = config.create_context(gl.current_context)
File "/home/dave/.local/lib/python3.6/site-packages/pyglet/gl/xlib.py", line 204, in create_context
return XlibContextARB(self, share)
File "/home/dave/.local/lib/python3.6/site-packages/pyglet/gl/xlib.py", line 314, in __init__
super(XlibContext13, self).__init__(config, share)
File "/home/dave/.local/lib/python3.6/site-packages/pyglet/gl/xlib.py", line 218, in __init__
raise gl.ContextException('Could not create GL context')
pyglet.gl.ContextException: Could not create GL context

Hmm if you reboot and then sudo apt-get install mesa-utils, are you able to run glxgears and glxinfo?

Yes, I rebooted and installed mesa-utils, and the glxgears program runs.

However, I am still getting an error.
Thus, I installed pyglet with:
pip3 install pyglet
Still, I am getting the same error below.
I’ve checked the same DQL Python code again in the l4t-ml container with gym and python3-pk installed; yes, it still works.

Please help. Thanks in advance!

$ python3 reinforcement_q_learning.py
MESA-LOADER: failed to open swrast (search paths /usr/lib/aarch64-linux-gnu/dri:$${ORIGIN}/dri:/usr/lib/dri)
libGL error: failed to load driver: swrast
Traceback (most recent call last):
File "reinforcement_q_learning.py", line 285, in <module>
plt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).numpy(),
File "reinforcement_q_learning.py", line 260, in get_screen
screen = env.render(mode='rgb_array').transpose((2, 0, 1))
File "/home/dave/.local/lib/python3.6/site-packages/gym/envs/classic_control/cartpole.py", line 174, in render
from gym.envs.classic_control import rendering
File "/home/dave/.local/lib/python3.6/site-packages/gym/envs/classic_control/rendering.py", line 25, in <module>
from pyglet.gl import *
File "/home/dave/.local/lib/python3.6/site-packages/pyglet/gl/__init__.py", line 243, in <module>
import pyglet.window
File "/home/dave/.local/lib/python3.6/site-packages/pyglet/window/__init__.py", line 1897, in <module>
gl._create_shadow_window()
File "/home/dave/.local/lib/python3.6/site-packages/pyglet/gl/__init__.py", line 220, in _create_shadow_window
_shadow_window = Window(width=1, height=1, visible=False)
File "/home/dave/.local/lib/python3.6/site-packages/pyglet/window/xlib/__init__.py", line 173, in __init__
super(XlibWindow, self).__init__(*args, **kwargs)
File "/home/dave/.local/lib/python3.6/site-packages/pyglet/window/__init__.py", line 606, in __init__
context = config.create_context(gl.current_context)
File "/home/dave/.local/lib/python3.6/site-packages/pyglet/gl/xlib.py", line 204, in create_context
return XlibContextARB(self, share)
File "/home/dave/.local/lib/python3.6/site-packages/pyglet/gl/xlib.py", line 314, in __init__
super(XlibContext13, self).__init__(config, share)
File "/home/dave/.local/lib/python3.6/site-packages/pyglet/gl/xlib.py", line 218, in __init__
raise gl.ContextException('Could not create GL context')
pyglet.gl.ContextException: Could not create GL context

I freshly flashed the SD card and tested the DQL Python code again in the same l4t-ml container. It worked!

I am still curious about why it did not work before, but I am good now.
I also watched the NX intro video while building software today. Well done! I really enjoyed it.

I plan to try out Isaac SDK 2020.1.

Thanks again, Dusty!