Does headless mode (docker container on cloud) support a Python environment?

I am using Isaac sim with AWS.

  1. First, I run the Isaac Sim container with an interactive Bash session (Terminal 1 on AWS):

$ sudo docker run --name isaac-sim --entrypoint bash -it --gpus all -e "ACCEPT_EULA=Y" --rm --network=host \
-v ~/docker/isaac-sim/cache/kit/nv_shadercache:/isaac-sim/kit/cache/Kit/103.1/6e2a27c0/nv_shadercache:rw \
-v ~/docker/isaac-sim/cache/ov:/root/.cache/ov:rw \
-v ~/docker/isaac-sim/cache/pip:/root/.cache/pip:rw \
-v ~/docker/isaac-sim/cache/glcache:/root/.cache/nvidia/GLCache:rw \
-v ~/docker/isaac-sim/cache/computecache:/root/.nv/ComputeCache:rw \
-v ~/docker/isaac-sim/logs:/root/.nvidia-omniverse/logs:rw \
-v ~/docker/isaac-sim/config:/root/.nvidia-omniverse/config:rw \
-v ~/docker/isaac-sim/data:/root/.local/share/ov/data:rw \
-v ~/docker/isaac-sim/documents:/root/Documents:rw \
nvcr.io/nvidia/isaac-sim:2022.1.0

  2. Then I start Isaac Sim in native livestream mode (Terminal 1 on AWS):
    $ ./runheadless.native.sh

  3. I launch the Omniverse Streaming Client (local).

  4. Then I open another terminal into the running docker container (Terminal 2 on AWS):

$ sudo docker exec -it isaac-sim /bin/bash

  5. Finally, I start a python script inside the container (Terminal 2 on AWS):
    $ ./python.sh test.py

But it fails to run.

Is it possible to run a Python environment (not the Script Editor) inside the docker container?
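For anyone reproducing this, a minimal smoke-test script (a hypothetical test.py; it assumes the standard SimulationApp entry point and must be run inside the container with ./python.sh) would be:

```python
# test.py -- minimal smoke test for the container's Python environment.
# Hypothetical example; run inside the container with: ./python.sh test.py
from omni.isaac.kit import SimulationApp

# Headless: no display is needed, so this should work over SSH / docker exec.
simulation_app = SimulationApp({"headless": True})

import omni  # importing a Kit module proves the environment is wired up
print("Isaac Sim Python environment is working")

simulation_app.close()
```

If this prints the message and exits cleanly, the container's Python environment itself is fine and the problem lies elsewhere (for example, two apps competing for the GPU).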

Hi. Does the python script work if you do not run ./runheadless.native.sh first, or if you run it in a second container?

After several attempts, I was able to run a python script. But I have a question: I am following the Creating a New RL Environment tutorial from the Isaac Gym tutorials. After RL training of the cartpole, headless=False should be set for visualization when performing inference. In that case, how can I visualize the trained cartpole policy while it runs? I tried to connect to the docker container on AWS with the Omniverse Streaming Client, but the connection failed.

To view the Isaac Sim UI when using a container, set headless=True and enable_extension("omni.kit.livestream.native"). Take a look at livestream.py as an example.
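For anyone who cannot open livestream.py right away, the pattern it uses looks roughly like the sketch below. This assumes the Isaac Sim 2022.1 Python API and must be run inside the container via ./python.sh; the /app/livestream/enabled settings key is an assumption, so verify it against the livestream.py shipped with your version.

```python
# Sketch of the livestream.py pattern (Isaac Sim 2022.1; run via ./python.sh).
from omni.isaac.kit import SimulationApp

# Start Kit headless; the livestream extension serves the UI remotely instead.
simulation_app = SimulationApp({"headless": True})

from omni.isaac.core.utils.extensions import enable_extension

simulation_app.set_setting("/app/livestream/enabled", True)  # assumed settings key
enable_extension("omni.kit.livestream.native")  # native livestream server

# Keep the app alive and pump frames so the Streaming Client receives updates.
while simulation_app.is_running():
    simulation_app.update()

simulation_app.close()
```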

Thanks so much. I will try it now.

Finally, I can run python script and visualize simulation. Thanks so much.

I have one more question. I am trying to use the RL extension in the docker container. The tutorial code is as below.

# create isaac environment
from omni.isaac.gym.vec_env import VecEnvBase
env = VecEnvBase(headless=False)

# create task and register task
from cartpole_task import CartpoleTask
task = CartpoleTask(name="Cartpole")
env.set_task(task, backend="torch")

# import stable baselines
from stable_baselines3 import PPO

# Run inference on the trained policy
model = PPO.load("ppo_cartpole")
env._world.reset()
obs = env.reset()
while env._simulation_app.is_running():
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)

env.close()

In this case, how can I enable livestreaming for remote access?

For livestreaming, SimulationApp should be used, so I tried to add SimulationApp, but it failed.

Sorry for bothering you. I tried to read the RL extension API documentation, but a detailed explanation is missing.

Hi.

Try running it headless:

env = VecEnvBase(headless=True)

Then also enable the livestream server:

from omni.isaac.core.utils.extensions import enable_extension
enable_extension("omni.kit.livestream.native")

I tried it, but it is not working perfectly.

The Omniverse Streaming Client is connected, but the objects are not loaded and the “start” button cannot be pressed.

terminal display is as below.

But when I press Ctrl-C in the terminal, the objects are shown on the display. However, it no longer works because the app shuts down.

Is there any solution?

Hi. Did you get the issue resolved yet?
Please send us the latest copy of your .py file so we can reproduce it.

Hi, I didn’t solve it.

The same problem arises.

How can I get your email?

My code is as below:

# create isaac environment
from omni.isaac.gym.vec_env import VecEnvBase
env = VecEnvBase(headless=True)

from omni.isaac.core.utils.extensions import enable_extension
enable_extension("omni.kit.livestream.native")

# create task and register task
from cartpole_task import CartpoleTask
task = CartpoleTask(name="Cartpole")
env.set_task(task, backend="torch")

# import stable baselines
from stable_baselines3 import PPO

# Run inference on the trained policy
model = PPO.load("ppo_cartpole")
env._world.reset()
obs = env.reset()
while env._simulation_app.is_running():
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)

env.close()

Thanks. Is this the edited cartpole_task.py from the RL example?

I’ll try this and file a bug.

Thank you, as always, for your hard work.

I think I found what’s missing. The update() call:

while env._simulation_app.is_running():
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    env._simulation_app.update()
env.close()
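For completeness, combining the livestream setup from earlier in the thread with this fix gives a full inference script. This is a sketch: it follows the tutorial's use of the VecEnvBase internals env._world and env._simulation_app, and must be run inside the container with ./python.sh.

```python
from omni.isaac.gym.vec_env import VecEnvBase

# Headless app + native livestream so the UI can be viewed remotely.
env = VecEnvBase(headless=True)
from omni.isaac.core.utils.extensions import enable_extension
enable_extension("omni.kit.livestream.native")

from cartpole_task import CartpoleTask
task = CartpoleTask(name="Cartpole")
env.set_task(task, backend="torch")

from stable_baselines3 import PPO
model = PPO.load("ppo_cartpole")

env._world.reset()
obs = env.reset()
while env._simulation_app.is_running():
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    env._simulation_app.update()  # pump the app so streamed frames refresh
env.close()
```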
It works for me now. Thanks so much.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.