💻 Examples
OmniGibson ships with many demo scripts highlighting its modularity and diverse feature set, intended as a set of building blocks for your research. Let's try them out!
⚙️ A quick word about macros
Why macros?
Macros enforce global behavior that is consistent within an individual Python process but can differ between processes. This is useful because globally enabling all of OmniGibson's features can cause unnecessary slowdowns, so configuring the macros for your specific use case can optimize performance.
For example, Omniverse provides a so-called flatcache feature which provides significant performance boosts, but it cannot be used when fluids or soft bodies are present. So we should ideally always have gm.ENABLE_FLATCACHE=True unless our environment contains fluids or soft bodies.
macros define a globally available set of magic numbers or flags set throughout OmniGibson. These can either be directly set in omnigibson/macros.py, or can be programmatically modified at runtime via:
from omnigibson.macros import gm, macros
gm.<GLOBAL_MACRO> = <VALUE> # (1)!
macros.<OG_DIRECTORY>.<OG_MODULE>.<MODULE_MACRO> = <VALUE> # (2)!
gm refers to the "global" macros -- i.e.: settings that generally impact the entire OmniGibson stack. These are usually the only settings you may need to modify.
macros captures all remaining macros defined throughout OmniGibson's codebase -- these are often hardcoded default settings or magic numbers defined in a specific module. These can also be overridden, but we recommend inspecting the module first to understand how it is used.
Many of our examples set various macros at the beginning of the script, which is a good way to understand use cases for modifying them!
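For example, a script might configure a few global macros before the environment is created, and optionally override a module-level macro. A minimal sketch (the global macros shown also appear in the example scripts on this page; the commented-out module-level macro name is purely hypothetical -- inspect the relevant module to find the names that actually exist in your version):
from omnigibson.macros import gm, macros
# Global macros: trade object-state features for speed in this particular process
gm.ENABLE_OBJECT_STATES = False
gm.ENABLE_TRANSITION_RULES = False
# Module-level macro override -- hypothetical name, shown only to illustrate the access pattern
# macros.objects.dataset_object.SOME_MODULE_MACRO = 0.5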
🌎 Environments
These examples showcase the full OmniGibson stack in use, and the types of environments immediately supported.
BEHAVIOR Task Demo
This demo is useful for...
- Understanding how to instantiate a BEHAVIOR task
- Understanding how a pre-defined configuration file is used
python -m omnigibson.examples.environments.behavior_env_demo
This demo instantiates one of our BEHAVIOR tasks (optionally sampling object locations online) in a fully-populated scene and loads a Fetch robot. The robot executes random actions and the environment is reset periodically.
behavior_env_demo.py
import os
import yaml
import omnigibson as og
from omnigibson.macros import gm
from omnigibson.utils.ui_utils import choose_from_options
# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True
gm.USE_GPU_DYNAMICS = True
def main(random_selection=False, headless=False, short_exec=False):
"""
Generates a BEHAVIOR Task environment in an online fashion.
It steps the environment 100 times with random actions sampled from the action space,
using the Gym interface, resetting it 10 times.
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Ask the user whether they want online object sampling or not
sampling_options = {
False: "Use a pre-sampled cached BEHAVIOR activity scene",
True: "Sample the BEHAVIOR activity in an online fashion",
}
should_sample = choose_from_options(
options=sampling_options, name="online object sampling", random_selection=random_selection
)
# Load the pre-selected configuration and set the online_sampling flag
config_filename = os.path.join(og.example_config_path, "fetch_behavior.yaml")
cfg = yaml.load(open(config_filename, "r"), Loader=yaml.FullLoader)
cfg["task"]["online_object_sampling"] = should_sample
# Load the environment
env = og.Environment(configs=cfg)
# Allow user to move camera more easily
og.sim.enable_viewer_camera_teleoperation()
# Run a simple loop and reset periodically
max_iterations = 10 if not short_exec else 1
for j in range(max_iterations):
og.log.info("Resetting environment")
env.reset()
for i in range(100):
action = env.action_space.sample()
state, reward, terminated, truncated, info = env.step(action)
if terminated or truncated:
og.log.info("Episode finished after {} timesteps".format(i + 1))
break
# Always close the environment at the end
og.clear()
if __name__ == "__main__":
main()
Navigation Task Demo
This demo is useful for...
- Understanding how to instantiate a navigation task
- Understanding how a pre-defined configuration file is used
python -m omnigibson.examples.environments.navigation_env_demo
This demo instantiates one of our navigation tasks in a fully-populated scene and loads a Turtlebot robot. The robot executes random actions and the environment is reset periodically.
navigation_env_demo.py
import os
import yaml
import omnigibson as og
from omnigibson.utils.ui_utils import choose_from_options
def main(random_selection=False, headless=False, short_exec=False):
"""
Prompts the user to select a type of scene and loads a turtlebot into it, generating a Point-Goal navigation
task within the environment.
It steps the environment 100 times with random actions sampled from the action space,
using the Gym interface, resetting it 10 times.
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Load the config
config_filename = os.path.join(og.example_config_path, f"turtlebot_nav.yaml")
config = yaml.load(open(config_filename, "r"), Loader=yaml.FullLoader)
# check if we want to quick load or full load the scene
load_options = {
"Quick": "Only load the building assets (i.e.: the floors, walls, ceilings)",
"Full": "Load all interactive objects in the scene",
}
load_mode = choose_from_options(options=load_options, name="load mode", random_selection=random_selection)
if load_mode == "Quick":
config["scene"]["load_object_categories"] = ["floors", "walls", "ceilings"]
# Load the environment
env = og.Environment(configs=config)
# Allow user to move camera more easily
og.sim.enable_viewer_camera_teleoperation()
# Run a simple loop and reset periodically
max_iterations = 10 if not short_exec else 1
for j in range(max_iterations):
og.log.info("Resetting environment")
env.reset()
for i in range(100):
action = env.action_space.sample()
state, reward, terminated, truncated, info = env.step(action)
if terminated or truncated:
og.log.info("Episode finished after {} timesteps".format(i + 1))
break
# Always close the environment at the end
og.clear()
if __name__ == "__main__":
main()
🧑🏫 Learning
These examples showcase how OmniGibson can be used to train embodied AI agents.
Reinforcement Learning Demo
This demo is useful for...
- Understanding how to hook up OmniGibson to an external algorithm
- Understanding how to train and evaluate a policy
python -m omnigibson.examples.learning.navigation_policy_demo
This demo loads a navigation task with a Turtlebot robot, and trains / evaluates the agent using Stable Baselines3's PPO algorithm.
navigation_policy_demo.py
| """
Example training code using stable-baselines3 PPO for one BEHAVIOR activity.
Note that due to the sparsity of the reward, this training code will not converge and achieve task success.
This only serves as a starting point that users can further build upon.
"""
import argparse
import os
import time
import yaml
import omnigibson as og
from omnigibson import example_config_path
from omnigibson.macros import gm
from omnigibson.utils.python_utils import meets_minimum_version
try:
import gymnasium as gym
import tensorboard
import torch as th
import torch.nn as nn
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import CallbackList, CheckpointCallback, EvalCallback
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.preprocessing import maybe_transpose
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor
from stable_baselines3.common.utils import set_random_seed
except ModuleNotFoundError:
og.log.error(
"torch, stable-baselines3, or tensorboard is not installed. "
"See which packages are missing, and then run the following for any missing packages:\n"
"pip install stable-baselines3[extra]\n"
"pip install tensorboard\n"
"pip install shimmy>=0.2.1\n"
"Also, please update gym to >=0.26.1 after installing sb3: pip install gym>=0.26.1"
)
exit(1)
assert meets_minimum_version(gym.__version__, "0.28.1"), "Please install/update gymnasium to version >= 0.28.1"
# We don't need object states nor transitions rules, so we disable them now, and also enable flatcache for maximum speed
gm.ENABLE_OBJECT_STATES = False
gm.ENABLE_TRANSITION_RULES = False
gm.ENABLE_FLATCACHE = True
class CustomCombinedExtractor(BaseFeaturesExtractor):
def __init__(self, observation_space: gym.spaces.Dict):
# We do not know features-dim here before going over all the items,
# so put something dummy for now. PyTorch requires calling
super().__init__(observation_space, features_dim=1)
extractors = {}
self.step_index = 0
self.img_save_dir = "img_save_dir"
os.makedirs(self.img_save_dir, exist_ok=True)
total_concat_size = 0
feature_size = 128
for key, subspace in observation_space.spaces.items():
# For now, only keep RGB observations
if "rgb" in key:
og.log.info(f"obs {key} shape: {subspace.shape}")
n_input_channels = subspace.shape[0] # channel first
cnn = nn.Sequential(
nn.Conv2d(n_input_channels, 4, kernel_size=8, stride=4, padding=0),
nn.ReLU(),
nn.MaxPool2d(2),
nn.Conv2d(4, 8, kernel_size=4, stride=2, padding=0),
nn.ReLU(),
nn.MaxPool2d(2),
nn.Conv2d(8, 4, kernel_size=3, stride=1, padding=0),
nn.ReLU(),
nn.Flatten(),
)
test_tensor = th.zeros(subspace.shape)
with th.no_grad():
n_flatten = cnn(test_tensor[None]).shape[1]
fc = nn.Sequential(nn.Linear(n_flatten, feature_size), nn.ReLU())
extractors[key] = nn.Sequential(cnn, fc)
total_concat_size += feature_size
self.extractors = nn.ModuleDict(extractors)
# Update the features dim manually
self._features_dim = total_concat_size
def forward(self, observations) -> th.Tensor:
encoded_tensor_list = []
self.step_index += 1
# self.extractors contain nn.Modules that do all the processing.
for key, extractor in self.extractors.items():
encoded_tensor_list.append(extractor(observations[key]))
feature = th.cat(encoded_tensor_list, dim=1)
return feature
def main():
# Parse args
parser = argparse.ArgumentParser(description="Train or evaluate a PPO agent in BEHAVIOR")
parser.add_argument(
"--checkpoint",
type=str,
default=None,
help="Absolute path to desired PPO checkpoint to load for evaluation",
)
parser.add_argument(
"--eval",
action="store_true",
help="If set, will evaluate the PPO agent found from --checkpoint",
)
args = parser.parse_args()
tensorboard_log_dir = os.path.join("log_dir", time.strftime("%Y%m%d-%H%M%S"))
os.makedirs(tensorboard_log_dir, exist_ok=True)
prefix = ""
seed = 0
# Load config
with open(f"{example_config_path}/turtlebot_nav.yaml", "r") as f:
cfg = yaml.load(f, Loader=yaml.FullLoader)
# Make sure flattened obs and action space is used
cfg["env"]["flatten_action_space"] = True
cfg["env"]["flatten_obs_space"] = True
# Only use RGB obs
cfg["robots"][0]["obs_modalities"] = ["rgb"]
# If we're not eval, turn off the start / goal markers so the agent doesn't see them
if not args.eval:
cfg["task"]["visualize_goal"] = False
env = og.Environment(configs=cfg)
# If we're evaluating, hide the ceilings and enable camera teleoperation so the user can easily
# visualize the rollouts dynamically
if args.eval:
ceiling = env.scene.object_registry("name", "ceilings")
ceiling.visible = False
og.sim.enable_viewer_camera_teleoperation()
# Set the seed
set_random_seed(seed)
env.reset()
policy_kwargs = dict(
features_extractor_class=CustomCombinedExtractor,
)
os.makedirs(tensorboard_log_dir, exist_ok=True)
if args.eval:
assert args.checkpoint is not None, "If evaluating a PPO policy, @checkpoint argument must be specified!"
model = PPO.load(args.checkpoint)
og.log.info("Starting evaluation...")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=50)
og.log.info("Finished evaluation!")
og.log.info(f"Mean reward: {mean_reward} +/- {std_reward:.2f}")
else:
model = PPO(
"MultiInputPolicy",
env,
verbose=1,
tensorboard_log=tensorboard_log_dir,
policy_kwargs=policy_kwargs,
n_steps=20 * 10,
batch_size=8,
device="cuda",
)
checkpoint_callback = CheckpointCallback(save_freq=1000, save_path=tensorboard_log_dir, name_prefix=prefix)
eval_callback = EvalCallback(eval_env=env, eval_freq=1000, n_eval_episodes=20)
callback = CallbackList([checkpoint_callback, eval_callback])
og.log.debug(model.policy)
og.log.info(f"model: {model}")
og.log.info("Starting training...")
model.learn(
total_timesteps=10000000,
callback=callback,
)
og.log.info("Finished training!")
if __name__ == "__main__":
main()
🏔️ Scenes
These examples showcase how to leverage OmniGibson's large-scale, diverse scenes shipped with the BEHAVIOR dataset.
Scene Selector Demo
This demo is useful for...
- Understanding how to load a scene into OmniGibson
- Accessing all BEHAVIOR dataset scenes
python -m omnigibson.examples.scenes.scene_selector
This demo lets you choose a scene from the BEHAVIOR dataset, loads it along with a Turtlebot robot, and cycles the resulting environment periodically.
scene_selector.py
import omnigibson as og
from omnigibson.macros import gm
from omnigibson.utils.asset_utils import get_available_g_scenes, get_available_og_scenes
from omnigibson.utils.ui_utils import choose_from_options
# Configure macros for maximum performance
gm.USE_GPU_DYNAMICS = True
gm.ENABLE_FLATCACHE = True
gm.ENABLE_OBJECT_STATES = False
gm.ENABLE_TRANSITION_RULES = False
def main(random_selection=False, headless=False, short_exec=False):
"""
Prompts the user to select any available interactive scene and loads a turtlebot into it.
It steps the environment 100 times with random actions sampled from the action space,
using the Gym interface, resetting it 10 times.
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Choose the scene type to load
scene_options = {
"InteractiveTraversableScene": "Procedurally generated scene with fully interactive objects",
# "StaticTraversableScene": "Monolithic scene mesh with no interactive objects",
}
scene_type = choose_from_options(options=scene_options, name="scene type", random_selection=random_selection)
# Choose the scene model to load
scenes = get_available_og_scenes() if scene_type == "InteractiveTraversableScene" else get_available_g_scenes()
scene_model = choose_from_options(options=scenes, name="scene model", random_selection=random_selection)
cfg = {
"scene": {
"type": scene_type,
"scene_model": scene_model,
},
"robots": [
{
"type": "Turtlebot",
"obs_modalities": ["scan", "rgb", "depth"],
"action_type": "continuous",
"action_normalize": True,
},
],
}
# If the scene type is interactive, also check if we want to quick load or full load the scene
if scene_type == "InteractiveTraversableScene":
load_options = {
"Quick": "Only load the building assets (i.e.: the floors, walls, ceilings)",
"Full": "Load all interactive objects in the scene",
}
load_mode = choose_from_options(options=load_options, name="load mode", random_selection=random_selection)
if load_mode == "Quick":
cfg["scene"]["load_object_categories"] = ["floors", "walls", "ceilings"]
# Load the environment
env = og.Environment(configs=cfg)
# Allow user to move camera more easily
if not gm.HEADLESS:
og.sim.enable_viewer_camera_teleoperation()
# Run a simple loop and reset periodically
max_iterations = 10 if not short_exec else 1
for j in range(max_iterations):
og.log.info("Resetting environment")
env.reset()
for i in range(100):
action = env.action_space.sample()
state, reward, terminated, truncated, info = env.step(action)
if terminated or truncated:
og.log.info("Episode finished after {} timesteps".format(i + 1))
break
# Always close the environment at the end
og.clear()
if __name__ == "__main__":
main()
Scene Tour Demo
This demo is useful for...
- Understanding how to load a scene into OmniGibson
- Understanding how to generate a trajectory from a set of waypoints
python -m omnigibson.examples.scenes.scene_tour_demo
This demo lets you choose a scene from the BEHAVIOR dataset. It allows you to move the camera using the keyboard and select waypoints, and then it programmatically generates a video trajectory from the selected waypoints.
scene_tour_demo.py
import torch as th
import omnigibson as og
import omnigibson.lazy as lazy
from omnigibson.macros import gm
from omnigibson.utils.asset_utils import get_available_g_scenes, get_available_og_scenes
from omnigibson.utils.ui_utils import KeyboardEventHandler, choose_from_options
def main(random_selection=False, headless=False, short_exec=False):
"""
Prompts the user to select any available interactive scene and loads it.
It sets the camera to various poses and records images, and then generates a trajectory from a set of waypoints
and records the resulting video.
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Make sure the example is not being run headless. If so, terminate early
if gm.HEADLESS:
print("This demo should only be run not headless! Exiting early.")
og.shutdown()
# Choose the scene type to load
scene_options = {
"InteractiveTraversableScene": "Procedurally generated scene with fully interactive objects",
# "StaticTraversableScene": "Monolithic scene mesh with no interactive objects",
}
scene_type = choose_from_options(options=scene_options, name="scene type", random_selection=random_selection)
# Choose the scene model to load
scenes = get_available_og_scenes() if scene_type == "InteractiveTraversableScene" else get_available_g_scenes()
scene_model = choose_from_options(options=scenes, name="scene model", random_selection=random_selection)
print(f"scene model: {scene_model}")
cfg = {
"scene": {
"type": scene_type,
"scene_model": scene_model,
},
}
# If the scene type is interactive, also check if we want to quick load or full load the scene
if scene_type == "InteractiveTraversableScene":
load_options = {
"Quick": "Only load the building assets (i.e.: the floors, walls, ceilings)",
"Full": "Load all interactive objects in the scene",
}
load_mode = choose_from_options(options=load_options, name="load mode", random_selection=random_selection)
if load_mode == "Quick":
cfg["scene"]["load_object_categories"] = ["floors", "walls", "ceilings"]
# Load the environment
env = og.Environment(configs=cfg)
# Allow user to teleoperate the camera
cam_mover = og.sim.enable_viewer_camera_teleoperation()
# Create a keyboard event handler for generating waypoints
waypoints = []
def add_waypoint():
nonlocal waypoints
pos = cam_mover.cam.get_position_orientation()[0]
print(f"Added waypoint at {pos}")
waypoints.append(pos)
def clear_waypoints():
nonlocal waypoints
print(f"Cleared all waypoints!")
waypoints = []
KeyboardEventHandler.initialize()
KeyboardEventHandler.add_keyboard_callback(
key=lazy.carb.input.KeyboardInput.X,
callback_fn=add_waypoint,
)
KeyboardEventHandler.add_keyboard_callback(
key=lazy.carb.input.KeyboardInput.C,
callback_fn=clear_waypoints,
)
KeyboardEventHandler.add_keyboard_callback(
key=lazy.carb.input.KeyboardInput.J,
callback_fn=lambda: cam_mover.record_trajectory_from_waypoints(
waypoints=th.tensor(waypoints),
per_step_distance=0.02,
fps=30,
steps_per_frame=1,
fpath=None, # This corresponds to the default path inferred from cam_mover.save_dir
),
)
KeyboardEventHandler.add_keyboard_callback(
key=lazy.carb.input.KeyboardInput.ESCAPE,
callback_fn=lambda: og.clear(),
)
# Print out additional keyboard commands
print(f"\t X: Save the current camera pose as a waypoint")
print(f"\t C: Clear all waypoints")
print(f"\t J: Record the camera trajectory from the current set of waypoints")
print(f"\t ESC: Terminate the demo")
# Loop indefinitely
steps = 0
max_steps = -1 if not short_exec else 100
while steps != max_steps:
env.step([])
steps += 1
if __name__ == "__main__":
main()
Traversability Map Demo
This demo is useful for...
- Understanding how to leverage traversability map information from BEHAVIOR dataset scenes
python -m omnigibson.examples.scenes.traversability_map_example
This demo lets you choose a scene from the BEHAVIOR dataset, and generates its corresponding traversability map.
traversability_map_example.py
import os
import cv2
import matplotlib.pyplot as plt
import torch as th
from PIL import Image
import omnigibson as og
from omnigibson.utils.asset_utils import get_available_og_scenes, get_og_scene_path
from omnigibson.utils.ui_utils import choose_from_options
def main(random_selection=False, headless=False, short_exec=False):
"""
Traversable map demo
Loads the floor plan and obstacles for the requested scene, and overlays them in a visual figure such that the
highlighted area reflects the traversable (free-space) area
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
scenes = get_available_og_scenes()
scene_model = choose_from_options(options=scenes, name="scene model", random_selection=random_selection)
print(f"Generating traversability map for scene {scene_model}")
trav_map_size = 200
trav_map_erosion = 2
trav_map = cv2.imread(os.path.join(get_og_scene_path(scene_model), "layout", "floor_trav_0.png"))
trav_map = cv2.resize(trav_map, (trav_map_size, trav_map_size))
trav_map = cv2.erode(trav_map, th.ones((trav_map_erosion, trav_map_erosion)).cpu().numpy())
if not headless:
plt.figure(figsize=(12, 12))
plt.imshow(trav_map)
plt.title(f"Traversable area of {scene_model} scene")
plt.show()
if __name__ == "__main__":
main()
🍎 Objects
These examples showcase how to leverage objects in OmniGibson.
Load Object Demo
This demo is useful for...
- Understanding how to load an object into OmniGibson
- Accessing all BEHAVIOR dataset asset categories and models
python -m omnigibson.examples.objects.load_object_selector
This demo lets you choose a specific object from the BEHAVIOR dataset, and loads the requested object into an environment.
load_object_selector.py
import torch as th
import omnigibson as og
from omnigibson.utils.asset_utils import (
get_all_object_categories,
get_all_object_category_models,
get_og_avg_category_specs,
)
from omnigibson.utils.ui_utils import choose_from_options
def main(random_selection=False, headless=False, short_exec=False):
"""
This demo shows how to load any scaled objects from the OG object model dataset
The user selects an object model to load
The objects can be loaded into an empty scene or an interactive scene (OG)
The example also shows how to use the Environment API or directly the Simulator API, loading objects and robots
and executing actions
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
scene_options = ["Scene", "InteractiveTraversableScene"]
scene_type = choose_from_options(options=scene_options, name="scene type", random_selection=random_selection)
# -- Choose the object to load --
# Select a category to load
available_obj_categories = get_all_object_categories()
obj_category = choose_from_options(
options=available_obj_categories, name="object category", random_selection=random_selection
)
# Select a model to load
available_obj_models = get_all_object_category_models(obj_category)
obj_model = choose_from_options(
options=available_obj_models, name="object model", random_selection=random_selection
)
# Load the specs of the object categories, e.g., common scaling factor
avg_category_spec = get_og_avg_category_specs()
# Create and load this object into the simulator
obj_cfg = dict(
type="DatasetObject",
name="obj",
category=obj_category,
model=obj_model,
position=[0, 0, 50.0],
)
cfg = {
"scene": {
"type": scene_type,
},
"objects": [obj_cfg],
}
if scene_type == "InteractiveTraversableScene":
cfg["scene"]["scene_model"] = "Rs_int"
# Create the environment
env = og.Environment(configs=cfg)
# Place the object so it rests on the floor
obj = env.scene.object_registry("name", "obj")
center_offset = obj.get_position_orientation()[0] - obj.aabb_center + th.tensor([0, 0, obj.aabb_extent[2] / 2.0])
obj.set_position_orientation(position=center_offset)
# Step through the environment
max_steps = 100 if short_exec else 10000
for i in range(max_steps):
env.step(th.empty(0))
# Always close the environment at the end
og.clear()
if __name__ == "__main__":
main()
Object Visualizer Demo
This demo is useful for...
- Viewing objects' textures as rendered in OmniGibson
- Viewing articulated objects' range of motion
- Understanding how to reference object instances from the environment
- Understanding how to set object poses and joint states
python -m omnigibson.examples.objects.visualize_object
This demo lets you choose a specific object from the BEHAVIOR dataset, and rotates the object in-place. If the object is articulated, it additionally moves its joints through its full range of motion.
visualize_object.py
import argparse
import math
import torch as th
import omnigibson as og
import omnigibson.utils.transform_utils as T
from omnigibson.utils.asset_utils import get_all_object_categories, get_all_object_category_models
from omnigibson.utils.ui_utils import choose_from_options
def main(random_selection=False, headless=False, short_exec=False):
"""
Visualizes object as specified by its USD path, @usd_path. If None is specified, will instead
result in an object selection from OmniGibson's object dataset
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Assuming that if random_selection=True, headless=True, short_exec=True, we are calling it from tests and we
# do not want to parse args (it would fail because the calling function is pytest "testfile.py")
usd_path = None
if not (random_selection and headless and short_exec):
parser = argparse.ArgumentParser()
parser.add_argument(
"--usd_path",
default=None,
help="USD Model to load",
)
args = parser.parse_args()
usd_path = args.usd_path
# Define objects to load
light0_cfg = dict(
type="LightObject",
light_type="Sphere",
name="sphere_light0",
radius=0.01,
intensity=1e5,
position=[-2.0, -2.0, 2.0],
)
light1_cfg = dict(
type="LightObject",
light_type="Sphere",
name="sphere_light1",
radius=0.01,
intensity=1e5,
position=[-2.0, 2.0, 2.0],
)
# Make sure we have a valid usd path
if usd_path is None:
# Select a category to load
available_obj_categories = get_all_object_categories()
obj_category = choose_from_options(
options=available_obj_categories, name="object category", random_selection=random_selection
)
# Select a model to load
available_obj_models = get_all_object_category_models(obj_category)
obj_model = choose_from_options(
options=available_obj_models, name="object model", random_selection=random_selection
)
kwargs = {
"type": "DatasetObject",
"category": obj_category,
"model": obj_model,
}
else:
kwargs = {
"type": "USDObject",
"usd_path": usd_path,
}
# Import the desired object
obj_cfg = dict(
**kwargs,
name="obj",
visual_only=True,
position=[0, 0, 10.0],
)
# Create the scene config to load -- empty scene
cfg = {
"scene": {
"type": "Scene",
},
"objects": [light0_cfg, light1_cfg, obj_cfg],
}
# Create the environment
env = og.Environment(configs=cfg)
# Set camera to appropriate viewing pose
og.sim.viewer_camera.set_position_orientation(
position=th.tensor([-0.00913503, -1.95750906, 1.36407314]),
orientation=th.tensor([0.6350064, 0.0, 0.0, 0.77250687]),
)
# Grab the object references
obj = env.scene.object_registry("name", "obj")
# Standardize the scale of the object so it fits in a [1,1,1] box -- note that we have to stop the simulator
# in order to set the scale
extents = obj.aabb_extent
og.sim.stop()
obj.scale = (th.ones(3) / extents).min()
og.sim.play()
env.step(th.empty(0))
# Move the object so that its center is at [0, 0, 1]
center_offset = obj.get_position_orientation()[0] - obj.aabb_center + th.tensor([0, 0, 1.0])
obj.set_position_orientation(position=center_offset)
# Allow the user to easily move the camera around
og.sim.enable_viewer_camera_teleoperation()
# Rotate the object in place
steps_per_rotate = 360
steps_per_joint = steps_per_rotate / 10
max_steps = 100 if short_exec else 10000
for i in range(max_steps):
z_angle = 2 * math.pi * (i % steps_per_rotate) / steps_per_rotate
quat = T.euler2quat(th.tensor([0, 0, z_angle]))
pos = T.quat2mat(quat) @ center_offset
if obj.n_dof > 0:
frac = (i % steps_per_joint) / steps_per_joint
j_frac = -1.0 + 2.0 * frac if (i // steps_per_joint) % 2 == 0 else 1.0 - 2.0 * frac
obj.set_joint_positions(positions=j_frac * th.ones(obj.n_dof), normalized=True, drive=False)
obj.keep_still()
obj.set_position_orientation(position=pos, orientation=quat)
env.step(th.empty(0))
# Shut down at the end
og.clear()
if __name__ == "__main__":
main()
Highlight Object
This demo is useful for...
- Understanding how to highlight individual objects within a cluttered scene
- Understanding how to access groups of objects from the environment
python -m omnigibson.examples.objects.highlight_objects
This demo loads the Rs_int scene and highlights windows on/off repeatedly.
highlight_objects.py
import torch as th
import omnigibson as og
def main(random_selection=False, headless=False, short_exec=False):
"""
Highlights visually all object instances of windows and then removes the highlighting
It also demonstrates how to apply an action on all instances of objects of a given category
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Create the scene config to load -- empty scene
cfg = {
"scene": {
"type": "InteractiveTraversableScene",
"scene_model": "Rs_int",
}
}
# Create the environment
env = og.Environment(configs=cfg)
# Grab all window objects
windows = env.scene.object_registry("category", "window")
# Step environment while toggling window highlighting
i = 0
highlighted = False
max_steps = -1 if not short_exec else 1000
while i != max_steps:
env.step(th.empty(0))
if i % 50 == 0:
highlighted = not highlighted
og.log.info(f"Toggling window highlight to: {highlighted}")
for window in windows:
# Note that this property is R/W!
window.highlighted = highlighted
i += 1
# Always close the environment at the end
og.clear()
if __name__ == "__main__":
main()
Draw Object Bounding Box Demo
This demo is useful for...
- Understanding how to access observations from a GymObservable object
- Understanding how to access objects' bounding box information
- Understanding how to dynamically modify vision modalities
Note: the Environment, all sensors extending from BaseSensor, and all objects extending from BaseObject (which includes all robots extending from BaseRobot!) are GymObservable objects. A minimal example of querying observations from one is sketched below.
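Since every GymObservable exposes the same observation interface, you can query any of them directly. A minimal sketch (it mirrors the cam.get_obs() pattern used in the script below, and assumes an environment has already been created so that og.sim and its viewer camera exist):
import omnigibson as og
# Any GymObservable -- here the viewer camera sensor -- returns a dict of observations
# keyed by modality name, plus a second info-style return value.
obs, info = og.sim.viewer_camera.get_obs()
print(obs.keys())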
python -m omnigibson.examples.objects.draw_bounding_box
This demo loads a door object and banana object, and partially obscures the banana with the door. It generates both "loose" and "tight" bounding boxes (where the latter respects occlusions) for both objects, and dumps them to an image on disk.
draw_bounding_box.py
import matplotlib.pyplot as plt
import torch as th
import omnigibson as og
def main(random_selection=False, headless=False, short_exec=False):
"""
Shows how to obtain the bounding box of an articulated object.
Draws the bounding box around the loaded objects, a banana and a door, and writes the visualized image to disk at the
current directory named 'bbox_2d_[loose / tight]_img.png'.
NOTE: In the GUI, bounding boxes can be natively viewed by clicking on the sensor ((*)) icon at the top,
and then selecting the appropriate bounding box modalities, and clicking "Show". See:
https://docs.omniverse.nvidia.com/prod_extensions/prod_extensions/ext_replicator/visualization.html#the-visualizer
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Specify objects to load
banana_cfg = dict(
type="DatasetObject",
name="banana",
category="banana",
model="vvyyyv",
bounding_box=[0.643, 0.224, 0.269],
position=[-0.906661, -0.545106, 0.136824],
orientation=[0, 0, 0.76040583, -0.6494482],
)
door_cfg = dict(
type="DatasetObject",
name="door",
category="door",
model="ohagsq",
bounding_box=[1.528, 0.064, 1.299],
position=[-2.0, 0, 0.70000001],
orientation=[0, 0, -0.38268343, 0.92387953],
)
# Create the scene config to load -- empty scene with a few objects
cfg = {
"scene": {
"type": "Scene",
},
"objects": [banana_cfg, door_cfg],
}
# Create the environment
env = og.Environment(configs=cfg)
# Set camera to appropriate viewing pose
cam = og.sim.viewer_camera
cam.set_position_orientation(
position=th.tensor([-4.62785, -0.418575, 0.933943]),
orientation=th.tensor([0.52196595, -0.4231939, -0.46640436, 0.5752612]),
)
# Add bounding boxes to camera sensor
bbox_modalities = ["bbox_3d", "bbox_2d_loose", "bbox_2d_tight"]
for bbox_modality in bbox_modalities:
cam.add_modality(bbox_modality)
# Take a few steps to let objects settle
for i in range(100):
env.step(th.empty(0))
# Grab observations from viewer camera and write them to disk
obs, _ = cam.get_obs()
for bbox_modality in bbox_modalities:
# Print out each of the modalities
og.log.info(f"Observation modality {bbox_modality}:\n{obs[bbox_modality]}")
# Also write the 2d loose bounding box to disk
if "3d" not in bbox_modality:
from omnigibson.utils.deprecated_utils import colorize_bboxes
colorized_img = colorize_bboxes(
bboxes_2d_data=obs[bbox_modality], bboxes_2d_rgb=obs["rgb"].cpu().numpy(), num_channels=4
)
fpath = f"{bbox_modality}_img.png"
plt.imsave(fpath, colorized_img)
og.log.info(f"Saving modality [{bbox_modality}] image to: {fpath}")
# Always close environment down at end
og.clear()
if __name__ == "__main__":
main()
🌡️ Object States
These examples showcase OmniGibson's powerful object states functionality, which captures both individual and relational kinematic and non-kinematic states.
Slicing Demo
This demo is useful for...
- Understanding how slicing works in
OmniGibson
- Understanding how to access individual objects once the environment is created
python -m omnigibson.examples.object_states.slicing_demo
This demo spawns an apple on a table with a knife above it, and lets the knife fall to "cut" the apple in half.
slicing_demo.py
import math
import torch as th
import omnigibson as og
import omnigibson.utils.transform_utils as T
from omnigibson.macros import gm
# Make sure object states and transition rules are enabled
gm.ENABLE_OBJECT_STATES = True
gm.ENABLE_TRANSITION_RULES = True
def main(random_selection=False, headless=False, short_exec=False):
"""
Demo of slicing an apple into two apple slices
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Create the scene config to load -- empty scene with table, knife, and apple
table_cfg = dict(
type="DatasetObject",
name="table",
category="breakfast_table",
model="rjgmmy",
bounding_box=[1.36, 1.081, 0.84],
position=[0, 0, 0.58],
)
apple_cfg = dict(
type="DatasetObject",
name="apple",
category="apple",
model="agveuv",
bounding_box=[0.098, 0.098, 0.115],
position=[0.085, 0, 0.92],
)
knife_cfg = dict(
type="DatasetObject",
name="knife",
category="table_knife",
model="lrdmpf",
bounding_box=[0.401, 0.044, 0.009],
position=[0, 0, 20.0],
)
light0_cfg = dict(
type="LightObject",
name="light0",
light_type="Sphere",
radius=0.01,
intensity=4000.0,
position=[1.217, -0.848, 1.388],
)
light1_cfg = dict(
type="LightObject",
name="light1",
light_type="Sphere",
radius=0.01,
intensity=4000.0,
position=[-1.217, 0.848, 1.388],
)
cfg = {
"scene": {
"type": "Scene",
},
"objects": [table_cfg, apple_cfg, knife_cfg, light0_cfg, light1_cfg],
}
# Create the environment
env = og.Environment(configs=cfg)
# Grab reference to apple and knife
apple = env.scene.object_registry("name", "apple")
knife = env.scene.object_registry("name", "knife")
# Update the simulator's viewer camera's pose so it points towards the table
og.sim.viewer_camera.set_position_orientation(
position=th.tensor([0.544888, -0.412084, 1.11569]),
orientation=th.tensor([0.54757518, 0.27792802, 0.35721896, 0.70378409]),
)
# Let apple settle
for _ in range(50):
env.step(th.empty(0))
knife.keep_still()
knife.set_position_orientation(
position=apple.get_position_orientation()[0] + th.tensor([-0.15, 0.0, 0.2], dtype=th.float32),
orientation=T.euler2quat(th.tensor([-math.pi / 2, 0, 0], dtype=th.float32)),
)
if not short_exec:
input("The knife will fall on the apple and slice it. Press [ENTER] to continue.")
# Step simulation for a bit so that apple is sliced
for i in range(1000):
env.step(th.empty(0))
if not short_exec:
input("Apple has been sliced! Press [ENTER] to terminate the demo.")
# Always close environment at the end
og.clear()
if __name__ == "__main__":
main()
Dicing Demo
This demo is useful for...
- Understanding how to leverage the Dicing state
- Understanding how to enable objects to be diceable
python -m omnigibson.examples.object_states.dicing_demo
This demo loads an apple and a knife, and showcases how the apple can be diced into smaller chunks with the knife.
dicing_demo.py
import math
import torch as th
import omnigibson as og
import omnigibson.utils.transform_utils as T
from omnigibson.macros import gm
# Make sure object states, GPU dynamics, and transition rules are enabled
gm.ENABLE_OBJECT_STATES = True
gm.USE_GPU_DYNAMICS = True
gm.ENABLE_TRANSITION_RULES = True
def main(random_selection=False, headless=False, short_exec=False):
"""
Demo of dicing an apple into apple dices
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Create the scene config to load -- empty scene with table, knife, and apple
table_cfg = dict(
type="DatasetObject",
name="table",
category="breakfast_table",
model="rjgmmy",
bounding_box=[1.36, 1.08, 0.84],
position=[0, 0, 0.58],
)
apple_cfg = dict(
type="DatasetObject",
name="apple",
category="apple",
model="agveuv",
bounding_box=[0.098, 0.098, 0.115],
position=[0.085, 0, 0.92],
abilities={"diceable": {}},
)
knife_cfg = dict(
type="DatasetObject",
name="knife",
category="table_knife",
model="lrdmpf",
bounding_box=[0.401, 0.044, 0.009],
position=[0, 0, 20.0],
)
light0_cfg = dict(
type="LightObject",
name="light0",
light_type="Sphere",
radius=0.01,
intensity=1e7,
position=[1.217, -0.848, 1.388],
)
light1_cfg = dict(
type="LightObject",
name="light1",
light_type="Sphere",
radius=0.01,
intensity=1e7,
position=[-1.217, 0.848, 1.388],
)
cfg = {
"scene": {
"type": "Scene",
},
"objects": [table_cfg, apple_cfg, knife_cfg, light0_cfg, light1_cfg],
}
# Create the environment
env = og.Environment(configs=cfg)
# Grab reference to apple and knife
apple = env.scene.object_registry("name", "apple")
knife = env.scene.object_registry("name", "knife")
# Update the simulator's viewer camera's pose so it points towards the table
og.sim.viewer_camera.set_position_orientation(
position=th.tensor([0.544888, -0.412084, 1.11569]),
orientation=th.tensor([0.54757518, 0.27792802, 0.35721896, 0.70378409]),
)
# Let apple settle
for _ in range(50):
env.step(th.empty(0))
knife.keep_still()
knife.set_position_orientation(
position=apple.get_position_orientation()[0] + th.tensor([-0.15, 0.0, 0.2]),
orientation=T.euler2quat(th.tensor([-math.pi / 2, 0, 0])),
)
if not short_exec:
input("The knife will fall on the apple and dice it. Press [ENTER] to continue.")
# Step simulation for a bit so that apple is diced
for _ in range(1000):
env.step(th.empty(0))
if not short_exec:
input("Apple has been diced! Press [ENTER] to terminate the demo.")
# Always close simulator at the end
og.clear()
if __name__ == "__main__":
main()
Folded and Unfolded Demo
This demo is useful for...
- Understanding how to load a softbody (cloth) version of a BEHAVIOR dataset object
- Understanding how to enable cloth objects to be foldable
- Understanding the current heuristics used for gauging a cloth's "foldness"
python -m omnigibson.examples.object_states.folded_unfolded_state_demo
This demo loads in three different cloth objects, and allows you to manipulate them while printing out their Folded state status in real-time. Try manipulating the objects by holding down Shift and then Left-click + Drag!
folded_unfolded_state_demo.py
import torch as th
import omnigibson as og
from omnigibson.macros import gm
from omnigibson.object_states import Folded, Unfolded
from omnigibson.utils.constants import PrimType
from omnigibson.utils.python_utils import multi_dim_linspace
# Make sure object states and GPU dynamics are enabled (GPU dynamics needed for cloth)
gm.ENABLE_OBJECT_STATES = True
gm.USE_GPU_DYNAMICS = True
def main(random_selection=False, headless=False, short_exec=False):
"""
Demo of cloth objects that can potentially be folded.
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Create the scene config to load -- empty scene + custom cloth object
cfg = {
"scene": {
"type": "Scene",
},
"objects": [
{
"type": "DatasetObject",
"name": "carpet",
"category": "carpet",
"model": "ctclvd",
"bounding_box": [0.897, 0.568, 0.012],
"prim_type": PrimType.CLOTH,
"abilities": {"cloth": {}},
"position": [0, 0, 0.5],
},
{
"type": "DatasetObject",
"name": "dishtowel",
"category": "dishtowel",
"model": "dtfspn",
"bounding_box": [0.852, 1.1165, 0.174],
"prim_type": PrimType.CLOTH,
"abilities": {"cloth": {}},
"position": [1, 1, 0.5],
},
{
"type": "DatasetObject",
"name": "shirt",
"category": "t_shirt",
"model": "kvidcx",
"bounding_box": [0.472, 1.243, 1.158],
"prim_type": PrimType.CLOTH,
"abilities": {"cloth": {}},
"position": [-1, 1, 0.5],
"orientation": [0.7071, 0.0, 0.7071, 0.0],
},
],
}
# Create the environment
env = og.Environment(configs=cfg)
# Grab object references
carpet = env.scene.object_registry("name", "carpet")
dishtowel = env.scene.object_registry("name", "dishtowel")
shirt = env.scene.object_registry("name", "shirt")
objs = [carpet, dishtowel, shirt]
# Set viewer camera
og.sim.viewer_camera.set_position_orientation(
position=th.tensor([0.46382895, -2.66703958, 1.22616824]),
orientation=th.tensor([0.58779174, -0.00231237, -0.00318273, 0.80900271]),
)
def print_state():
folded = carpet.states[Folded].get_value()
unfolded = carpet.states[Unfolded].get_value()
info = "carpet: [folded] %d [unfolded] %d" % (folded, unfolded)
folded = dishtowel.states[Folded].get_value()
unfolded = dishtowel.states[Unfolded].get_value()
info += " || dishtowel: [folded] %d [unfolded] %d" % (folded, unfolded)
folded = shirt.states[Folded].get_value()
unfolded = shirt.states[Unfolded].get_value()
info += " || tshirt: [folded] %d [unfolded] %d" % (folded, unfolded)
print(f"{info}{' ' * (110 - len(info))}", end="\r")
for _ in range(100):
og.sim.step()
print("\nCloth state:\n")
if not short_exec:
# Fold all three cloths along the x-axis
for i in range(3):
obj = objs[i]
pos = obj.root_link.compute_particle_positions()
x_min, x_max = th.min(pos, dim=0).values[0], th.max(pos, dim=0).values[0]
x_extent = x_max - x_min
# Get indices for the bottom 10 percent vertices in the x-axis
indices = th.argsort(pos, dim=0)[:, 0][: (pos.shape[0] // 10)]
start = th.clone(pos[indices])
# lift up a bit
mid = th.clone(start)
mid[:, 2] += x_extent * 0.2
# move towards x_max
end = th.clone(mid)
end[:, 0] += x_extent * 0.9
increments = 25
for ctrl_pts in th.cat(
[multi_dim_linspace(start, mid, increments), multi_dim_linspace(mid, end, increments)]
):
obj.root_link.set_particle_positions(ctrl_pts, idxs=indices)
og.sim.step()
print_state()
# Fold the t-shirt twice again along the y-axis
for direction in [-1, 1]:
obj = shirt
pos = obj.root_link.compute_particle_positions()
y_min, y_max = th.min(pos, dim=0).values[1], th.max(pos, dim=0).values[1]
y_extent = y_max - y_min
if direction == 1:
indices = th.argsort(pos, dim=0)[:, 1][: (pos.shape[0] // 20)]
else:
indices = th.argsort(pos, dim=0)[:, 1][-(pos.shape[0] // 20) :]
start = th.clone(pos[indices])
# lift up a bit
mid = th.clone(start)
mid[:, 2] += y_extent * 0.2
# move towards y_max
end = th.clone(mid)
end[:, 1] += direction * y_extent * 0.4
increments = 25
for ctrl_pts in th.cat(
[multi_dim_linspace(start, mid, increments), multi_dim_linspace(mid, end, increments)]
):
obj.root_link.set_particle_positions(ctrl_pts, idxs=indices)
env.step(th.empty(0))
print_state()
while True:
env.step(th.empty(0))
print_state()
# Shut down env at the end
print()
og.clear()
if __name__ == "__main__":
main()
Overlaid Demo
This demo is useful for...
- Understanding how cloth objects can be overlaid on rigid objects
- Understanding current heuristics used for gauging a cloth's "overlaid" status
python -m omnigibson.examples.object_states.overlaid_demo
This demo loads in a carpet on top of a table, and allows you to manipulate the carpet while printing out its Overlaid state status in real-time. Try manipulating the object by holding down Shift and then Left-click + Drag!
overlaid_demo.py
import torch as th
import omnigibson as og
from omnigibson.macros import gm
from omnigibson.object_states import Overlaid
from omnigibson.utils.constants import PrimType
# Make sure object states and GPU dynamics are enabled (GPU dynamics needed for cloth)
gm.ENABLE_OBJECT_STATES = True
gm.USE_GPU_DYNAMICS = True
def main(random_selection=False, headless=False, short_exec=False):
"""
Demo of cloth objects that can be overlaid on rigid objects.
Loads a carpet on top of a table. Initially Overlaid will be True because the carpet largely covers the table.
If you drag the carpet off the table or even just fold it into half, Overlaid will become False.
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Create the scene config to load -- empty scene + custom cloth object + custom rigid object
cfg = {
"scene": {
"type": "Scene",
},
"objects": [
{
"type": "DatasetObject",
"name": "carpet",
"category": "carpet",
"model": "ctclvd",
"bounding_box": [1.346, 0.852, 0.017],
"prim_type": PrimType.CLOTH,
"abilities": {"cloth": {}},
"position": [0, 0, 1.0],
},
{
"type": "DatasetObject",
"name": "breakfast_table",
"category": "breakfast_table",
"model": "rjgmmy",
"bounding_box": [1.36, 1.081, 0.84],
"prim_type": PrimType.RIGID,
"position": [0, 0, 0.58],
},
],
}
# Create the environment
env = og.Environment(configs=cfg)
# Grab object references
carpet = env.scene.object_registry("name", "carpet")
breakfast_table = env.scene.object_registry("name", "breakfast_table")
# Set camera pose
og.sim.viewer_camera.set_position_orientation(
position=th.tensor([0.88215526, -1.40086216, 2.00311063]),
orientation=th.tensor([0.42013364, 0.12342107, 0.25339685, 0.86258043]),
)
max_steps = 100 if short_exec else -1
steps = 0
print("\nTry dragging cloth around with CTRL + Left-Click to see the Overlaid state change:\n")
while steps != max_steps:
print(f"Overlaid {carpet.states[Overlaid].get_value(breakfast_table)} ", end="\r")
env.step(th.empty(0))
steps += 1
# Shut down env at the end
og.clear()
if __name__ == "__main__":
main()
Heat Source or Sink Demo
This demo is useful for...
- Understanding how a heat source (or sink) is visualized in OmniGibson
- Understanding how dynamic fire visuals are generated in real-time
python -m omnigibson.examples.object_states.heat_source_or_sink_demo
This demo loads in a stove and toggles its HeatSource on and off, showcasing the dynamic fire visuals available in OmniGibson.
heat_source_or_sink_demo.py
import torch as th
import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm
# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True
def main(random_selection=False, headless=False, short_exec=False):
# Create the scene config to load -- empty scene with a stove object added
cfg = {
"scene": {
"type": "Scene",
},
"objects": [
{
"type": "DatasetObject",
"name": "stove",
"category": "stove",
"model": "qbjiva",
"bounding_box": [1.611, 0.769, 1.147],
"abilities": {
"heatSource": {"requires_toggled_on": True},
"toggleable": {},
},
"position": [0, 0, 0.61],
}
],
}
# Create the environment
env = og.Environment(configs=cfg)
# Get reference to stove object
stove = env.scene.object_registry("name", "stove")
# Set camera to appropriate viewing pose
og.sim.viewer_camera.set_position_orientation(
position=th.tensor([-0.0792399, -1.30104, 1.51981]),
orientation=th.tensor([0.54897692, 0.00110359, 0.00168013, 0.83583509]),
)
# Make sure necessary object states are included with the stove
assert object_states.HeatSourceOrSink in stove.states
assert object_states.ToggledOn in stove.states
# Take a few steps so that visibility propagates
for _ in range(5):
env.step(th.empty(0))
# Heat source is off.
print("Heat source is OFF.")
heat_source_state = stove.states[object_states.HeatSourceOrSink].get_value()
assert not heat_source_state
# Toggle on stove, notify user
if not short_exec:
input("Heat source will now turn ON: Press ENTER to continue.")
stove.states[object_states.ToggledOn].set_value(True)
assert stove.states[object_states.ToggledOn].get_value()
# Need to take a step to update the state.
env.step(th.empty(0))
# Heat source is on
heat_source_state = stove.states[object_states.HeatSourceOrSink].get_value()
assert heat_source_state
for _ in range(500):
env.step(th.empty(0))
# Toggle off stove, notify user
if not short_exec:
input("Heat source will now turn OFF: Press ENTER to continue.")
stove.states[object_states.ToggledOn].set_value(False)
assert not stove.states[object_states.ToggledOn].get_value()
for _ in range(200):
env.step(th.empty(0))
# Move stove, notify user
if not short_exec:
input("Heat source is now moving: Press ENTER to continue.")
stove.set_position_orientation(position=th.tensor([0, 1.0, 0.61]))
for i in range(100):
env.step(th.empty(0))
# Toggle on stove again, notify user
if not short_exec:
input("Heat source will now turn ON: Press ENTER to continue.")
stove.states[object_states.ToggledOn].set_value(True)
assert stove.states[object_states.ToggledOn].get_value()
for i in range(500):
env.step(th.empty(0))
# Shutdown environment at end
og.clear()
if __name__ == "__main__":
main()
Temperature Demo
This demo is useful for...
- Understanding how to dynamically sample kinematic states for BEHAVIOR dataset objects
- Understanding how temperature changes are propagated to individual objects from individual heat sources or sinks
python -m omnigibson.examples.object_states.temperature_demo
This demo loads in various heat sources and sinks, and places an apple within close proximity to each of them. As the environment steps, each apple's temperature is printed in real-time, showcasing OmniGibson's rudimentary temperature dynamics.
temperature_demo.py
import torch as th
import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm
# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True
def main(random_selection=False, headless=False, short_exec=False):
"""
Demo of temperature change
Loads a stove, a microwave and an oven, all toggled on, and five frozen apples
The user can move the apples to see them change from frozen, to normal temperature, to cooked and burnt
This demo also shows how to load objects ToggledOn and how to set the initial temperature of an object
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Define specific objects we want to load in with the scene directly
obj_configs = []
# Light
obj_configs.append(
dict(
type="LightObject",
light_type="Sphere",
name="light",
radius=0.01,
intensity=1e8,
position=[-2.0, -2.0, 1.0],
)
)
# Stove
obj_configs.append(
dict(
type="DatasetObject",
name="stove",
category="stove",
model="yhjzwg",
bounding_box=[1.185, 0.978, 1.387],
position=[0, 0, 0.69],
)
)
# Microwave
obj_configs.append(
dict(
type="DatasetObject",
name="microwave",
category="microwave",
model="hjjxmi",
bounding_box=[0.384, 0.256, 0.196],
position=[2.5, 0, 0.10],
)
)
# Oven
obj_configs.append(
dict(
type="DatasetObject",
name="oven",
category="oven",
model="wuinhm",
bounding_box=[1.075, 0.926, 1.552],
position=[-1.25, 0, 0.88],
)
)
# Tray
obj_configs.append(
dict(
type="DatasetObject",
name="tray",
category="tray",
model="xzcnjq",
bounding_box=[0.319, 0.478, 0.046],
position=[-0.25, -0.12, 1.26],
)
)
# Fridge
obj_configs.append(
dict(
type="DatasetObject",
name="fridge",
category="fridge",
model="hivvdf",
bounding_box=[1.065, 1.149, 1.528],
abilities={
"coldSource": {
"temperature": -100.0,
"requires_inside": True,
}
},
position=[1.25, 0, 0.81],
)
)
# 5 Apples
for i in range(5):
obj_configs.append(
dict(
type="DatasetObject",
name=f"apple{i}",
category="apple",
model="agveuv",
bounding_box=[0.065, 0.065, 0.077],
position=[0, i * 0.1, 5.0],
)
)
# Create the scene config to load -- empty scene with desired objects
cfg = {
"scene": {
"type": "Scene",
},
"objects": obj_configs,
}
# Create the environment
env = og.Environment(configs=cfg)
# Get reference to relevant objects
stove = env.scene.object_registry("name", "stove")
microwave = env.scene.object_registry("name", "microwave")
oven = env.scene.object_registry("name", "oven")
tray = env.scene.object_registry("name", "tray")
fridge = env.scene.object_registry("name", "fridge")
apples = list(env.scene.object_registry("category", "apple"))
# Set camera to appropriate viewing pose
og.sim.viewer_camera.set_position_orientation(
position=th.tensor([0.46938863, -3.97887141, 1.64106008]),
orientation=th.tensor([0.63311689, 0.00127259, 0.00155577, 0.77405359]),
)
# Let objects settle
for _ in range(25):
env.step(th.empty(0))
# Turn on all scene objects
stove.states[object_states.ToggledOn].set_value(True)
microwave.states[object_states.ToggledOn].set_value(True)
oven.states[object_states.ToggledOn].set_value(True)
# Set initial temperature of the apples to -50 degrees Celsius, and move the apples to different objects
for apple in apples:
apple.states[object_states.Temperature].set_value(-50)
apples[0].states[object_states.Inside].set_value(oven, True)
apples[1].set_position_orientation(
position=stove.states[object_states.HeatSourceOrSink].link.get_position_orientation()[0]
+ th.tensor([0, 0, 0.1])
)
apples[2].states[object_states.OnTop].set_value(tray, True)
apples[3].states[object_states.Inside].set_value(fridge, True)
apples[4].states[object_states.Inside].set_value(microwave, True)
steps = 0
max_steps = -1 if not short_exec else 1000
# Main recording loop
locations = [f"{loc:>20}" for loc in ["Inside oven", "On stove", "On tray", "Inside fridge", "Inside microwave"]]
print()
print(f"{'Apple location:':<20}", *locations)
while steps != max_steps:
env.step(th.empty(0))
temps = [f"{apple.states[object_states.Temperature].get_value():>20.2f}" for apple in apples]
print(f"{'Apple temperature:':<20}", *temps, end="\r")
steps += 1
# Always close env at the end
og.clear()
if __name__ == "__main__":
main()
Heated Demo
This demo is useful for...
- Understanding how temperature modifications can cause objects' visual changes
- Understanding how dynamic steam visuals are generated in real-time
python -m omnigibson.examples.object_states.heated_state_demo
This demo loads in three bowls, and immediately sets their temperatures past their Heated threshold. Steam is generated in real-time from these objects, and then disappears once the temperature of the objects drops below their Heated threshold.
heated_state_demo.py
import torch as th
import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm
# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True
def main(random_selection=False, headless=False, short_exec=False):
# Define object configurations for objects to load -- we want to load a light and three bowls
obj_configs = []
obj_configs.append(
dict(
type="LightObject",
light_type="Sphere",
name="light",
radius=0.01,
intensity=1e8,
position=[-2.0, -2.0, 1.0],
)
)
for i, (scale, x) in enumerate(zip([0.5, 1.0, 2.0], [-0.6, 0, 0.8])):
obj_configs.append(
dict(
type="DatasetObject",
name=f"bowl{i}",
category="bowl",
model="ajzltc",
bounding_box=th.tensor([0.329, 0.293, 0.168]) * scale,
abilities={"heatable": {}},
position=[x, 0, 0.2],
)
)
# Create the scene config to load -- empty scene with light object and bowls
cfg = {
"scene": {
"type": "Scene",
},
"objects": obj_configs,
}
# Create the environment
env = og.Environment(configs=cfg)
# Set camera to appropriate viewing pose
og.sim.viewer_camera.set_position_orientation(
position=th.tensor([0.182103, -2.07295, 0.14017]),
orientation=th.tensor([0.77787037, 0.00267566, 0.00216149, 0.62841535]),
)
# Dim the skybox so we can see the bowls' steam effectively
og.sim.skybox.intensity = 100.0
# Grab reference to objects of relevance
objs = list(env.scene.object_registry("category", "bowl"))
def report_states(objs):
for obj in objs:
print("=" * 20)
print("object:", obj.name)
print("temperature:", obj.states[object_states.Temperature].get_value())
print("obj is heated:", obj.states[object_states.Heated].get_value())
# Report default states
print("==== Initial state ====")
report_states(objs)
if not short_exec:
# Notify user that we're about to heat the object
input("Objects will be heated, and steam will slowly rise. Press ENTER to continue.")
# Heated.
for obj in objs:
obj.states[object_states.Temperature].set_value(50)
env.step(th.empty(0))
report_states(objs)
# Take a look at the steam effect.
# After a while, objects will be below the Steam temperature threshold.
print("==== Objects are now heated... ====")
print()
max_iterations = 2000 if not short_exec else 100
for _ in range(max_iterations):
env.step(th.empty(0))
# Also print temperatures
temps = [f"{obj.states[object_states.Temperature].get_value():>7.2f}" for obj in objs]
print(f"obj temps:", *temps, end="\r")
print()
# Objects are not heated anymore.
print("==== Objects are no longer heated... ====")
report_states(objs)
if not short_exec:
# Close environment at the end
input("Demo completed. Press ENTER to shutdown environment.")
og.clear()
if __name__ == "__main__":
main()
|
Onfire Demo
This demo is useful for...
- Understanding how changing the onfire state can cause objects' visual changes
- Understanding how the onfire state can be triggered by nearby objects that are on fire
python -m omnigibson.examples.object_states.onfire_demo
This demo loads in a stove (toggled on) and two apples. The first apple will be ignited by the stove first, then the second apple will be ignited by the first apple.
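The ignition behavior comes from the flammable ability, whose parameters control when an object catches fire: ignition_temperature is the temperature at which it ignites, and distance_threshold is how close a heat source (or another burning object) must be to heat it up. A trimmed sketch of that configuration from the full script below:

apple_cfg = dict(
    type="DatasetObject",
    name="apple0",
    category="apple",
    model="agveuv",
    bounding_box=[0.065, 0.065, 0.077],
    position=[0, 0, 2.0],
    # Ignites at 100 degrees; heat sources within 0.5 m can raise its temperature
    abilities={"flammable": {"ignition_temperature": 100, "distance_threshold": 0.5}},
)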
onfire_demo.py
| import torch as th
import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm
# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True
def main(random_selection=False, headless=False, short_exec=False):
"""
Demo of on fire state.
Loads a stove (toggled on), and two apples.
The first apple will be ignited by the stove first, then the second apple will be ignited by the first apple.
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Define specific objects we want to load in with the scene directly
obj_configs = []
# Light
obj_configs.append(
dict(
type="LightObject",
light_type="Sphere",
name="light",
radius=0.01,
intensity=1e8,
position=[-2.0, -2.0, 1.0],
)
)
# Stove
obj_configs.append(
dict(
type="DatasetObject",
name="stove",
category="stove",
model="yhjzwg",
bounding_box=[1.185, 0.978, 1.387],
position=[0, 0, 0.69],
)
)
# 2 Apples
for i in range(2):
obj_configs.append(
dict(
type="DatasetObject",
name=f"apple{i}",
category="apple",
model="agveuv",
bounding_box=[0.065, 0.065, 0.077],
position=[0, i * 0.07, 2.0],
abilities={"flammable": {"ignition_temperature": 100, "distance_threshold": 0.5}},
)
)
# Create the scene config to load -- empty scene with desired objects
cfg = {
"scene": {
"type": "Scene",
},
"objects": obj_configs,
}
# Create the environment
env = og.Environment(configs=cfg)
# Get reference to relevant objects
stove = env.scene.object_registry("name", "stove")
apples = list(env.scene.object_registry("category", "apple"))
# Set camera to appropriate viewing pose
og.sim.viewer_camera.set_position_orientation(
position=th.tensor([-0.42246569, -0.34745704, 1.56810353]),
orientation=th.tensor([0.50083786, -0.10407796, -0.17482619, 0.84128772]),
)
# Let objects settle
for _ in range(10):
env.step(th.empty(0))
# Turn on the stove
stove.states[object_states.ToggledOn].set_value(True)
# The first apple will be affected by the stove
apples[0].set_position_orientation(
position=stove.states[object_states.HeatSourceOrSink].link.get_position_orientation()[0]
+ th.tensor([0.11, 0, 0.1])
)
# The second apple will NOT be affected by the stove, but will be affected by the first apple once it's on fire.
apples[1].set_position_orientation(
position=stove.states[object_states.HeatSourceOrSink].link.get_position_orientation()[0]
+ th.tensor([0.32, 0, 0.1])
)
steps = 0
max_steps = -1 if not short_exec else 1000
# Main recording loop
while steps != max_steps:
env.step(th.empty(0))
temps = [f"{apple.states[object_states.Temperature].get_value():>20.2f}" for apple in apples]
print(f"{'Apple temperature:':<20}", *temps, end="\r")
steps += 1
# Always close env at the end
og.clear()
if __name__ == "__main__":
main()
|
Particle Applier and Remover Demo
This demo is useful for...
- Understanding how a ParticleRemover or ParticleApplier object can be generated
- Understanding how particles can be dynamically generated on objects
- Understanding different methods for applying and removing particles via the ParticleRemover or ParticleApplier object
python -m omnigibson.examples.object_states.particle_applier_remover_demo
This demo loads in a washtowel and a table and lets you choose the ability configuration to enable the washtowel with. The washtowel will then either remove or apply particles dynamically on the table while it moves.
particle_applier_remover_demo.py
| import torch as th
import omnigibson as og
from omnigibson.macros import gm, macros
from omnigibson.object_states import Covered, ToggledOn
from omnigibson.utils.constants import ParticleModifyMethod
from omnigibson.utils.ui_utils import choose_from_options
# Set macros for this example
macros.object_states.particle_modifier.VISUAL_PARTICLES_REMOVAL_LIMIT = 1000
macros.object_states.particle_modifier.PHYSICAL_PARTICLES_REMOVAL_LIMIT = 8000
macros.object_states.particle_modifier.MAX_VISUAL_PARTICLES_APPLIED_PER_STEP = 4
macros.object_states.particle_modifier.MAX_PHYSICAL_PARTICLES_APPLIED_PER_STEP = 40
macros.object_states.covered.MAX_VISUAL_PARTICLES = 300
# Make sure object states and GPU dynamics are enabled (GPU dynamics needed for fluids)
gm.ENABLE_OBJECT_STATES = True
gm.USE_GPU_DYNAMICS = True
gm.ENABLE_HQ_RENDERING = True
def main(random_selection=False, headless=False, short_exec=False):
"""
Demo of ParticleApplier and ParticleRemover object states, which enable objects to either apply arbitrary
particles and remove arbitrary particles from the simulator, respectively.
Loads an empty scene with a table, and starts clean to allow particles to be applied or pre-covers the table
with particles to be removed. The ParticleApplier / ParticleRemover state is applied to an imported cloth object
and allowed to interact with the table, applying / removing particles from the table.
NOTE: The key difference between ParticleApplier/Removers and ParticleSource/Sinks is that Applier/Removers
requires contact (if using ParticleProjectionMethod.ADJACENCY) or overlap
(if using ParticleProjectionMethod.PROJECTION) in order to spawn / remove particles, and generally only spawn
particles at the contact points. ParticleSource/Sinks are special cases of ParticleApplier/Removers that
always use ParticleProjectionMethod.PROJECTION and always spawn / remove particles within their projection volume,
irregardless of overlap with other objects!
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Choose what configuration to load
modifier_type = choose_from_options(
options={
"particleApplier": "Demo object's ability to apply particles in the simulator",
"particleRemover": "Demo object's ability to remove particles from the simulator",
},
name="particle modifier type",
random_selection=random_selection,
)
particle_types = ["salt", "water"]
particle_type = choose_from_options(
options={name: f"{name} particles will be applied or removed from the simulator" for name in particle_types},
name="particle type",
random_selection=random_selection,
)
table_cfg = dict(
type="DatasetObject",
name="table",
category="breakfast_table",
model="kwmfdg",
bounding_box=[3.402, 1.745, 1.175],
position=[0, 0, 0.98],
)
tool_cfg = dict(
type="DatasetObject",
name="tool",
visual_only=True,
position=[0, 0.3, 5.0],
)
if modifier_type == "particleRemover":
if particle_type == "salt":
# only ask this question if the modifier type is salt particleRemover
method_type = choose_from_options(
options={
"Adjacency": "Close proximity to the object will be used to determine whether particles can be applied / removed",
"Projection": "A Cone or Cylinder shape protruding from the object will be used to determine whether particles can be applied / removed",
},
name="modifier method type",
random_selection=random_selection,
)
else:
# If the particle type is water, the remover is always adjacency type
method_type = "Adjacency"
if method_type == "Adjacency":
# use dishtowel to remove adjacent particles
tool_cfg["category"] = "dishtowel"
tool_cfg["model"] = "dtfspn"
tool_cfg["bounding_box"] = [0.34245, 0.46798, 0.07]
elif method_type == "Projection":
# use vacuum to remove particles via projection
tool_cfg["category"] = "vacuum"
tool_cfg["model"] = "wikhik"
else:
# If the modifier type is particleApplier, the applier is always projection type
method_type = "Projection"
if particle_type == "salt":
# use salt shaker to apply salt particles
tool_cfg["category"] = "salt_shaker"
tool_cfg["model"] = "iomwtn"
else:
# use water atomizer to apply water particles
tool_cfg["category"] = "water_atomizer"
tool_cfg["model"] = "lfarai"
# Create the scene config to load -- empty scene with a light and table
cfg = {
"env": {
"rendering_frequency": 60, # for HQ rendering
},
"scene": {
"type": "Scene",
},
"objects": [table_cfg, tool_cfg],
}
# Sanity check inputs: Remover + Adjacency + Fluid will not work because we are using a visual_only
# object, so contacts will not be triggered with this object
# Load the environment, then immediately stop the simulator since we need to add in the modifier object
env = og.Environment(configs=cfg)
og.sim.stop()
# Grab references to table and tool
table = env.scene.object_registry("name", "table")
tool = env.scene.object_registry("name", "tool")
# Set the viewer camera appropriately
og.sim.viewer_camera.set_position_orientation(
position=th.tensor([-1.61340969, -1.79803028, 2.53167412]),
orientation=th.tensor([0.46291845, -0.12381886, -0.22679218, 0.84790371]),
)
# Play the simulator and take some environment steps to let the objects settle
og.sim.play()
for _ in range(25):
env.step(th.empty(0))
# If we're removing particles, set the table's covered state to be True
if modifier_type == "particleRemover":
table.states[Covered].set_value(env.scene.get_system(particle_type), True)
# Take a few steps to let particles settle
for _ in range(25):
env.step(th.empty(0))
# If the particle remover/applier is projection type, turn the tool on
if method_type == "Projection":
tool.states[ToggledOn].set_value(True)
# Enable camera teleoperation for convenience
og.sim.enable_viewer_camera_teleoperation()
tool.keep_still()
# Set the modifier object to be in position to modify particles
if modifier_type == "particleRemover" and method_type == "Projection":
tool.set_position_orientation(
position=[0, 0.3, 1.45],
orientation=[0, 0, 0, 1.0],
)
elif modifier_type == "particleRemover" and method_type == "Adjacency":
tool.set_position_orientation(
position=[0, 0.3, 1.175],
orientation=[0, 0, 0, 1.0],
)
elif modifier_type == "particleApplier" and particle_type == "water":
tool.set_position_orientation(
position=[0, 0.3, 1.4],
orientation=[0.3827, 0, 0, 0.9239],
)
else:
tool.set_position_orientation(
position=[0, 0.3, 1.5],
orientation=[0.7071, 0, 0.7071, 0],
)
# Move object in square around table
deltas = [
[130, th.tensor([-0.01, 0, 0])],
[60, th.tensor([0, -0.01, 0])],
[130, th.tensor([0.01, 0, 0])],
[60, th.tensor([0, 0.01, 0])],
]
for t, delta in deltas:
for _ in range(t):
tool.set_position_orientation(position=tool.get_position_orientation()[0] + delta)
env.step(th.empty(0))
# Always shut down environment at the end
og.clear()
if __name__ == "__main__":
main()
|
Particle Source and Sink Demo
This demo is useful for...
- Understanding how a ParticleSource or ParticleSink object can be generated
- Understanding how particles can be dynamically generated and destroyed via such objects
python -m omnigibson.examples.object_states.particle_source_sink_demo
This demo loads in a sink, which is enabled with both the ParticleSource and ParticleSink states. The sink's particle source is located at the faucet spout and spawns a continuous stream of water particles, which is then destroyed ("sunk") by the sink's particle sink located at the drain.
Difference between ParticleApplier/Removers and ParticleSource/Sinks
The key difference between ParticleApplier/Removers and ParticleSource/Sinks is that Applier/Removers require contact (if using ParticleProjectionMethod.ADJACENCY) or overlap (if using ParticleProjectionMethod.PROJECTION) in order to spawn / remove particles, and generally only spawn particles at the contact points. ParticleSource/Sinks are special cases of ParticleApplier/Removers that always use ParticleProjectionMethod.PROJECTION and always spawn / remove particles within their projection volume, regardless of overlap with other objects.
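In configuration terms, the distinction shows up in an object's abilities dict. A rough side-by-side sketch, borrowing the particleRemover config from the Object Texture Demo later on this page and the particleSource / particleSink config from the script below:

from omnigibson.utils.constants import ParticleModifyCondition, ParticleModifyMethod

# Applier/Remover: tied to a modification method (ADJACENCY needs contact, PROJECTION needs overlap)
remover_abilities = {
    "particleRemover": {
        "method": ParticleModifyMethod.ADJACENCY,
        "conditions": {"water": []},  # no extra conditions -- remove water particles on contact
    },
}

# Source/Sink: always projection-based; here water only flows while the object is toggled on
source_sink_abilities = {
    "toggleable": {},
    "particleSource": {
        "conditions": {"water": [(ParticleModifyCondition.TOGGLEDON, True)]},
        "initial_speed": 0.0,  # water merely falls out of the spout
    },
    "particleSink": {
        "conditions": {"water": []},  # no conditions -- always sink nearby particles
    },
}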
particle_source_sink_demo.py
| import torch as th
import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm
from omnigibson.utils.constants import ParticleModifyCondition
# Make sure object states are enabled and GPU dynamics are used
gm.ENABLE_OBJECT_STATES = True
gm.USE_GPU_DYNAMICS = True
gm.ENABLE_HQ_RENDERING = True
def main(random_selection=False, headless=False, short_exec=False):
"""
Demo of ParticleSource and ParticleSink object states, which enable objects to either spawn arbitrary
particles and remove arbitrary particles from the simulator, respectively.
Loads an empty scene with a sink, which is enabled with both the ParticleSource and ParticleSink states.
The sink's particle source is located at the faucet spout and spawns a continuous stream of water particles,
which is then destroyed ("sunk") by the sink's particle sink located at the drain.
NOTE: The key difference between ParticleApplier/Removers and ParticleSource/Sinks is that Applier/Removers
requires contact (if using ParticleProjectionMethod.ADJACENCY) or overlap
(if using ParticleProjectionMethod.PROJECTION) in order to spawn / remove particles, and generally only spawn
particles at the contact points. ParticleSource/Sinks are special cases of ParticleApplier/Removers that
always use ParticleProjectionMethod.PROJECTION and always spawn / remove particles within their projection volume,
irregardless of overlap with other objects!
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Create the scene config to load -- empty scene
cfg = {
"env": {
"rendering_frequency": 60, # for HQ rendering
},
"scene": {
"type": "Scene",
},
}
# Define objects to load into the environment
sink_cfg = dict(
type="DatasetObject",
name="sink",
category="sink",
model="egwapq",
bounding_box=[2.427, 0.625, 1.2],
abilities={
"toggleable": {},
"particleSource": {
"conditions": {
"water": [
(ParticleModifyCondition.TOGGLEDON, True)
], # Must be toggled on for water source to be active
},
"initial_speed": 0.0, # Water merely falls out of the spout
},
"particleSink": {
"conditions": {
"water": [], # No conditions, always sinking nearby particles
},
},
},
position=[0.0, 0, 0.42],
)
cfg["objects"] = [sink_cfg]
# Create the environment!
env = og.Environment(configs=cfg)
# Set camera to ideal angle for viewing objects
og.sim.viewer_camera.set_position_orientation(
position=th.tensor([0.37860532, -0.65396566, 1.4067066]),
orientation=th.tensor([0.49909498, 0.15201752, 0.24857062, 0.81609284]),
)
# Take a few steps to let the objects settle, and then turn on the sink
for _ in range(10):
env.step(th.empty(0)) # Empty action since no robots are in the scene
sink = env.scene.object_registry("name", "sink")
assert sink.states[object_states.ToggledOn].set_value(True)
# Take a step, and save the state
env.step(th.empty(0))
initial_state = og.sim.dump_state()
# Main simulation loop.
max_steps = 1000
max_iterations = -1 if not short_exec else 1
iteration = 0
try:
while iteration != max_iterations:
# Keep stepping until table or bowl are clean, or we reach 1000 steps
steps = 0
while steps != max_steps:
steps += 1
env.step(th.empty(0))
og.log.info("Max steps reached; resetting.")
# Reset to the initial state
og.sim.load_state(initial_state)
iteration += 1
finally:
# Always shut down environment at the end
og.clear()
if __name__ == "__main__":
main()
|
Kinematics Demo
This demo is useful for...
- Understanding how to dynamically sample kinematic states for BEHAVIOR dataset objects
- Understanding how to import additional objects after the environment is created
python -m omnigibson.examples.object_states.sample_kinematics_demo
This demo procedurally generates a mini populated scene: it spawns a shelf and places cracker boxes inside it, then places a microwave on top of a cabinet and samples plates (with apples on top) both inside and on top of the microwave.
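The sampling itself goes through kinematic object states: calling set_value on OnTop or Inside asks the raycasting-based sampler for a valid pose and returns a success flag (which is why the full script asserts each result). A minimal sketch of that pattern, assuming env is the environment created in the script below:

from omnigibson import object_states

# Grab objects that were imported via the config (names from the script below)
microwave = env.scene.object_registry("name", "microwave")
cabinet = env.scene.object_registry("name", "cabinet")

# Sample a pose for the microwave on top of the cabinet, then open it
assert microwave.states[object_states.OnTop].set_value(cabinet, True)
assert microwave.states[object_states.Open].set_value(True)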
sample_kinematics_demo.py
| import os
import torch as th
import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm
from omnigibson.objects import DatasetObject
# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True
def main(random_selection=False, headless=False, short_exec=False):
"""
Demo to use the raycasting-based sampler to load objects onTop and/or inside another
Loads a cabinet, a microwave open on top of it, and two plates with apples on top, one inside and one on top of the cabinet
Then loads a shelf and cracker boxes inside of it
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Create the scene config to load -- empty scene
cfg = {
"scene": {
"type": "Scene",
},
}
# Define objects we want to sample at runtime
microwave_cfg = dict(
type="DatasetObject",
name="microwave",
category="microwave",
model="hjjxmi",
bounding_box=[0.768, 0.512, 0.392],
)
cabinet_cfg = dict(
type="DatasetObject",
name="cabinet",
category="bottom_cabinet",
model="bamfsz",
bounding_box=[1.075, 1.131, 1.355],
)
plate_cfgs = [
dict(
type="DatasetObject",
name=f"plate{i}",
category="plate",
model="iawoof",
bounding_box=th.tensor([0.20, 0.20, 0.05]),
)
for i in range(2)
]
apple_cfgs = [
dict(
type="DatasetObject",
name=f"apple{i}",
category="apple",
model="agveuv",
bounding_box=[0.065, 0.065, 0.077],
)
for i in range(4)
]
shelf_cfg = dict(
type="DatasetObject",
name=f"shelf",
category="shelf",
model="pkgbcp",
bounding_box=th.tensor([1.0, 0.4, 2.0]),
)
box_cfgs = [
dict(
type="DatasetObject",
name=f"box{i}",
category="box_of_crackers",
model="cmdigf",
bounding_box=th.tensor([0.2, 0.05, 0.3]),
)
for i in range(5)
]
# Compose objects cfg
objects_cfg = [
microwave_cfg,
cabinet_cfg,
*plate_cfgs,
*apple_cfgs,
shelf_cfg,
*box_cfgs,
]
# Update their spawn positions so they don't collide immediately
for i, obj_cfg in enumerate(objects_cfg):
obj_cfg["position"] = [100 + i, 100 + i, 100 + i]
cfg["objects"] = objects_cfg
# Create the environment
env = og.Environment(configs=cfg)
env.step([])
# Sample microwave and boxes
sample_boxes_on_shelf(env)
sample_microwave_plates_apples(env)
max_steps = 100 if short_exec else -1
step = 0
while step != max_steps:
env.step(th.empty(0))
step += 1
# Always close environment at the end
og.clear()
def sample_microwave_plates_apples(env):
microwave = env.scene.object_registry("name", "microwave")
cabinet = env.scene.object_registry("name", "cabinet")
plates = list(env.scene.object_registry("category", "plate"))
apples = list(env.scene.object_registry("category", "apple"))
# Place the cabinet at a pre-determined location on the floor
og.log.info("Placing cabinet on the floor...")
cabinet.set_orientation([0, 0, 0, 1.0])
env.step(th.empty(0))
offset = cabinet.get_position_orientation()[0][2] - cabinet.aabb_center[2]
cabinet.set_position_orientation(position=th.tensor([1.0, 0, cabinet.aabb_extent[2] / 2]) + offset)
env.step(th.empty(0))
# Set microwave on top of the cabinet, open it, and step 100 times
og.log.info("Placing microwave OnTop of the cabinet...")
assert microwave.states[object_states.OnTop].set_value(cabinet, True)
assert microwave.states[object_states.Open].set_value(True)
og.log.info("Microwave placed.")
for _ in range(50):
env.step(th.empty(0))
og.log.info("Placing plates")
n_apples_per_plate = int(len(apples) / len(plates))
for i, plate in enumerate(plates):
# Put the 1st plate in the microwave
if i == 0:
og.log.info(f"Placing plate {i} Inside the microwave...")
assert plate.states[object_states.Inside].set_value(microwave, True)
else:
og.log.info(f"Placing plate {i} OnTop the microwave...")
assert plate.states[object_states.OnTop].set_value(microwave, True)
og.log.info(f"Plate {i} placed.")
for _ in range(50):
env.step(th.empty(0))
og.log.info(f"Placing {n_apples_per_plate} apples OnTop of the plate...")
for j in range(n_apples_per_plate):
apple_idx = i * n_apples_per_plate + j
apple = apples[apple_idx]
assert apple.states[object_states.OnTop].set_value(plate, True)
og.log.info(f"Apple {apple_idx} placed.")
for _ in range(50):
env.step(th.empty(0))
def sample_boxes_on_shelf(env):
shelf = env.scene.object_registry("name", "shelf")
boxes = list(env.scene.object_registry("category", "box_of_crackers"))
# Place the shelf at a pre-determined location on the floor
og.log.info("Placing shelf on the floor...")
shelf.set_orientation([0, 0, 0, 1.0])
env.step(th.empty(0))
offset = shelf.get_position_orientation()[0][2] - shelf.aabb_center[2]
shelf.set_position_orientation(position=th.tensor([-1.0, 0, shelf.aabb_extent[2] / 2]) + offset)
env.step(th.empty(0)) # One step is needed for the object to be fully initialized
og.log.info("Shelf placed.")
for _ in range(50):
env.step(th.empty(0))
og.log.info("Placing boxes...")
for i, box in enumerate(boxes):
box.states[object_states.Inside].set_value(shelf, True)
og.log.info(f"Box {i} placed.")
for _ in range(50):
env.step(th.empty(0))
if __name__ == "__main__":
main()
|
Attachment Demo
This demo is useful for...
- Understanding how to leverage the Attached state
- Understanding how to enable objects to be attachable
python -m omnigibson.examples.object_states.attachment_demo
This demo loads the individual parts of a shelf, lets them fall into place and automatically attach to the back panel, and showcases how the assembled shelf can be manipulated to detach its parts.
attachment_demo.py
| import torch as th
import yaml
import omnigibson as og
from omnigibson.macros import gm
# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True
def main(random_selection=False, headless=False, short_exec=False):
"""
Demo of attachment of different parts of a shelf
"""
cfg = yaml.load(open(f"{og.example_config_path}/default_cfg.yaml", "r"), Loader=yaml.FullLoader)
# Add objects that we want to create
obj_cfgs = []
obj_cfgs.append(
dict(
type="LightObject",
name="light",
light_type="Sphere",
radius=0.01,
intensity=5000,
position=[0, 0, 1.0],
)
)
base_z = 0.2
delta_z = 0.01
idx = 0
obj_cfgs.append(
dict(
type="DatasetObject",
name="shelf_back_panel",
category="shelf_back_panel",
model="gjsnrt",
position=[0, 0, 0.01],
abilities={"attachable": {}},
)
)
idx += 1
obj_cfgs.append(
dict(
type="DatasetObject",
name=f"shelf_side_left",
category="shelf_side",
model="bxfkjj",
position=[-0.4, 0, base_z + delta_z * idx],
abilities={"attachable": {}},
)
)
idx += 1
obj_cfgs.append(
dict(
type="DatasetObject",
name=f"shelf_side_right",
category="shelf_side",
model="yujrmw",
position=[0.4, 0, base_z + delta_z * idx],
abilities={"attachable": {}},
)
)
idx += 1
ys = [-0.93, -0.61, -0.29, 0.03, 0.35, 0.68]
for i in range(6):
obj_cfgs.append(
dict(
type="DatasetObject",
name=f"shelf_shelf_{i}",
category="shelf_shelf",
model="ymtnqa",
position=[0, ys[i], base_z + delta_z * idx],
abilities={"attachable": {}},
)
)
idx += 1
obj_cfgs.append(
dict(
type="DatasetObject",
name="shelf_top_0",
category="shelf_top",
model="pfiole",
position=[0, 1.0, base_z + delta_z * idx],
abilities={"attachable": {}},
)
)
idx += 1
obj_cfgs.append(
dict(
type="DatasetObject",
name=f"shelf_baseboard",
category="shelf_baseboard",
model="hlhneo",
position=[0, -10.97884506, base_z + delta_z * idx],
abilities={"attachable": {}},
)
)
idx += 1
cfg["objects"] = obj_cfgs
env = og.Environment(configs=cfg)
# Set viewer camera pose
og.sim.viewer_camera.set_position_orientation(
position=th.tensor([-1.689292, -2.11718198, 0.93332228]),
orientation=th.tensor([0.57687967, -0.22995655, -0.29022759, 0.72807814]),
)
for _ in range(10):
env.step([])
shelf_baseboard = env.scene.object_registry("name", "shelf_baseboard")
shelf_baseboard.set_position_orientation(position=[0, -0.979, 0.21], orientation=[0, 0, 0, 1])
shelf_baseboard.keep_still()
# Lower the mass of the baseboard - otherwise, the gravity will create enough torque to break the joint
shelf_baseboard.root_link.mass = 0.1
input(
"\n\nShelf parts fall to their correct poses and get automatically attached to the back panel.\n"
"You can try to drag (Shift + Left-CLICK + Drag) parts of the shelf to break it apart (you may need to zoom out and drag with a larger force).\n"
"Press [ENTER] to continue.\n"
)
for _ in range(5000):
og.sim.step()
og.shutdown()
if __name__ == "__main__":
main()
|
Object Texture Demo
This demo is useful for...
- Understanding how different object states can result in texture changes
- Understanding how to enable objects with texture-changing states
- Understanding how to dynamically modify object states
python -m omnigibson.examples.object_states.object_state_texture_demo
This demo loads in a single object, and then dynamically modifies its state so that its texture changes with each modification.
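Each texture change is driven purely by object state values: pushing Temperature past the relevant thresholds toggles Frozen, Cooked, or Burnt, and the object's textures update accordingly. A minimal sketch of one such transition, assuming env and the cabinet object obj are the ones created in the script below:

import torch as th
from omnigibson import object_states

obj.states[object_states.Temperature].set_value(-50)  # below freezing -> Frozen becomes True
env.step(th.empty(0))                                  # step once so the state (and texture) propagates
print(obj.states[object_states.Frozen].get_value())
print(obj.get_textures())                              # inspect the object's current textures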
object_state_texture_demo.py
| import torch as th
import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm, macros
from omnigibson.utils.constants import ParticleModifyMethod
# Make sure object states are enabled, we're using GPU dynamics, and HQ rendering is enabled
gm.ENABLE_OBJECT_STATES = True
gm.USE_GPU_DYNAMICS = True
gm.ENABLE_HQ_RENDERING = True
def main(random_selection=False, headless=False, short_exec=False):
# Create the scene config to load -- empty scene plus a cabinet
cfg = {
"env": {
"rendering_frequency": 60, # for HQ rendering
},
"scene": {
"type": "Scene",
"floor_plane_visible": True,
},
"objects": [
{
"type": "DatasetObject",
"name": "cabinet",
"category": "bottom_cabinet",
"model": "zuwvdo",
"bounding_box": [1.595, 0.537, 1.14],
"abilities": {
"freezable": {},
"cookable": {},
"burnable": {},
"saturable": {},
"particleRemover": {
"method": ParticleModifyMethod.ADJACENCY,
"conditions": {
# For a specific particle system, this specifies what conditions are required in order for the
# particle applier / remover to apply / remover particles associated with that system
# The list should contain functions with signature condition() --> bool,
# where True means the condition is satisfied
# In this case, we only allow our cabinet to absorb water, with no conditions needed.
# This is needed for the Saturated ("saturable") state so that we can modify the texture
# according to the water.
# NOTE: This will only change color if gm.ENABLE_HQ_RENDERING and gm.USE_GPU_DYNAMICS is
# enabled!
"water": [],
},
},
},
"position": [0, 0, 0.59],
},
],
}
# Create the environment
env = og.Environment(configs=cfg)
# Set camera to appropriate viewing pose
og.sim.viewer_camera.set_position_orientation(
position=th.tensor([1.7789, -1.68822, 1.13551]),
orientation=th.tensor([0.57065614, 0.20331904, 0.267029, 0.74947212]),
)
# Grab reference to object of interest
obj = env.scene.object_registry("name", "cabinet")
# Make sure all the appropriate states are in the object
assert object_states.Frozen in obj.states
assert object_states.Cooked in obj.states
assert object_states.Burnt in obj.states
assert object_states.Saturated in obj.states
def report_states():
# Make sure states are propagated before printing
for i in range(5):
env.step(th.empty(0))
print("=" * 20)
print("temperature:", obj.states[object_states.Temperature].get_value())
print("obj is frozen:", obj.states[object_states.Frozen].get_value())
print("obj is cooked:", obj.states[object_states.Cooked].get_value())
print("obj is burnt:", obj.states[object_states.Burnt].get_value())
print("obj is soaked:", obj.states[object_states.Saturated].get_value(env.scene.get_system("water")))
print("obj textures:", obj.get_textures())
# Report default states
print("==== Initial state ====")
report_states()
# Notify user that we're about to freeze the object, and then freeze the object
if not short_exec:
input("\nObject will be frozen. Press ENTER to continue.")
obj.states[object_states.Temperature].set_value(-50)
report_states()
# Notify user that we're about to cook the object, and then cook the object
if not short_exec:
input("\nObject will be cooked. Press ENTER to continue.")
obj.states[object_states.Temperature].set_value(100)
report_states()
# Notify user that we're about to burn the object, and then burn the object
if not short_exec:
input("\nObject will be burned. Press ENTER to continue.")
obj.states[object_states.Temperature].set_value(250)
report_states()
# Notify user that we're about to reset the object to its default state, and then reset state
if not short_exec:
input("\nObject will be reset to default state. Press ENTER to continue.")
obj.states[object_states.Temperature].set_value(macros.object_states.temperature.DEFAULT_TEMPERATURE)
obj.states[object_states.MaxTemperature].set_value(macros.object_states.temperature.DEFAULT_TEMPERATURE)
report_states()
# Notify user that we're about to soak the object, and then soak the object
if not short_exec:
input("\nObject will be saturated with water. Press ENTER to continue.")
obj.states[object_states.Saturated].set_value(env.scene.get_system("water"), True)
report_states()
# Notify user that we're about to unsoak the object, and then unsoak the object
if not short_exec:
input("\nObject will be unsaturated with water. Press ENTER to continue.")
obj.states[object_states.Saturated].set_value(env.scene.get_system("water"), False)
report_states()
# Close environment at the end
if not short_exec:
input("Demo completed. Press ENTER to shutdown environment.")
og.clear()
if __name__ == "__main__":
main()
|
🤖 Robots
These examples showcase how to interact with and leverage robot objects in OmniGibson.
Robot Visualizer Demo
This demo is useful for...
- Understanding how to load a robot into OmniGibson after an environment is created
- Accessing all OmniGibson robot models
- Viewing robots' low-level joint motion
python -m omnigibson.examples.robots.all_robots_visualizer
This demo iterates over all robots in OmniGibson, loading each one into an empty scene and randomly moving its joints for a brief amount of time.
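The key pattern is that a robot can be constructed and added to a scene after the environment is created, as long as the simulator is stopped first and stepped at least once afterwards. A minimal sketch using Fetch (the full script below iterates over every entry in REGISTERED_ROBOTS), assuming env is an already-created environment:

import omnigibson as og
from omnigibson.robots import REGISTERED_ROBOTS

og.sim.stop()                               # stop physics before adding a new object
robot = REGISTERED_ROBOTS["Fetch"](name="Fetch", obs_modalities=[])
env.scene.add_object(robot)
og.sim.play()
og.sim.step()                               # at least one step to fully initialize the robot
robot.reset()
robot.keep_still()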
all_robots_visualizer.py
| import torch as th
import omnigibson as og
from omnigibson.robots import REGISTERED_ROBOTS
def main(random_selection=False, headless=False, short_exec=False):
"""
Robot demo
Loads all robots in an empty scene, generate random actions
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Create empty scene with no robots in it initially
cfg = {
"scene": {
"type": "Scene",
}
}
env = og.Environment(configs=cfg)
og.sim.stop()
# Iterate over all robots and demo their motion
for robot_name, robot_cls in REGISTERED_ROBOTS.items():
# Create and import robot
robot = robot_cls(
name=robot_name,
obs_modalities=[], # We're just moving robots around so don't load any observation modalities
)
env.scene.add_object(robot)
# At least one step is always needed while sim is playing for any imported object to be fully initialized
og.sim.play()
og.sim.step()
# Reset robot and make sure it's not moving
robot.reset()
robot.keep_still()
# Log information
og.log.info(f"Loaded {robot_name}")
og.log.info(f"Moving {robot_name}")
if not headless:
# Set viewer in front facing robot
og.sim.viewer_camera.set_position_orientation(
position=th.tensor([2.69918369, -3.63686664, 4.57894564]),
orientation=th.tensor([0.39592411, 0.1348514, 0.29286304, 0.85982]),
)
og.sim.enable_viewer_camera_teleoperation()
# Hold still briefly so viewer can see robot
for _ in range(100):
og.sim.step()
# Then apply random actions for a bit
if robot_name not in ["BehaviorRobot"]:
for _ in range(30):
action_lo, action_hi = -0.1, 0.1
action = th.rand(robot.action_dim) * (action_hi - action_lo) + action_lo
if robot_name == "Tiago":
tiago_lo, tiago_hi = -0.1, 0.1
action[robot.base_action_idx] = (
th.rand(len(robot.base_action_idx)) * (tiago_hi - tiago_lo) + tiago_lo
)
for _ in range(10):
env.step(action)
# Stop the simulator and remove the robot
og.sim.stop()
env.scene.remove_object(obj=robot)
# Always shut down the environment cleanly at the end
og.clear()
if __name__ == "__main__":
main()
|
Robot Control Demo
This demo is useful for...
- Understanding how different controllers can be used to control robots
- Understanding how to teleoperate a robot through external commands
python -m omnigibson.examples.robots.robot_control_example
This demo lets you choose a robot and the set of controllers to control the robot, and then lets you teleoperate the robot using your keyboard.
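The controller switch itself happens at runtime: each robot component (base, arm, gripper, camera) is mapped to a controller name, and the robot reloads its controllers from that mapping. A trimmed sketch of that step, using the Fetch defaults the script below falls back to in quickstart mode and assuming robot and env come from that script:

controller_config = {
    "base": {"name": "DifferentialDriveController"},
    "arm_0": {"name": "InverseKinematicsController"},
    "gripper_0": {"name": "MultiFingerGripperController"},
    "camera": {"name": "JointController"},
}
robot.reload_controllers(controller_config=controller_config)
# Refresh the initial scene state so the new controller state is preserved across resets
env.scene.update_initial_state()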
robot_control_example.py
| """
Example script demo'ing robot control.
Options for random actions, as well as selection of robot action space
"""
import torch as th
import omnigibson as og
import omnigibson.lazy as lazy
from omnigibson.macros import gm
from omnigibson.robots import REGISTERED_ROBOTS
from omnigibson.utils.ui_utils import KeyboardRobotController, choose_from_options
CONTROL_MODES = dict(
random="Use autonomous random actions (default)",
teleop="Use keyboard control",
)
SCENES = dict(
Rs_int="Realistic interactive home environment (default)",
empty="Empty environment with no objects",
)
# Don't use GPU dynamics and use flatcache for performance boost
gm.USE_GPU_DYNAMICS = False
gm.ENABLE_FLATCACHE = True
def choose_controllers(robot, random_selection=False):
"""
For a given robot, iterates over all components of the robot, and returns the requested controller type for each
component.
:param robot: BaseRobot, robot class from which to infer relevant valid controller options
:param random_selection: bool, if the selection is random (for automatic demo execution). Default False
:return dict: Mapping from individual robot component (e.g.: base, arm, etc.) to selected controller names
"""
# Create new dict to store responses from user
controller_choices = dict()
# Grab the default controller config so we have the registry of all possible controller options
default_config = robot._default_controller_config
# Iterate over all components in robot
for component, controller_options in default_config.items():
# Select controller
options = list(sorted(controller_options.keys()))
choice = choose_from_options(
options=options, name="{} controller".format(component), random_selection=random_selection
)
# Add to user responses
controller_choices[component] = choice
return controller_choices
def main(random_selection=False, headless=False, short_exec=False, quickstart=False):
"""
Robot control demo with selection
Queries the user to select a robot, the controllers, a scene and a type of input (random actions or teleop)
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Choose scene to load
scene_model = "Rs_int"
if not quickstart:
scene_model = choose_from_options(options=SCENES, name="scene", random_selection=random_selection)
# Choose robot to create
robot_name = "Fetch"
if not quickstart:
robot_name = choose_from_options(
options=list(sorted(REGISTERED_ROBOTS.keys())), name="robot", random_selection=random_selection
)
scene_cfg = dict()
if scene_model == "empty":
scene_cfg["type"] = "Scene"
else:
scene_cfg["type"] = "InteractiveTraversableScene"
scene_cfg["scene_model"] = scene_model
# Add the robot we want to load
robot0_cfg = dict()
robot0_cfg["type"] = robot_name
robot0_cfg["obs_modalities"] = ["rgb"]
robot0_cfg["action_type"] = "continuous"
robot0_cfg["action_normalize"] = True
# Compile config
cfg = dict(scene=scene_cfg, robots=[robot0_cfg])
# Create the environment
env = og.Environment(configs=cfg)
# Choose robot controller to use
robot = env.robots[0]
controller_choices = {
"base": "DifferentialDriveController",
"arm_0": "InverseKinematicsController",
"gripper_0": "MultiFingerGripperController",
"camera": "JointController",
}
if not quickstart:
controller_choices = choose_controllers(robot=robot, random_selection=random_selection)
# Choose control mode
if random_selection:
control_mode = "random"
elif quickstart:
control_mode = "teleop"
else:
control_mode = choose_from_options(options=CONTROL_MODES, name="control mode")
# Update the control mode of the robot
controller_config = {component: {"name": name} for component, name in controller_choices.items()}
robot.reload_controllers(controller_config=controller_config)
# Because the controllers have been updated, we need to update the initial state so the correct controller state
# is preserved
env.scene.update_initial_state()
# Update the simulator's viewer camera's pose so it points towards the robot
og.sim.viewer_camera.set_position_orientation(
position=th.tensor([1.46949, -3.97358, 2.21529]),
orientation=th.tensor([0.56829048, 0.09569975, 0.13571846, 0.80589577]),
)
# Reset environment and robot
env.reset()
robot.reset()
# Create teleop controller
action_generator = KeyboardRobotController(robot=robot)
# Register custom binding to reset the environment
action_generator.register_custom_keymapping(
key=lazy.carb.input.KeyboardInput.R,
description="Reset the robot",
callback_fn=lambda: env.reset(),
)
# Print out relevant keyboard info if using keyboard teleop
if control_mode == "teleop":
action_generator.print_keyboard_teleop_info()
# Other helpful user info
print("Running demo.")
print("Press ESC to quit")
# Loop control until user quits
max_steps = -1 if not short_exec else 100
step = 0
while step != max_steps:
action = (
action_generator.get_random_action() if control_mode == "random" else action_generator.get_teleop_action()
)
env.step(action=action)
step += 1
# Always shut down the environment cleanly at the end
og.clear()
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(description="Teleoperate a robot in a BEHAVIOR scene.")
parser.add_argument(
"--quickstart",
action="store_true",
help="Whether the example should be loaded with default settings for a quick start.",
)
args = parser.parse_args()
main(quickstart=args.quickstart)
|
Robot Grasping Demo
This demo is useful for...
- Understanding the difference between physical and sticky grasping
- Understanding how to teleoperate a robot through external commands
python -m omnigibson.examples.robots.grasping_mode_example
This demo lets you choose a grasping mode and then loads a Fetch robot and a cube on a table. You can then teleoperate the robot to grasp the cube, observing the difference in grasping behavior based on the grasping mode chosen. Here, physical means natural friction is required to hold objects, while sticky means that objects are constrained to the robot's gripper once contact is made.
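The grasping mode itself is just a constructor parameter in the robot config. A trimmed sketch from the script below, with the sticky mode hard-coded instead of chosen interactively:

robot0_cfg = dict(
    type="Fetch",
    obs_modalities=["rgb"],        # a grasping demo does not need all observation modalities
    action_type="continuous",
    action_normalize=True,
    grasping_mode="sticky",        # or "assisted" / "physical"
)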
grasping_mode_example.py
| """
Example script demo'ing robot manipulation control with grasping.
"""
import torch as th
import omnigibson as og
from omnigibson.macros import gm
from omnigibson.sensors import VisionSensor
from omnigibson.utils.ui_utils import KeyboardRobotController, choose_from_options
GRASPING_MODES = dict(
sticky="Sticky Mitten - Objects are magnetized when they touch the fingers and a CLOSE command is given",
assisted="Assisted Grasping - Objects are magnetized when they touch the fingers, are within the hand, and a CLOSE command is given",
physical="Physical Grasping - No additional grasping assistance applied",
)
# Don't use GPU dynamics and Use flatcache for performance boost
gm.USE_GPU_DYNAMICS = False
gm.ENABLE_FLATCACHE = True
def main(random_selection=False, headless=False, short_exec=False):
"""
Robot grasping mode demo with selection
Queries the user to select a type of grasping mode
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Choose type of grasping
grasping_mode = choose_from_options(options=GRASPING_MODES, name="grasping mode", random_selection=random_selection)
# Create environment configuration to use
scene_cfg = dict(type="Scene")
robot0_cfg = dict(
type="Fetch",
obs_modalities=["rgb"], # we're just doing a grasping demo so we don't need all observation modalities
action_type="continuous",
action_normalize=True,
grasping_mode=grasping_mode,
)
# Define objects to load
table_cfg = dict(
type="DatasetObject",
name="table",
category="breakfast_table",
model="lcsizg",
bounding_box=[0.5, 0.5, 0.8],
fixed_base=True,
position=[0.7, -0.1, 0.6],
orientation=[0, 0, 0.707, 0.707],
)
chair_cfg = dict(
type="DatasetObject",
name="chair",
category="straight_chair",
model="amgwaw",
bounding_box=None,
fixed_base=False,
position=[0.45, 0.65, 0.425],
orientation=[0, 0, -0.9990215, -0.0442276],
)
box_cfg = dict(
type="PrimitiveObject",
name="box",
primitive_type="Cube",
rgba=[1.0, 0, 0, 1.0],
size=0.05,
position=[0.53, -0.1, 0.97],
)
# Compile config
cfg = dict(scene=scene_cfg, robots=[robot0_cfg], objects=[table_cfg, chair_cfg, box_cfg])
# Create the environment
env = og.Environment(configs=cfg)
# Reset the robot
robot = env.robots[0]
robot.set_position_orientation(position=[0, 0, 0])
robot.reset()
robot.keep_still()
# Make the robot's camera(s) high-res
for sensor in robot.sensors.values():
if isinstance(sensor, VisionSensor):
sensor.image_height = 720
sensor.image_width = 720
# Update the simulator's viewer camera's pose so it points towards the robot
og.sim.viewer_camera.set_position_orientation(
position=th.tensor([-2.39951, 2.26469, 2.66227]),
orientation=th.tensor([-0.23898481, 0.48475231, 0.75464013, -0.37204802]),
)
# Create teleop controller
action_generator = KeyboardRobotController(robot=robot)
# Print out relevant keyboard info if using keyboard teleop
action_generator.print_keyboard_teleop_info()
# Other helpful user info
print("Running demo with grasping mode {}.".format(grasping_mode))
print("Press ESC to quit")
# Loop control until user quits
max_steps = -1 if not short_exec else 100
step = 0
while step != max_steps:
action = action_generator.get_random_action() if random_selection else action_generator.get_teleop_action()
for _ in range(10):
env.step(action)
step += 1
# Always shut down the environment cleanly at the end
og.clear()
if __name__ == "__main__":
main()
|
Advanced: IK Demo
This demo is useful for...
- Understanding how to construct your own IK functionality using omniverse's native lula library without explicitly utilizing all of OmniGibson's class abstractions
- Understanding how to manipulate the simulator at a lower-level than the main Environment entry point
python -m omnigibson.examples.robots.advanced.ik_example
This demo loads in a Fetch robot and an IK solver to control the robot, and then lets you move a target marker with your keyboard and query IK solutions to drive the robot's end-effector toward it.
ik_example.py
| import argparse
import time
import torch as th
import omnigibson as og
import omnigibson.lazy as lazy
from omnigibson.objects import PrimitiveObject
from omnigibson.robots import Fetch
from omnigibson.scenes import Scene
from omnigibson.utils.control_utils import IKSolver
def main(random_selection=False, headless=False, short_exec=False):
"""
Minimal example of usage of inverse kinematics solver
This example showcases how to construct your own IK functionality using omniverse's native lula library
without explicitly utilizing all of OmniGibson's class abstractions, and also showcases how to manipulate
the simulator at a lower-level than the main Environment entry point.
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Assuming that if random_selection=True, headless=True, short_exec=True, we are calling it from tests and we
# do not want to parse args (it would fail because the calling function is pytest "testfile.py")
if not (random_selection and headless and short_exec):
parser = argparse.ArgumentParser()
parser.add_argument(
"--programmatic",
"-p",
dest="programmatic_pos",
action="store_true",
help="if the IK solvers should be used with the GUI or programmatically",
)
args = parser.parse_args()
programmatic_pos = args.programmatic_pos
else:
programmatic_pos = True
# Import scene and robot (Fetch)
scene_cfg = {"type": "Scene"}
# Create Fetch robot
# Note that since we only care about IK functionality, we fix the base (this also makes the robot more stable)
# (any object can also have its fixed_base attribute set to True!)
# Note that since we're going to be setting joint position targets, we also need to make sure the robot's arm joints
# (which includes the trunk) are being controlled using joint positions
robot_cfg = {
"type": "Fetch",
"fixed_base": True,
"controller_config": {
"arm_0": {
"name": "NullJointController",
"motor_type": "position",
}
},
}
cfg = dict(scene=scene_cfg, robots=[robot_cfg])
env = og.Environment(configs=cfg)
# Update the viewer camera's pose so that it points towards the robot
og.sim.viewer_camera.set_position_orientation(
position=th.tensor([4.32248, -5.74338, 6.85436]),
orientation=th.tensor([0.39592, 0.13485, 0.29286, 0.85982]),
)
robot = env.robots[0]
# Set robot base at the origin
robot.set_position_orientation(position=th.tensor([0, 0, 0]), orientation=th.tensor([0, 0, 0, 1]))
# At least one simulation step while the simulator is playing must occur for the robot (or in general, any object)
# to be fully initialized after it is imported into the simulator
og.sim.play()
og.sim.step()
# Make sure none of the joints are moving
robot.keep_still()
# Since this demo aims to showcase how users can directly control the robot with IK,
# we will need to disable the built-in controllers in OmniGibson
robot.control_enabled = False
# Create the IK solver -- note that we are controlling both the trunk and the arm since both are part of the
# controllable kinematic chain for the end-effector!
control_idx = th.cat([robot.trunk_control_idx, robot.arm_control_idx[robot.default_arm]])
ik_solver = IKSolver(
robot_description_path=robot.robot_arm_descriptor_yamls[robot.default_arm],
robot_urdf_path=robot.urdf_path,
reset_joint_pos=robot.get_joint_positions()[control_idx],
eef_name=robot.eef_link_names[robot.default_arm],
)
# Define a helper function for executing specific end-effector commands using the ik solver
def execute_ik(pos, quat=None, max_iter=100):
og.log.info("Querying joint configuration to current marker position")
# Grab the joint positions in order to reach the desired pose target
joint_pos = ik_solver.solve(
target_pos=pos,
target_quat=quat,
tolerance_pos=0.002,
tolerance_quat=0.01,
weight_pos=20.0,
weight_quat=0.05,
max_iterations=max_iter,
initial_joint_pos=robot.get_joint_positions()[control_idx],
)
if joint_pos is not None:
og.log.info("Solution found. Setting new arm configuration.")
robot.set_joint_positions(joint_pos, indices=control_idx, drive=True)
else:
og.log.info("EE position not reachable.")
og.sim.step()
if programmatic_pos or headless:
# Sanity check IK using pre-defined hardcoded positions
query_positions = [[1, 0, 0.8], [1, 1, 1], [0.5, 0.5, 0], [0.5, 0.5, 0.5]]
for query_pos in query_positions:
execute_ik(query_pos)
time.sleep(2)
else:
# Create a visual marker to be moved by the user, representing desired end-effector position
marker = PrimitiveObject(
relative_prim_path=f"/marker",
name="marker",
primitive_type="Sphere",
radius=0.03,
visual_only=True,
rgba=[1.0, 0, 0, 1.0],
)
env.scene.add_object(marker)
# Get initial EE position and set marker to that location
command = robot.get_eef_position()
marker.set_position_orientation(position=command)
og.sim.step()
# Setup callbacks for grabbing keyboard inputs from omni
exit_now = False
def keyboard_event_handler(event, *args, **kwargs):
nonlocal command, exit_now
# Check if we've received a key press or repeat
if (
event.type == lazy.carb.input.KeyboardEventType.KEY_PRESS
or event.type == lazy.carb.input.KeyboardEventType.KEY_REPEAT
):
if event.input == lazy.carb.input.KeyboardInput.ENTER:
# Execute the command
execute_ik(pos=command)
elif event.input == lazy.carb.input.KeyboardInput.ESCAPE:
# Quit
og.log.info("Quit.")
exit_now = True
else:
# We see if we received a valid delta command, and if so, we update our command and visualized
# marker position
delta_cmd = input_to_xyz_delta_command(inp=event.input)
if delta_cmd is not None:
command = command + delta_cmd
marker.set_position_orientation(position=command)
og.sim.step()
# Callback must return True if valid
return True
# Hook up the callback function with omni's user interface
appwindow = lazy.omni.appwindow.get_default_app_window()
input_interface = lazy.carb.input.acquire_input_interface()
keyboard = appwindow.get_keyboard()
sub_keyboard = input_interface.subscribe_to_keyboard_events(keyboard, keyboard_event_handler)
# Print out helpful information to the user
print_message()
# Loop until the user requests an exit
while not exit_now:
og.sim.step()
# Always shut the simulation down cleanly at the end
og.clear()
def input_to_xyz_delta_command(inp, delta=0.01):
mapping = {
lazy.carb.input.KeyboardInput.W: th.tensor([delta, 0, 0]),
lazy.carb.input.KeyboardInput.S: th.tensor([-delta, 0, 0]),
lazy.carb.input.KeyboardInput.DOWN: th.tensor([0, 0, -delta]),
lazy.carb.input.KeyboardInput.UP: th.tensor([0, 0, delta]),
lazy.carb.input.KeyboardInput.A: th.tensor([0, delta, 0]),
lazy.carb.input.KeyboardInput.D: th.tensor([0, -delta, 0]),
}
return mapping.get(inp)
def print_message():
print("*" * 80)
print("Move the marker to a desired position to query IK and press ENTER")
print("W/S: move marker further away or closer to the robot")
print("A/D: move marker to the left or the right of the robot")
print("UP/DOWN: move marker up and down")
print("ESC: quit")
if __name__ == "__main__":
main()
|
🧰 Simulator
These examples showcase useful functionality from OmniGibson's monolithic Simulator object.
What's the difference between Environment and Simulator?
The Simulator class is a lower-level object that:
- handles importing scenes and objects into the actual simulation
- directly interfaces with the underlying physics engine
The Environment class thinly wraps the Simulator's core functionality by:
- providing convenience functions for automatically importing a predefined scene, object(s), and robot(s) (via the cfg argument), as well as a task
- providing an OpenAI Gym interface for stepping through the simulation
While most of the core functionality in Environment (as well as more fine-grained physics control) can be replicated via direct calls to Simulator (og.sim), doing so requires a deeper understanding of OmniGibson's infrastructure and is not recommended for new users.
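A rough sketch of how the two layers appear throughout the examples on this page: the Environment handles config-driven setup and Gym-style stepping, while og.sim exposes lower-level control such as stopping and resuming physics or stepping the simulation without an action.

import torch as th
import omnigibson as og

# High-level: build everything from a config dict and step with an action (Gym-style)
env = og.Environment(configs={"scene": {"type": "Scene"}})
env.step(th.empty(0))   # empty action, since no robots are loaded

# Low-level: talk to the simulator directly
og.sim.stop()           # stop physics, e.g. before adding objects to the scene
og.sim.play()           # resume physics
og.sim.step()           # advance the simulation without going through the env
og.clear()              # always shut down cleanly at the end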
State Saving and Loading Demo
This demo is useful for...
- Understanding how to interact with objects using the mouse
- Understanding how to save the active simulator state to a file
- Understanding how to restore the simulator state from a given file
python -m omnigibson.examples.simulator.sim_save_load_example
This demo loads a stripped-down scene with the Turtlebot robot, and lets you interact with objects to modify the scene. The state is then saved, written to a .json file, and then restored in the simulation.
sim_save_load_example.py
| import os
import torch as th
import omnigibson as og
import omnigibson.lazy as lazy
from omnigibson.utils.ui_utils import KeyboardEventHandler
TEST_OUT_PATH = "" # Define output directory here.
def main(random_selection=False, headless=False, short_exec=False):
"""
Prompts the user to select whether they are saving or loading an environment, and interactively
shows how an environment can be saved or restored.
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
cfg = {
"scene": {
"type": "InteractiveTraversableScene",
"scene_model": "Rs_int",
"load_object_categories": ["floors", "walls", "bed", "bottom_cabinet", "chair"],
},
"robots": [
{
"type": "Turtlebot",
"obs_modalities": ["rgb", "depth"],
},
],
}
# Create the environment
env = og.Environment(configs=cfg)
# Set the camera to a good angle
def set_camera_pose():
og.sim.viewer_camera.set_position_orientation(
position=th.tensor([-0.229375, -3.40576, 7.26143]),
orientation=th.tensor([0.27619733, -0.00230233, -0.00801152, 0.9610648]),
)
set_camera_pose()
# Give user instructions, and then loop until completed
completed = short_exec
if not short_exec and not random_selection:
# Notify user to manipulate environment until ready, then press Z to exit
print()
print("Modify the scene by SHIFT + left clicking objects and dragging them. Once finished, press Z.")
# Register callback so user knows to press space once they're done manipulating the scene
def complete_loop():
nonlocal completed
completed = True
KeyboardEventHandler.add_keyboard_callback(lazy.carb.input.KeyboardInput.Z, complete_loop)
while not completed:
action_lo, action_hi = -1, 1
env.step(th.rand(env.robots[0].action_dim) * (action_hi - action_lo) + action_lo)
print("Completed scene modification, saving scene...")
save_path = os.path.join(TEST_OUT_PATH, "saved_stage.json")
og.sim.save(json_paths=[save_path])
print("Re-loading scene...")
og.clear()
og.sim.restore(scene_files=[save_path])
# env is no longer valid after og.clear()
del env
# Take a sim step and play
og.sim.step()
og.sim.play()
set_camera_pose()
# Loop until user terminates
completed = short_exec
if not short_exec and not random_selection:
# Notify user to manipulate environment until ready, then press Z to exit
print()
print("View reloaded scene. Once finished, press Z.")
# Register callback so user knows to press space once they're done manipulating the scene
KeyboardEventHandler.add_keyboard_callback(lazy.carb.input.KeyboardInput.Z, complete_loop)
while not completed:
og.sim.step()
if __name__ == "__main__":
main()
|
🖼️ Rendering
These examples showcase how to change renderer settings in OmniGibson.
Renderer Settings Demo
This demo is useful for...
- Understanding how to use the RendererSettings class
python -m omnigibson.examples.renderer_settings.renderer_settings_example
This demo iterates over different renderer settings and shows how they can be programmatically set through the OmniGibson interface.
renderer_settings_example.py
| import torch as th
import omnigibson as og
from omnigibson.renderer_settings.renderer_settings import RendererSettings
def main(random_selection=False, headless=False, short_exec=False):
"""
Shows how to use RendererSettings class
"""
og.log.info(f"Demo {__file__}\n " + "*" * 80 + "\n Description:\n" + main.__doc__ + "*" * 80)
# Specify objects to load
banana_cfg = dict(
type="DatasetObject",
name="banana",
category="banana",
model="vvyyyv",
scale=[3.0, 5.0, 2.0],
position=[-0.906661, -0.545106, 0.136824],
orientation=[0, 0, 0.76040583, -0.6494482],
)
door_cfg = dict(
type="DatasetObject",
name="door",
category="door",
model="ohagsq",
position=[-2.0, 0, 0.70000001],
orientation=[0, 0, -0.38268343, 0.92387953],
)
# Create the scene config to load -- empty scene with a few objects
cfg = {
"scene": {
"type": "Scene",
},
"objects": [banana_cfg, door_cfg],
}
# Create the environment
env = og.Environment(configs=cfg)
# Set camera to appropriate viewing pose
cam = og.sim.viewer_camera
cam.set_position_orientation(
position=th.tensor([-4.62785, -0.418575, 0.933943]),
orientation=th.tensor([0.52196595, -0.4231939, -0.46640436, 0.5752612]),
)
def steps(n):
for _ in range(n):
env.step(th.empty(0))
# Take a few steps to let objects settle
steps(25)
# Create renderer settings object.
renderer_setting = RendererSettings()
# RendererSettings is a singleton.
renderer_setting2 = RendererSettings()
assert renderer_setting == renderer_setting2
# Set current renderer.
if not short_exec:
input("Setting renderer to Real-Time. Press [ENTER] to continue.")
renderer_setting.set_current_renderer("Real-Time")
assert renderer_setting.get_current_renderer() == "Real-Time"
steps(5)
if not short_exec:
input("Setting renderer to Interactive (Path Tracing). Press [ENTER] to continue.")
renderer_setting.set_current_renderer("Interactive (Path Tracing)")
assert renderer_setting.get_current_renderer() == "Interactive (Path Tracing)"
steps(5)
# Get all available settings.
print(renderer_setting.settings.keys())
if not short_exec:
input(
"Showcasing how to use RendererSetting APIs. Please see example script for more information. "
"Press [ENTER] to continue."
)
# Set setting (2 lines below are equivalent).
renderer_setting.set_setting(path="/app/renderer/skipMaterialLoading", value=True)
renderer_setting.common_settings.materials_settings.skip_material_loading.set(True)
# Get setting (3 lines below are equivalent).
assert renderer_setting.get_setting_from_path(path="/app/renderer/skipMaterialLoading") == True
assert renderer_setting.common_settings.materials_settings.skip_material_loading.value == True
assert renderer_setting.common_settings.materials_settings.skip_material_loading.get() == True
# Reset setting (2 lines below are equivalent).
renderer_setting.reset_setting(path="/app/renderer/skipMaterialLoading")
renderer_setting.common_settings.materials_settings.skip_material_loading.reset()
assert renderer_setting.get_setting_from_path(path="/app/renderer/skipMaterialLoading") == False
# Set setting to an unallowed value using top-level method.
# Examples below will use the "top-level" setting method.
try:
renderer_setting.set_setting(path="/app/renderer/skipMaterialLoading", value="foo")
except AssertionError as e:
print(e) # All good. We got an AssertionError.
# Set setting to a value out-of-range.
try:
renderer_setting.set_setting(path="/rtx/fog/fogColorIntensity", value=0.0)
except AssertionError as e:
print(e) # All good. We got an AssertionError.
# Set unallowed setting.
try:
renderer_setting.set_setting(path="foo", value="bar")
except NotImplementedError as e:
print(e) # All good. We got a NotImplementedError.
# Set setting but the setting group is not enabled.
# Setting is successful but there will be a warning message printed.
renderer_setting.set_setting(path="/rtx/fog/fogColorIntensity", value=1.0)
# Shutdown sim
if not short_exec:
input("Completed demo. Press [ENTER] to shutdown simulation.")
og.clear()
if __name__ == "__main__":
main()
|