# 🚀 Quickstart
Let's quickly create an environment programmatically!

**OmniGibson**'s workflow is straightforward: define the configuration of the scene, object(s), robot(s), and task you'd like to load, and then instantiate our `Environment` class with that config.
Let's start with the following:
```python
import omnigibson as og  # (1)!
from omnigibson.macros import gm  # (2)!

# Start with an empty configuration
cfg = dict()
```
1. All Python scripts should start with this line! This allows access to key global variables through the top-level package.
2. Global macros (`gm`) can always be accessed directly and modified on the fly!
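For instance, a minimal sketch of tweaking a global macro before anything loads (assuming the `HEADLESS` flag, one of the macros defined in `omnigibson.macros`):

```python
import omnigibson as og
from omnigibson.macros import gm

# Macros can be modified on the fly; setting this before environment
# creation runs the simulator without a GUI window
gm.HEADLESS = True
```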
## 🏔️ Defining a scene
Next, let's define a scene:
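The snippet is along these lines (a minimal sketch using the base `Scene` class; `floor_plane_visible` is one of its optional kwargs discussed below):

```python
cfg["scene"] = {
    "type": "Scene",  # (1)!
    "floor_plane_visible": True,  # (2)!
}
```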
1. Our configuration gets parsed automatically and generates the appropriate class instance based on `type` (the string form of the class name). In this case, we're generating the most basic scene, which only consists of a floor plane. Check out all of our available `Scene` classes!
2. In addition to specifying `type`, the remaining keyword arguments get passed directly into the class constructor. So for the base `Scene` class, you could optionally specify `"use_floor_plane"` and `"floor_plane_visible"`, whereas for the more powerful `InteractiveTraversableScene` class (which loads a curated, preconfigured scene) you can additionally specify options for filtering objects, such as `"load_object_categories"` and `"load_room_types"`. You can see all available keyword arguments by viewing the individual `Scene` class you'd like to load!
## 🎾 Defining objects
We can optionally define some objects to load into our scene:
cfg["objects"] = [ # (1)!
{
"type": "USDObject", # (2)!
"name": "ghost_stain", # (3)!
"usd_path": f"{gm.ASSET_PATH}/models/stain/stain.usd",
"category": "stain", # (4)!
"visual_only": True, # (5)!
"scale": [1.0, 1.0, 1.0], # (6)!
"position": [1.0, 2.0, 0.001], # (7)!
"orientation": [0, 0, 0, 1.0], # (8)!
},
{
"type": "DatasetObject", # (9)!
"name": "delicious_apple",
"category": "apple",
"model": "agveuv", # (10)!
"position": [0, 0, 1.0],
},
{
"type": "PrimitiveObject", # (11)!
"name": "incredible_box",
"primitive_type": "Cube", # (12)!
"rgba": [0, 1.0, 1.0, 1.0], # (13)!
"scale": [0.5, 0.5, 0.1],
"fixed_base": True, # (14)!
"position": [-1.0, 0, 1.0],
"orientation": [0, 0, 0.707, 0.707],
},
{
"type": "LightObject", # (15)!
"name": "brilliant_light",
"light_type": "Sphere", # (16)!
"intensity": 50000, # (17)!
"radius": 0.1, # (18)!
"position": [3.0, 3.0, 4.0],
},
]
1. Unlike the `"scene"` sub-config, we can define an arbitrary number of objects to load, so this is a `list` of `dict` instead of a single nested `dict`.
2. **OmniGibson** supports multiple object classes, and we showcase an instance of each core class here. A `USDObject` is our most generic object class, and generates an object sourced from the `usd_path` argument.
3. All objects must define the `name` argument! This is because **OmniGibson** enforces a globally unique naming scheme, so any created objects must have unique names assigned to them.
4. `category` is used by all object classes to assign semantic segmentation IDs.
5. `visual_only` is used by all object classes and defines whether the object is subject to both gravity and collisions.
6. `scale` is used by all object classes and defines the global (x, y, z) relative scale of the object.
7. `position` is used by all object classes and defines the initial (x, y, z) position of the object in the global frame.
8. `orientation` is used by all object classes and defines the initial (x, y, z, w) quaternion orientation of the object in the global frame.
9. A `DatasetObject` is an object pulled directly from our BEHAVIOR dataset. It includes metadata and annotations not found on a generic `USDObject`. Note that these assets are encrypted, and thus cannot be created via the `USDObject` class.
10. Instead of explicitly defining the hardcoded path to the dataset USD model, `model` (in conjunction with `category`) is used to infer the exact dataset object to load. In this case, this is the exact same underlying raw USD asset that was loaded above as a `USDObject`!
11. A `PrimitiveObject` is a programmatically generated object defining a convex primitive shape.
12. `primitive_type` defines what primitive shape to load -- see `PrimitiveObject` for available options!
13. Because this object is programmatically generated, we can also specify the color to assign to this primitive object.
14. `fixed_base` is used by all object classes and determines whether the generated object is fixed relative to the world frame. Useful for fixing in place large objects, such as furniture or structures.
15. A `LightObject` is a programmatically generated light source. It is used to directly illuminate the given scene.
16. `light_type` defines what light shape to load -- see `LightObject` for available options!
17. `intensity` defines how bright the generated light source should be.
18. `radius` is used by `Sphere` lights and determines their relative size.
## 🤖 Defining robots
We can also optionally define robots to load into our scene:
cfg["robots"] = [ # (1)!
{
"type": "Fetch", # (2)!
"name": "baby_robot",
"obs_modalities": ["scan", "rgb", "depth"], # (3)!
},
]
1. Like the `"objects"` sub-config, we can define an arbitrary number of robots to load, so this is a `list` of `dict`.
2. **OmniGibson** supports multiple robot classes, where each class represents a specific robot model. Check out our `robots` to view all available robot classes!
3. Execute `print(og.ALL_SENSOR_MODALITIES)` for a list of all available observation modalities!
## 📋 Defining a task
Lastly, we can optionally define a task to load into our scene. Since we're just getting started, let's load a "Dummy" task (which is the task loaded by default even if we don't explicitly define one in our config):
cfg["task"] = {
"type": "DummyTask", # (1)!
"termination_config": dict(), # (2)!
"reward_config": dict(), # (3)!
}
1. Check out all of **OmniGibson**'s available tasks!
2. `termination_config` configures the termination conditions for this task. It maps specific `TerminationCondition` arguments to their corresponding values to set.
3. `reward_config` configures the reward functions for this task. It maps specific `RewardFunction` arguments to their corresponding values to set.
## 🌀 Creating the environment
We're all set! Let's load the config and create our environment:
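A sketch of this step (assuming `Environment` accepts the assembled config via its `configs` argument):

```python
env = og.Environment(configs=cfg)
```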
Once the environment loads, we can interact with it through an interface similar to OpenAI's Gym interface:
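For example, a short random-action rollout might look like this (a sketch assuming the classic four-tuple Gym step API; the exact return signature may vary across versions):

```python
for _ in range(100):
    action = env.action_space.sample()  # random action within the robot's action space
    obs, rew, done, info = env.step(action)
    if done:
        env.reset()
```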
**What happens if we have no robot loaded?**

Even if we have no robot loaded, we still need to define an "action" to pass into the environment. In this case, our action space is 0, so you can simply pass `[]` or `np.array([])` into the `env.step()` call!
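As a quick sketch (same step-API assumption as above):

```python
import numpy as np

# With no robot in the scene, the action space is empty, so an
# empty array is a valid action
obs, rew, done, info = env.step(np.array([]))
```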
Putting everything together into a single `my_first_env.py` script:
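A condensed sketch, reusing the config pieces from the sections above (the `og.shutdown()` call and the four-tuple `step` return are assumptions that may vary by version):

```python
import omnigibson as og
from omnigibson.macros import gm

# Assemble the full configuration from the sections above
cfg = dict()
cfg["scene"] = {"type": "Scene", "floor_plane_visible": True}
cfg["objects"] = [
    # ... object dicts from the "Defining objects" section ...
]
cfg["robots"] = [
    {
        "type": "Fetch",
        "name": "baby_robot",
        "obs_modalities": ["scan", "rgb", "depth"],
    },
]
cfg["task"] = {
    "type": "DummyTask",
    "termination_config": dict(),
    "reward_config": dict(),
}

# Create the environment
env = og.Environment(configs=cfg)

# Step through a short random-action rollout
for _ in range(100):
    obs, rew, done, info = env.step(env.action_space.sample())
    if done:
        env.reset()

# Cleanly shut down the simulator
og.shutdown()
```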
## 👀 Looking around
Look around by:

- `Left-CLICK + Drag`: Tilt
- `Scroll-Wheel-CLICK + Drag`: Pan
- `Scroll-Wheel UP / DOWN`: Zoom
Interact with objects by:

- `Shift + Left-CLICK + Drag`: Apply force on selected object
Or, for more fine-grained control, run:
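A sketch of the call (assuming the keyboard-teleoperation helper exposed on the simulator interface):

```python
og.sim.enable_viewer_camera_teleoperation()  # (1)!
```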
1. This allows you to move the camera precisely with your keyboard, record camera poses, and dynamically modify lights!
Or, for programmatic control, directly set the viewer camera's global pose:
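For instance (a sketch assuming the viewer camera exposes a `set_position_orientation` setter; the position and quaternion values here are arbitrary):

```python
import numpy as np

# Arbitrary example pose: position (x, y, z) and quaternion (x, y, z, w)
og.sim.viewer_camera.set_position_orientation(
    position=np.array([1.0, -1.0, 2.0]),
    orientation=np.array([0, 0, 0, 1.0]),
)
```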
**Next:** Check out some of **OmniGibson**'s breadth of features from our Modules pages!