vision_utils
RandomScale
Rescale the input PIL.Image to the given size.
Args:
minsize (sequence or int): Desired minimum output size. If minsize is a sequence like
(w, h), the output size will be matched to it exactly. If minsize is an int,
the smaller edge of the image will be matched to this number,
i.e., if height > width, the image will be rescaled to
(size * height / width, size).
maxsize (sequence or int): Desired maximum output size. Follows the same convention
as minsize: a sequence (w, h) is matched exactly, while an int is matched
to the smaller edge of the image.
interpolation (int, optional): Desired interpolation. Default is PIL.Image.BILINEAR.
Source code in omnigibson/utils/vision_utils.py
__call__(img)
Parameters:

Name | Type | Description | Default |
---|---|---|---|
img | PIL.Image | Image to be scaled. | required |

Returns:

Type | Description |
---|---|
PIL.Image | Rescaled image. |
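The smaller-edge rule described above can be sketched as a standalone helper (hypothetical, not the omnigibson implementation; output follows the (height, width) convention used in the docstring):

```python
def resized_hw(height, width, size):
    """Compute the output (height, width) when the smaller image edge
    is matched to an int `size` (illustrative helper only)."""
    if height > width:
        # Smaller edge is the width -> output (size * height / width, size)
        return (round(size * height / width), size)
    # Smaller edge is the height -> scale the width instead
    return (size, round(size * width / height))

print(resized_hw(800, 400, 100))  # (200, 100)
```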
Remapper
Remaps values in an image from old_mapping to new_mapping using an efficient key_array. See more details in the remap method.
clear()
remap(old_mapping, new_mapping, image)
Remaps values in the given image from old_mapping to new_mapping using an efficient key_array. If the image contains values that are not in old_mapping, they are remapped to the value in new_mapping that corresponds to 'unlabelled'.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
old_mapping | dict | The old mapping dictionary that maps a set of image values to labels, e.g. {1: 'desk', 2: 'chair'}. | required |
new_mapping | dict | The new mapping dictionary that maps another set of image values to labels, e.g. {5: 'desk', 7: 'chair', 100: 'unlabelled'}. | required |
image | np.ndarray | The 2D image to remap, e.g. [[1, 3], [1, 2]]. | required |

Returns:

Type | Description |
---|---|
np.ndarray | The remapped image, e.g. [[5, 100], [5, 7]]. |
dict | The remapped labels dictionary, e.g. {5: 'desk', 7: 'chair', 100: 'unlabelled'}. |
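The key-array technique can be sketched as follows (a minimal illustration of the idea, not the omnigibson source: build a lookup table indexed by old pixel value, then remap the whole image with a single fancy-indexing operation):

```python
import numpy as np

def remap_with_key_array(old_mapping, new_mapping, image):
    """Remap image values via a lookup table; values absent from
    old_mapping fall back to the 'unlabelled' id (sketch only)."""
    label_to_new = {label: new_id for new_id, label in new_mapping.items()}
    unlabelled = label_to_new['unlabelled']
    # key_array[v] gives the new id for old id v; unknown ids -> unlabelled
    key_array = np.full(int(image.max()) + 1, unlabelled, dtype=np.int64)
    for old_id, label in old_mapping.items():
        key_array[old_id] = label_to_new.get(label, unlabelled)
    remapped = key_array[image]  # one vectorized lookup for the whole image
    remapped_labels = {new_id: label for new_id, label in new_mapping.items()
                       if label in old_mapping.values() or label == 'unlabelled'}
    return remapped, remapped_labels

old = {1: 'desk', 2: 'chair'}
new = {5: 'desk', 7: 'chair', 100: 'unlabelled'}
out, labels = remap_with_key_array(old, new, np.array([[1, 3], [1, 2]]))
print(out)  # [[  5 100]
            #  [  5   7]]
```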
remap_bbox(semantic_id)
Remaps a semantic id to a new id using the key_array.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
semantic_id | int | The semantic id to remap. | required |

Returns:

Type | Description |
---|---|
int | The remapped id. |
colorize_bboxes_3d(bbox_3d_data, rgb_image, camera_params)
Project 3D bounding box data onto 2D and colorize the bounding boxes for visualization. Reference: https://forums.developer.nvidia.com/t/mathematical-definition-of-3d-bounding-boxes-annotator-nvidia-omniverse-isaac-sim/223416
Parameters:

Name | Type | Description | Default |
---|---|---|---|
bbox_3d_data | np.ndarray | 3D bounding box data | required |
rgb_image | np.ndarray | RGB image | required |
camera_params | dict | Camera parameters | required |

Returns:

Type | Description |
---|---|
np.ndarray | RGB image with 3D bounding boxes drawn |
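The core projection step can be sketched with a plain pinhole model (illustrative only; the actual layout of Isaac Sim's camera_params dict differs, and the intrinsic/extrinsic names here are assumptions):

```python
import numpy as np

def project_points(points_3d, K, extrinsic):
    """Project Nx3 world-space points to pixel coordinates with a
    pinhole camera model (sketch, not the omnigibson implementation)."""
    pts_h = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    cam = (extrinsic @ pts_h.T).T            # world -> camera frame (Nx3)
    uvw = (K @ cam.T).T                      # camera -> image plane
    return uvw[:, :2] / uvw[:, 2:3]          # perspective divide -> pixels

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
extrinsic = np.eye(4)[:3]                    # camera at the world origin
corners = np.array([[0.0, 0.0, 2.0]])        # one bbox corner, 2 m ahead
print(project_points(corners, K, extrinsic)) # lands at the principal point
```

Each of the eight bounding-box corners would be projected this way before drawing the connecting edges onto the RGB image.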
randomize_colors(N, bright=True)
Modified from https://github.com/matterport/Mask_RCNN/blob/master/mrcnn/visualize.py#L59 Generate random colors. To get visually distinct colors, generate them in HSV space then convert to RGB.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
N | int | Number of colors to generate | required |
bright | bool | Whether to increase the brightness of the colors or not | True |
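The HSV-then-RGB approach mentioned above can be sketched like this (a simplified take on the referenced Mask_RCNN snippet, not the omnigibson source):

```python
import colorsys
import random

def hsv_palette(N, bright=True):
    """Generate N visually distinct RGB colors by sampling evenly
    spaced hues in HSV, then converting to RGB (sketch only)."""
    brightness = 1.0 if bright else 0.7
    hsv = [(i / N, 1.0, brightness) for i in range(N)]
    colors = [colorsys.hsv_to_rgb(*c) for c in hsv]
    random.shuffle(colors)  # decorrelate color order from ID order
    return colors

palette = hsv_palette(5)
```

Evenly spaced hues keep the colors maximally separated on the color wheel, which is what makes them easy to tell apart in a visualization.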
segmentation_to_rgb(seg_im, N, colors=None)
Helper function to visualize segmentations as RGB frames. NOTE: assumes that geom IDs go up to N at most - if not, multiple geoms might be assigned to the same color.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
seg_im | (W, H)-array | Segmentation image | required |
N | int | Maximum segmentation ID from @seg_im | required |
colors | None or list of 3-array | If specified, colors to apply to different segmentation IDs. Otherwise, will be generated randomly | None |
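The ID-to-color mapping can be sketched as a palette lookup (illustrative only; the `% N` wraparound mirrors the note above that IDs beyond N collide onto the same color):

```python
import numpy as np

def seg_to_rgb(seg_im, N, colors=None):
    """Map each segmentation ID to an RGB color via a palette lookup
    table (sketch, not the omnigibson implementation)."""
    if colors is None:
        rng = np.random.default_rng(0)
        colors = rng.random((N, 3))   # random placeholder palette
    colors = np.asarray(colors)
    return colors[seg_im % N]         # (W, H) int IDs -> (W, H, 3) floats

seg = np.array([[0, 1], [1, 2]])
rgb = seg_to_rgb(seg, 3)
print(rgb.shape)  # (2, 2, 3)
```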