Environments¶
Main Classes¶
This library contains two main classes, one inheriting from Gym's Env class, and another from the GoalEnv class.
- class gym_agx.envs.agx_env.AgxEnv(scene_path, n_substeps, n_actions, observation_type, image_size, camera_pose, no_graphics, args)¶
Superclass for all AGX Dynamics environments. Initializes AGX, loads scene from file and builds it.
- metadata = {'render.modes': ['osg', 'debug']}¶
- property dt¶
- seed(seed=None)¶
Sets the seed for this env's random number generator(s).
- Note:
Some environments use multiple pseudorandom number generators. We want to capture all such seeds used in order to ensure that there aren't accidental correlations between multiple generators.
- Returns:
- list<bigint>: Returns the list of seeds used in this env's random number generators. The first value in the list should be the "main" seed, or the value which a reproducer should pass to 'seed'. Often, the main seed equals the provided 'seed', but this won't be true if seed=None, for example.
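As a minimal usage sketch (the variable `env` stands for an already-constructed instance of a concrete subclass, which is an assumption of this example):

    seeds = env.seed(42)   # seed the environment's random number generator(s)
    main_seed = seeds[0]   # the "main" seed; pass this to seed() to reproduce the run
    obs = env.reset()      # subsequent episodes now draw from the seeded generator(s)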
- step(action)¶
Run one timestep of the environment's dynamics. When end of episode is reached, you are responsible for calling reset() to reset this environment's state.
Accepts an action and returns a tuple (observation, reward, done, info).
- Args:
action (object): an action provided by the agent
- Returns:
observation (object): agent's observation of the current environment
reward (float): amount of reward returned after previous action
done (bool): whether the episode has ended, in which case further step() calls will return undefined results
info (dict): contains auxiliary diagnostic information (helpful for debugging, and sometimes learning)
- reset()¶
Resets the environment to an initial state and returns an initial observation.
Note that this function should not reset the environment's random number generator(s); random variables in the environment's state should be sampled independently between multiple calls to reset(). In other words, each call of reset() should yield an environment suitable for a new episode, independent of previous episodes.
- Returns:
observation (object): the initial observation.
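To make the step()/reset() contract concrete, the sketch below runs one episode with random actions. It assumes `env` is an already-constructed environment instance and uses only the standard Gym interface (action_space.sample() and the (observation, reward, done, info) tuple):

    obs = env.reset()                       # start a new episode
    done = False
    while not done:
        action = env.action_space.sample()  # sample a random action
        obs, reward, done, info = env.step(action)
    env.close()                             # release simulation resources when finished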
- close()¶
Override close in your subclass to perform any necessary cleanup.
Environments will automatically close() themselves when garbage collected or when the program exits.
- render(mode='human')¶
Renders the environment.
The set of supported modes varies per environment. (And some environments do not support rendering at all.) By convention, if mode is:
human: render to the current display or terminal and return nothing. Usually for human consumption.
rgb_array: Return a numpy.ndarray with shape (x, y, 3), representing RGB values for an x-by-y pixel image, suitable for turning into a video.
ansi: Return a string (str) or StringIO.StringIO containing a terminal-style text representation. The text can include newlines and ANSI escape sequences (e.g. for colors).
- Note:
- Make sure that your class's metadata 'render.modes' key includes the list of supported modes. It's recommended to call super() in implementations to use the functionality of this method.
- Args:
mode (str): the mode to render with
Example:
    class MyEnv(Env):
        metadata = {'render.modes': ['human', 'rgb_array']}

        def render(self, mode='human'):
            if mode == 'rgb_array':
                return np.array(...)  # return RGB frame suitable for video
            elif mode == 'human':
                ...  # pop up a window and render
            else:
                super(MyEnv, self).render(mode=mode)  # just raise an exception
- class gym_agx.envs.agx_goal_env.AgxGoalEnv(scene_path, n_substeps, n_actions, observation_config, camera_pose, osg_window, agx_only, args)¶
Superclass for all AGX Dynamics environments. Initializes AGX, loads scene from file and builds it.
- metadata = {'render.modes': ['osg', 'debug', 'human', 'depth']}¶
- property dt¶
- property timestamp¶
- compute_reward(achieved_goal, goal, info)¶
Compute the step reward. This externalizes the reward function and makes it dependent on a desired goal and the one that was achieved. If you wish to include additional rewards that are independent of the goal, you can include the necessary values to derive it in 'info' and compute it accordingly.
- Args:
achieved_goal (object): the goal that was achieved during execution
desired_goal (object): the desired goal that we asked the agent to attempt to achieve
info (dict): an info dictionary with additional information
- Returns:
float: The reward that corresponds to the provided achieved goal w.r.t. the desired goal. Note that the following should always hold true:

    ob, reward, done, info = env.step(action)
    assert reward == env.compute_reward(ob['achieved_goal'], ob['desired_goal'], info)
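Because the reward is a function of the achieved and desired goals (plus info), it can also be recomputed after the fact for a substituted goal, e.g. for hindsight experience replay. A minimal sketch, assuming a dict observation with the standard 'achieved_goal' and 'desired_goal' keys and a hypothetical substitute_goal:

    obs, reward, done, info = env.step(action)

    # Recompute the reward as if substitute_goal had been the desired goal
    # (substitute_goal could be, for example, a goal achieved later in the episode).
    relabelled_reward = env.compute_reward(obs['achieved_goal'], substitute_goal, info)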
- step(action)¶
Run one timestep of the environment's dynamics. When end of episode is reached, you are responsible for calling reset() to reset this environment's state.
Accepts an action and returns a tuple (observation, reward, done, info).
- Args:
action (object): an action provided by the agent
- Returns:
observation (object): agent's observation of the current environment
reward (float): amount of reward returned after previous action
done (bool): whether the episode has ended, in which case further step() calls will return undefined results
info (dict): contains auxiliary diagnostic information (helpful for debugging, and sometimes learning)
- reset()¶
Resets the environment to an initial state and returns an initial observation.
Note that this function should not reset the environment's random number generator(s); random variables in the environment's state should be sampled independently between multiple calls to reset(). In other words, each call of reset() should yield an environment suitable for a new episode, independent of previous episodes.
- Returns:
observation (object): the initial observation.
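In the goal-based environment the observation follows Gym's GoalEnv convention and is a dictionary. A minimal sketch, assuming the standard 'observation', 'achieved_goal' and 'desired_goal' keys hold NumPy arrays:

    obs = env.reset()
    print(obs['observation'].shape)    # state features observed by the agent
    print(obs['achieved_goal'].shape)  # the goal currently achieved
    print(obs['desired_goal'].shape)   # the goal the agent should reach

    obs, reward, done, info = env.step(env.action_space.sample())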
- close()¶
Override close in your subclass to perform any necessary cleanup.
Environments will automatically close() themselves when garbage collected or when the program exits.
- render(mode='human')¶
Renders the environment.
The set of supported modes varies per environment. (And some environments do not support rendering at all.) By convention, if mode is:
human: render to the current display or terminal and return nothing. Usually for human consumption.
rgb_array: Return a numpy.ndarray with shape (x, y, 3), representing RGB values for an x-by-y pixel image, suitable for turning into a video.
ansi: Return a string (str) or StringIO.StringIO containing a terminal-style text representation. The text can include newlines and ANSI escape sequences (e.g. for colors).
- Note:
- Make sure that your class's metadata 'render.modes' key includes the list of supported modes. It's recommended to call super() in implementations to use the functionality of this method.
- Args:
mode (str): the mode to render with
Example:
    class MyEnv(Env):
        metadata = {'render.modes': ['human', 'rgb_array']}

        def render(self, mode='human'):
            if mode == 'rgb_array':
                return np.array(...)  # return RGB frame suitable for video
            elif mode == 'human':
                ...  # pop up a window and render
            else:
                super(MyEnv, self).render(mode=mode)  # just raise an exception
DloEnv Class¶
Intermediate class that inherits from AgxGoalEnv and abstracts away several methods that are common to explicit shape control problems of deformable linear objects (DLOs).
- class gym_agx.envs.dlo_env.DloEnv(args, scene_path, n_substeps, end_effectors, observation_config, camera_config, reward_config, randomized_goal, goal_scene_path, show_goal, osg_window=True, agx_only=False)¶
Superclass for all explicit shape control environments with DLOs.
- render(mode='human')¶
Renders the environment.
The set of supported modes varies per environment. (And some environments do not support rendering at all.) By convention, if mode is:
human: render to the current display or terminal and return nothing. Usually for human consumption.
rgb_array: Return a numpy.ndarray with shape (x, y, 3), representing RGB values for an x-by-y pixel image, suitable for turning into a video.
ansi: Return a string (str) or StringIO.StringIO containing a terminal-style text representation. The text can include newlines and ANSI escape sequences (e.g. for colors).
- Note:
- Make sure that your class's metadata 'render.modes' key includes the list of supported modes. It's recommended to call super() in implementations to use the functionality of this method.
- Args:
mode (str): the mode to render with
Example:
    class MyEnv(Env):
        metadata = {'render.modes': ['human', 'rgb_array']}

        def render(self, mode='human'):
            if mode == 'rgb_array':
                return np.array(...)  # return RGB frame suitable for video
            elif mode == 'human':
                ...  # pop up a window and render
            else:
                super(MyEnv, self).render(mode=mode)  # just raise an exception
- compute_reward(achieved_goal, desired_goal, info)¶
Compute the step reward. This externalizes the reward function and makes it dependent on a desired goal and the one that was achieved. If you wish to include additional rewards that are independent of the goal, you can include the necessary values to derive it in 'info' and compute it accordingly.
- Args:
achieved_goal (object): the goal that was achieved during execution
desired_goal (object): the desired goal that we asked the agent to attempt to achieve
info (dict): an info dictionary with additional information
- Returns:
float: The reward that corresponds to the provided achieved goal w.r.t. the desired goal. Note that the following should always hold true:

    ob, reward, done, info = env.step(action)
    assert reward == env.compute_reward(ob['achieved_goal'], ob['desired_goal'], info)