Implicit Shape Control Environments
PegInHoleEnv
- class gym_agx.envs.implicit.peg_in_hole_env.PegInHoleEnv(n_substeps=1, reward_type='dense', observation_type='state', headless=False, image_size=[64, 64], **kwargs)
Peg-in-hole environment.
- render(mode='human')
Renders the environment.
The set of supported modes varies per environment. (And some environments do not support rendering at all.) By convention, if mode is:
human: render to the current display or terminal and return nothing. Usually for human consumption.
rgb_array: Return a numpy.ndarray with shape (x, y, 3), representing RGB values for an x-by-y pixel image, suitable for turning into a video.
ansi: Return a string (str) or StringIO.StringIO containing a terminal-style text representation. The text can include newlines and ANSI escape sequences (e.g. for colors).
- Note:
Make sure that your class's metadata 'render.modes' key includes the list of supported modes. It's recommended to call super() in implementations to use the functionality of this method.
- Args:
mode (str): the mode to render with
Example:

    class MyEnv(Env):
        metadata = {'render.modes': ['human', 'rgb_array']}

        def render(self, mode='human'):
            if mode == 'rgb_array':
                return np.array(...)  # return RGB frame suitable for video
            elif mode == 'human':
                ...  # pop up a window and render
            else:
                super(MyEnv, self).render(mode=mode)  # just raise an exception
- step(action)
Run one timestep of the environment's dynamics. When the end of an episode is reached, you are responsible for calling reset() to reset this environment's state.
Accepts an action and returns a tuple (observation, reward, done, info).
- Args:
action (object): an action provided by the agent
- Returns:
observation (object): agent's observation of the current environment
reward (float): amount of reward returned after previous action
done (bool): whether the episode has ended, in which case further step() calls will return undefined results
info (dict): contains auxiliary diagnostic information (helpful for debugging, and sometimes learning)
- reset()
Resets the environment to an initial state and returns an initial observation.
Note that this function should not reset the environment's random number generator(s); random variables in the environment's state should be sampled independently between multiple calls to reset(). In other words, each call of reset() should yield an environment suitable for a new episode, independent of previous episodes.
- Returns:
observation (object): the initial observation.
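A minimal interaction loop for this environment might look like the sketch below. It is not taken from the package's own examples: it assumes AGX Dynamics is installed and licensed, that the environment exposes the standard Gym action_space attribute, and that the documented default constructor arguments are sufficient.

    from gym_agx.envs.implicit.peg_in_hole_env import PegInHoleEnv

    # Documented defaults; headless=False is assumed to open a viewer window.
    env = PegInHoleEnv(n_substeps=1, reward_type='dense',
                       observation_type='state', headless=False)

    observation = env.reset()
    for _ in range(200):
        action = env.action_space.sample()  # random policy, for illustration only
        observation, reward, done, info = env.step(action)
        env.render(mode='human')
        if done:
            observation = env.reset()  # the caller is responsible for resetting
    env.close()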
RubberBandEnv
- class gym_agx.envs.implicit.rubber_band_env.RubberBandEnv(n_substeps=1, reward_type='dense', observation_type='state', headless=False, image_size=[64, 64], **kwargs)
Rubber band environment.
- render(mode='human')
Renders the environment.
The set of supported modes varies per environment. (And some environments do not support rendering at all.) By convention, if mode is:
human: render to the current display or terminal and return nothing. Usually for human consumption.
rgb_array: Return a numpy.ndarray with shape (x, y, 3), representing RGB values for an x-by-y pixel image, suitable for turning into a video.
ansi: Return a string (str) or StringIO.StringIO containing a terminal-style text representation. The text can include newlines and ANSI escape sequences (e.g. for colors).
- Note:
Make sure that your class's metadata 'render.modes' key includes the list of supported modes. It's recommended to call super() in implementations to use the functionality of this method.
- Args:
mode (str): the mode to render with
Example:

    class MyEnv(Env):
        metadata = {'render.modes': ['human', 'rgb_array']}

        def render(self, mode='human'):
            if mode == 'rgb_array':
                return np.array(...)  # return RGB frame suitable for video
            elif mode == 'human':
                ...  # pop up a window and render
            else:
                super(MyEnv, self).render(mode=mode)  # just raise an exception
- step(action)
Run one timestep of the environment's dynamics. When the end of an episode is reached, you are responsible for calling reset() to reset this environment's state.
Accepts an action and returns a tuple (observation, reward, done, info).
- Args:
action (object): an action provided by the agent
- Returns:
observation (object): agent's observation of the current environment
reward (float): amount of reward returned after previous action
done (bool): whether the episode has ended, in which case further step() calls will return undefined results
info (dict): contains auxiliary diagnostic information (helpful for debugging, and sometimes learning)
- reset()
Resets the environment to an initial state and returns an initial observation.
Note that this function should not reset the environment's random number generator(s); random variables in the environment's state should be sampled independently between multiple calls to reset(). In other words, each call of reset() should yield an environment suitable for a new episode, independent of previous episodes.
- Returns:
observation (object): the initial observation.
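The rgb_array render mode described above can be used to collect frames for a video. The sketch below assumes 'rgb_array' is listed in this environment's render.modes metadata, that image_size controls the rendered resolution, and that off-screen rendering works with headless=True; none of this is confirmed by this page.

    from gym_agx.envs.implicit.rubber_band_env import RubberBandEnv

    env = RubberBandEnv(n_substeps=1, reward_type='dense',
                        observation_type='state', headless=True,
                        image_size=[64, 64])

    observation = env.reset()
    frames = []
    for _ in range(100):
        observation, reward, done, info = env.step(env.action_space.sample())
        frames.append(env.render(mode='rgb_array'))  # one (x, y, 3) array per step
        if done:
            break
    env.close()
    # frames can then be written out, e.g. with imageio.mimwrite('episode.mp4', frames)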
CableClosingEnv
- class gym_agx.envs.implicit.cable_closing_env.CableClosingEnv(n_substeps=1, reward_type='dense', observation_type='gt', headless=False, **kwargs)
Cable closing environment.
- render(mode='human')
Renders the environment.
The set of supported modes varies per environment. (And some environments do not support rendering at all.) By convention, if mode is:
human: render to the current display or terminal and return nothing. Usually for human consumption.
rgb_array: Return a numpy.ndarray with shape (x, y, 3), representing RGB values for an x-by-y pixel image, suitable for turning into a video.
ansi: Return a string (str) or StringIO.StringIO containing a terminal-style text representation. The text can include newlines and ANSI escape sequences (e.g. for colors).
- Note:
Make sure that your class's metadata 'render.modes' key includes the list of supported modes. It's recommended to call super() in implementations to use the functionality of this method.
- Args:
mode (str): the mode to render with
Example:

    class MyEnv(Env):
        metadata = {'render.modes': ['human', 'rgb_array']}

        def render(self, mode='human'):
            if mode == 'rgb_array':
                return np.array(...)  # return RGB frame suitable for video
            elif mode == 'human':
                ...  # pop up a window and render
            else:
                super(MyEnv, self).render(mode=mode)  # just raise an exception
- step(action)
Run one timestep of the environment's dynamics. When the end of an episode is reached, you are responsible for calling reset() to reset this environment's state.
Accepts an action and returns a tuple (observation, reward, done, info).
- Args:
action (object): an action provided by the agent
- Returns:
observation (object): agent's observation of the current environment
reward (float): amount of reward returned after previous action
done (bool): whether the episode has ended, in which case further step() calls will return undefined results
info (dict): contains auxiliary diagnostic information (helpful for debugging, and sometimes learning)
- reset()
Resets the environment to an initial state and returns an initial observation.
Note that this function should not reset the environment's random number generator(s); random variables in the environment's state should be sampled independently between multiple calls to reset(). In other words, each call of reset() should yield an environment suitable for a new episode, independent of previous episodes.
- Returns:
observation (object): the initial observation.
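Putting step() and reset() together, per-episode return can be accumulated as in the sketch below. The same caveats apply as for the examples above; additionally, note that this class defaults to observation_type='gt' and takes no image_size argument, and the step cap of 500 is an arbitrary safeguard, not a documented episode length.

    from gym_agx.envs.implicit.cable_closing_env import CableClosingEnv

    env = CableClosingEnv(n_substeps=1, reward_type='dense',
                          observation_type='gt', headless=True)

    for episode in range(5):
        observation = env.reset()  # each reset yields a fresh, independent episode
        episode_return = 0.0
        for t in range(500):  # cap steps in case the episode never terminates
            observation, reward, done, info = env.step(env.action_space.sample())
            episode_return += reward
            if done:
                break
        print('episode', episode, 'return', episode_return)
    env.close()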