cherry.wrappers

cherry.wrappers.base_wrapper.Wrapper

[Source]

Description

This class lets you chain environment wrappers while retaining access to the attributes and methods of inner wrappers.

Example
env = gym.make('MyEnv-v0')
env = cherry.wrappers.Logger(env)
env = cherry.wrappers.Runner(env)
env.log('asdf', 23)  # Uses log() method from cherry.wrappers.Logger.

action_size property readonly

Description

The number of dimensions of a single action.

discrete_action property readonly

Description

Returns whether the action space is discrete.

discrete_state property readonly

Description

Returns whether the state space is discrete.

is_vectorized property readonly

Description

Returns whether the environment is vectorized.

state_size property readonly

Description

The (flattened) size of a single state.
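The attribute chaining described above can be sketched with a minimal delegation pattern. The classes below are illustrative stand-ins, not cherry's actual implementation: unknown attribute lookups fall through to the wrapped environment, so inner wrappers and the base environment stay reachable.

```python
# Minimal sketch of attribute delegation through chained wrappers;
# illustrative only, not cherry's exact implementation.
class Wrapper:
    def __init__(self, env):
        self.env = env

    def __getattr__(self, name):
        # Only called for attributes not found on this wrapper:
        # fall back to the wrapped environment.
        return getattr(self.env, name)


class Logger(Wrapper):
    def log(self, key, value):
        return (key, value)


class Runner(Wrapper):
    pass


class MyEnv:
    state_size = 4


env = Runner(Logger(MyEnv()))
env.log('asdf', 23)  # resolved on the inner Logger
env.state_size       # resolved on the base environment
```

Because `__getattr__` is only invoked for missing attributes, each wrapper's own methods take precedence over those of the environments it wraps.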

cherry.wrappers.runner_wrapper.Runner

[Source]

Description

Helps collect transitions, given a get_action function.

Example
env = MyEnv()
env = Runner(env)
replay = env.run(lambda x: policy(x), steps=100)
# or
replay = env.run(lambda x: policy(x), episodes=5)

run(self, get_action, steps = None, episodes = None, render = False)

Description

Runs get_action on the environment for the requested number of steps or episodes, and returns the collected transitions.

Info

Either use the steps OR the episodes argument.

Arguments
  • get_action (function) - Given a state, returns the action to be taken.
  • steps (int, optional, default=None) - The number of steps to be collected.
  • episodes (int, optional, default=None) - The number of episodes to be collected.
  • render (bool, optional, default=False) - Whether to render the environment while collecting transitions.
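The collection loop behind run can be sketched as follows. This is a simplified stand-in assuming a gym-like step() that returns (state, reward, done, info); it is not cherry's actual code, and the toy CountdownEnv exists only for illustration.

```python
# Sketch of a Runner-style collection loop; illustrative, not cherry's code.
def run(env, get_action, steps=None, episodes=None):
    # Either steps OR episodes must be given, never both.
    assert (steps is None) != (episodes is None), 'Use steps OR episodes.'
    transitions = []
    num_episodes = 0
    state = env.reset()
    while True:
        action = get_action(state)
        next_state, reward, done, info = env.step(action)
        transitions.append((state, action, reward, next_state, done))
        if done:
            num_episodes += 1
            next_state = env.reset()
        state = next_state
        if steps is not None and len(transitions) >= steps:
            break
        if episodes is not None and num_episodes >= episodes:
            break
    return transitions


class CountdownEnv:
    # Toy environment that terminates after 3 steps.
    def __init__(self):
        self.t = 0

    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        return self.t, 1.0, self.t >= 3, {}
```

With steps=5 the loop stops mid-episode after 5 transitions; with episodes=2 it runs to the end of the second episode regardless of length.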

cherry.wrappers.torch_wrapper.Torch

This wrapper converts:
  • actions from Tensors to numpy,
  • states from lists/numpy to Tensors.

Example
action = Categorical(Tensor([1, 2, 3])).sample()
env.step(action)

__init__(self, env, device = None, env_device = None) special
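The two conversions can be sketched as standalone helpers. These are illustrative functions, not cherry's internals: actions become numpy arrays before being handed to the underlying environment, and states become float Tensors on the requested device.

```python
import torch

# Sketch of the conversions a Torch-style wrapper performs
# (illustrative; not cherry's exact implementation).
def action_to_numpy(action):
    # Detach and move to CPU before converting, so gradients and
    # GPU placement never leak into the environment.
    if isinstance(action, torch.Tensor):
        return action.detach().cpu().numpy()
    return action


def state_to_tensor(state, device=None):
    # Accepts lists or numpy arrays and returns a float Tensor.
    return torch.as_tensor(state, dtype=torch.float32, device=device)
```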

cherry.wrappers.reward_clipper_wrapper.RewardClipper

__init__(self, env) special
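The name suggests per-step reward clipping. A common scheme, popularized by DQN, maps each reward to its sign; the sketch below uses that scheme purely for illustration, and cherry's RewardClipper may clip differently. SignRewardClipper and ToyEnv are hypothetical names.

```python
# Hypothetical sketch of a DQN-style reward clipper, mapping each
# reward to its sign in {-1, 0, +1}; cherry's RewardClipper may
# use a different clipping scheme.
class SignRewardClipper:
    def __init__(self, env):
        self.env = env

    def step(self, action):
        state, reward, done, info = self.env.step(action)
        clipped = float((reward > 0) - (reward < 0))  # sign of the reward
        return state, clipped, done, info


class ToyEnv:
    # Toy environment emitting a fixed sequence of rewards.
    def __init__(self, rewards):
        self.rewards = list(rewards)

    def step(self, action):
        return 0, self.rewards.pop(0), not self.rewards, {}


env = SignRewardClipper(ToyEnv([10.0, -0.5, 0.0]))
clipped = [env.step(None)[1] for _ in range(3)]
```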

cherry.wrappers.timestep_wrapper.AddTimestep

Adds timestep information to the state input.

Modified from Ilya Kostrikov's implementation:

https://github.com/ikostrikov/pytorch-a2c-ppo-acktr/

__init__(self, env = None) special
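The core transformation, appending the current timestep to the observation, can be sketched as below. add_timestep is a hypothetical helper, not the wrapper's actual code.

```python
import numpy as np

# Sketch of appending a timestep counter to the observation,
# in the spirit of AddTimestep (illustrative, not the exact code).
def add_timestep(state, t):
    # Flatten the state and concatenate the current timestep.
    flat = np.asarray(state, dtype=np.float64).ravel()
    return np.concatenate([flat, [t]])
```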

cherry.wrappers.action_space_scaler_wrapper.ActionSpaceScaler

Scales the action space to be in the range (-clip, clip).

Adapted from Vitchyr Pong's RLkit: https://github.com/vitchyr/rlkit/blob/master/rlkit/envs/wrappers.py#L41

__init__(self, env, clip = 1.0) special
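The rescaling such a wrapper applies can be sketched as a linear map from the clipped range onto the environment's action bounds. scale_action is a hypothetical helper written for illustration, not the wrapper's exact implementation.

```python
# Sketch of rescaling an action from [-clip, clip] onto the environment's
# [low, high] bounds, RLkit-style (illustrative only).
def scale_action(action, low, high, clip=1.0):
    # Map [-clip, clip] linearly onto [low, high].
    fraction = (action + clip) / (2.0 * clip)  # position in [0, 1]
    return low + fraction * (high - low)
```

The policy can then always emit actions in (-clip, clip) while the environment receives actions in its native range.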

Soon Deprecated

Info

The following wrappers will soon be deprecated because they are available in gym.

cherry.wrappers.logger_wrapper.Logger

Tracks and prints some common statistics about the environment.

__init__(self, env, interval = 1000, episode_interval = 10, title = None, logger = None) special
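The interval-based bookkeeping such a logger performs can be sketched as follows. IntervalLogger is a hypothetical stand-in, not cherry's Logger: it accumulates per-step rewards and emits a mean every interval steps.

```python
# Sketch of interval-based statistics tracking as a Logger-style
# wrapper might do (illustrative; not cherry's implementation).
class IntervalLogger:
    def __init__(self, interval=1000):
        self.interval = interval
        self.num_steps = 0
        self.rewards = []
        self.logged = []

    def update(self, reward):
        # Record a reward; every `interval` steps, log the mean
        # over the window and start a fresh window.
        self.num_steps += 1
        self.rewards.append(reward)
        if self.num_steps % self.interval == 0:
            mean = sum(self.rewards) / len(self.rewards)
            self.logged.append((self.num_steps, mean))
            self.rewards = []
```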