class PGBuffer [source]
PGBuffer(obs_dim:Union[tuple,int], act_dim:Union[tuple,int], size:int, gamma:Optional[float]=0.99, lam:Optional[float]=0.95)
A buffer for storing trajectories experienced by an agent interacting with the environment, and using Generalized Advantage Estimation (GAE-Lambda) for calculating the advantages of state-action pairs.
This class was written by Joshua Achiam at OpenAI. It was adapted here to use PyTorch Tensors instead of NumPy arrays for the observations and actions.
Args:
- obs_dim (tuple or int): Dimensionality of input feature space.
- act_dim (tuple or int): Dimensionality of action space.
- size (int): Buffer size.
- gamma (float): Reward discount factor.
- lam (float): Lambda parameter for GAE-Lambda advantage estimation.
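A minimal construction sketch is shown below. The use of gym, the environment name, and the sizing choices are illustrative assumptions, not part of the documented API; PGBuffer is assumed to be imported from this package.

```python
# Hedged sketch: building a PGBuffer sized for one epoch of on-policy data.
import gym

env = gym.make("Pendulum-v1")            # any continuous-control task; name is illustrative
obs_dim = env.observation_space.shape    # e.g. (3,)
act_dim = env.action_space.shape         # e.g. (1,)

steps_per_epoch = 4000                   # assumed hyperparameter
buf = PGBuffer(obs_dim, act_dim, size=steps_per_epoch, gamma=0.99, lam=0.95)
```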
PGBuffer.store [source]
PGBuffer.store(obs:Tensor, act:Tensor, rew:Union[int,float,array], val:Union[int,float,array], logp:Union[float,array])
Append one timestep of agent-environment interaction to the buffer.
Args:
- obs (torch.Tensor): Current observation to store.
- act (torch.Tensor): Current action.
- rew (int or float or np.array): Current reward from environment.
- val (int or float or np.array): Value estimate for the current state.
- logp (float or np.array): Log probability of the chosen action under the current policy distribution.
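A hedged sketch of storing one timestep during rollout collection. `policy` is a hypothetical callable returning an action, a value estimate, and a log-probability as tensors, and the classic 4-tuple gym step API is an assumption.

```python
import torch

obs = torch.as_tensor(env.reset(), dtype=torch.float32)   # old-style gym reset (assumption)
act, val, logp = policy(obs)                               # hypothetical actor-critic call
next_obs, rew, done, _ = env.step(act.numpy())             # classic 4-tuple step API (assumption)
buf.store(obs, act, rew, val, logp)                        # append this timestep to the buffer
obs = torch.as_tensor(next_obs, dtype=torch.float32)
```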
PGBuffer.get [source]
PGBuffer.get()
Call this at the end of an epoch to get all of the data from the buffer, with advantages appropriately normalized (shifted to have mean zero and std one). Also, resets some pointers in the buffer.
Returns:
- obs_buf (torch.Tensor): Buffer of observations collected.
- act_buf (torch.Tensor): Buffer of actions taken.
- adv_buf (torch.Tensor): Advantage calculations.
- ret_buf (torch.Tensor): Buffer of computed returns (rewards-to-go).
- logp_buf (torch.Tensor): Buffer of log probabilities of selected actions.
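For example, at the end of an epoch the data can be unpacked in the documented order; the loss expression below is only a hypothetical use of the normalized advantages.

```python
# The buffer resets its pointers on this call, so keep the results in local tensors.
obs_b, act_b, adv_b, ret_b, logp_b = buf.get()

# Hypothetical policy-gradient loss using the normalized advantages.
new_logp = policy_log_prob(obs_b, act_b)   # hypothetical helper, not part of this package
pi_loss = -(new_logp * adv_b).mean()
```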
PGBuffer.finish_path [source]
PGBuffer.finish_path(last_val:Union[int,float,array,NoneType]=0)
Call this at the end of a trajectory, or when one gets cut off by an epoch ending. This looks back in the buffer to where the trajectory started, and uses rewards and value estimates from the whole trajectory to compute advantage estimates with GAE-Lambda, as well as compute the rewards-to-go for each state, to use as the targets for the value function. The "last_val" argument should be 0 if the trajectory ended because the agent reached a terminal state (died), and otherwise should be V(s_T), the value function estimated for the last state. This allows us to bootstrap the reward-to-go calculation to account for timesteps beyond the arbitrary episode horizon (or epoch cutoff).
Args:
- last_val (int or float or np.array): Value estimate V(s_T) for the last state, used to bootstrap the rewards-to-go; should be 0 if the trajectory ended at a terminal state.
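A hedged sketch of how finish_path might be called inside a collection loop; `done`, `epoch_ended`, and `policy` are assumed loop variables and helpers from the surrounding training code, not part of this class.

```python
if done:
    # Trajectory reached a terminal state: no bootstrap value needed.
    buf.finish_path(last_val=0)
elif epoch_ended:
    # Trajectory cut off by the epoch boundary: bootstrap with V(s_T).
    _, last_val, _ = policy(obs)                 # hypothetical value estimate for the last state
    buf.finish_path(last_val=last_val.item())    # convert to a plain float per the signature
```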
class ReplayBuffer [source]
ReplayBuffer(obs_dim:Union[tuple,int], act_dim:Union[tuple,int], size:int) :: PGBuffer
A replay buffer for off-policy RL agents.
This class is borrowed from OpenAI's SpinningUp package: https://spinningup.openai.com/en/latest/
Args:
- obs_dim (tuple or int): Dimensionality of input feature space.
- act_dim (tuple or int): Dimensionality of action space.
- size (int): Buffer size.
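A minimal construction sketch, under the same gym-based assumptions as above:

```python
# Hedged sketch: a large off-policy buffer for the same illustrative environment.
replay = ReplayBuffer(obs_dim=env.observation_space.shape,
                      act_dim=env.action_space.shape,
                      size=int(1e6))
```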
ReplayBuffer.store [source]
ReplayBuffer.store(obs:Tensor, act:Union[float,int,Tensor], rew:Union[float,int], next_obs:Tensor, done:bool)
Append one timestep of agent-environment interaction to the buffer.
Args:
- obs (torch.Tensor): Current observations.
- act (float or int or torch.Tensor): Current action.
- rew (float or int): Current reward.
- next_obs (torch.Tensor): Observations from next environment step.
- done (bool): Whether the episode has reached a terminal state.
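A hedged sketch of filling the buffer with random-policy transitions; the gym API usage is an assumption as before.

```python
obs = torch.as_tensor(env.reset(), dtype=torch.float32)
for _ in range(1000):
    act = env.action_space.sample()                          # random exploration, for illustration
    next_obs, rew, done, _ = env.step(act)                   # classic 4-tuple step API (assumption)
    next_obs = torch.as_tensor(next_obs, dtype=torch.float32)
    replay.store(obs, act, rew, next_obs, done)              # append the transition
    obs = next_obs if not done else torch.as_tensor(env.reset(), dtype=torch.float32)
```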
ReplayBuffer.sample_batch [source]
ReplayBuffer.sample_batch(batch_size:Optional[int]=32)
Sample a batch of agent-environment interactions from the buffer.
Args:
- batch_size (int): Number of interactions to sample for the batch.
Returns:
- Tuple of batch tensors.
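For example, an off-policy update loop might draw minibatches like this; what is done with each batch is left out, since it depends on the agent, and `num_updates` is an assumed hyperparameter.

```python
for _ in range(num_updates):
    batch = replay.sample_batch(batch_size=64)
    # `batch` is a tuple of tensors (see Returns above); feed it to the agent's update step.
```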
ReplayBuffer.get [source]
ReplayBuffer.get()
Get all contents of the buffer.
Returns:
- List of PyTorch Tensors: the full contents of the buffer.
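A short hedged example: pulling the whole buffer, e.g. to inspect the collected transitions offline.

```python
all_data = replay.get()
for tensor in all_data:
    print(tensor.shape)      # one tensor per stored field
```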