gym_os2r_real.runtimes¶
gym_os2r_real.runtimes.realtime_runtime¶
- class gym_os2r_real.runtimes.realtime_runtime.RealTimeRuntime(task_cls, agent_rate, task_mode, **kwargs)¶
- Bases: gym_ignition.base.runtime.Runtime

  Implementation of Runtime for real-time execution (an illustrative registration sketch is given at the end of this page).

  Warning

  This class is not yet complete.

  - calibrate()¶
 - close()¶
- Override close in your subclass to perform any necessary cleanup.

- Environments will automatically close() themselves when garbage collected or when the program exits.

- Return type
- None
 
 - get_state_info(state, action)¶
 - property model: scenario.bindings.monopod.Model¶
- Return type
- scenario.bindings.monopod.Model
 - render(mode='human', **kwargs)¶
- Renders the environment.

- The set of supported modes varies per environment. (And some environments do not support rendering at all.) By convention, if mode is:

- human: render to the current display or terminal and return nothing. Usually for human consumption.
- rgb_array: Return a numpy.ndarray with shape (x, y, 3), representing RGB values for an x-by-y pixel image, suitable for turning into a video.
- ansi: Return a string (str) or StringIO.StringIO containing a terminal-style text representation. The text can include newlines and ANSI escape sequences (e.g. for colors).

- Note

- Make sure that your class's metadata 'render.modes' key includes the list of supported modes. It's recommended to call super() in implementations to use the functionality of this method.
 - Parameters
- mode (str) – the mode to render with 
- Example:

      class MyEnv(Env):
          metadata = {'render.modes': ['human', 'rgb_array']}

          def render(self, mode='human'):
              if mode == 'rgb_array':
                  return np.array(...)  # return RGB frame suitable for video
              elif mode == 'human':
                  ...  # pop up a window and render
              else:
                  super(MyEnv, self).render(mode=mode)  # just raise an exception
 
 
 - Return type
- None
 
 - reset()¶
- Resets the environment to an initial state and returns an initial observation.

- Note that this function should not reset the environment's random number generator(s); random variables in the environment's state should be sampled independently between multiple calls to reset(). In other words, each call of reset() should yield an environment suitable for a new episode, independent of previous episodes.

- Returns
- the initial observation. 
- Return type
- observation (object) 
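A minimal sketch of the reset contract above, using the legacy Gym API this class targets; the environment id and the assumption that importing gym_os2r_real registers it are illustrative, not guaranteed by this module:

```python
import gym
import gym_os2r_real  # assumed to register the real-time environments on import

env = gym.make("RealMonopod-v0")  # hypothetical environment id

# Seeding happens once, outside reset(); reset() itself must not reseed
# the environment's random number generators.
env.seed(42)

obs_episode_1 = env.reset()  # start of a new episode
obs_episode_2 = env.reset()  # another episode, independent of the first
```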
 
 - step(action)¶
- Run one timestep of the environment's dynamics. When the end of an episode is reached, you are responsible for calling reset() to reset this environment's state.

- Accepts an action and returns a tuple (observation, reward, done, info).

- Parameters
- action (object) – an action provided by the agent 
- Returns
- observation (object): agent's observation of the current environment
- reward (float): amount of reward returned after previous action
- done (bool): whether the episode has ended, in which case further step() calls will return undefined results
- info (dict): contains auxiliary diagnostic information (helpful for debugging, and sometimes learning)
- Return type
- tuple (observation, reward, done, info)
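A short control loop illustrating the (observation, reward, done, info) contract described above; the environment id and the random action sampling are illustrative assumptions, not part of this module:

```python
import gym
import gym_os2r_real  # assumed to register the real-time environments on import

env = gym.make("RealMonopod-v0")  # hypothetical environment id

observation = env.reset()
done = False
total_reward = 0.0

while not done:
    action = env.action_space.sample()  # placeholder for a real policy
    observation, reward, done, info = env.step(action)
    total_reward += reward

env.close()
```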
 
 - timestamp()¶
- Return the timestamp associated with the execution of the environment.

- In real-time environments, the timestamp is the time read from the host system. In simulated environments, the timestamp is the simulated time, which might not match real time when the real-time factor is different from 1.

- Return type
- float
- Returns
- The current environment timestamp. 
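A sketch of how timestamp() could be used to check the real-time pacing of a single step; the environment id is again a hypothetical placeholder:

```python
import gym
import gym_os2r_real  # assumed to register the real-time environments on import

env = gym.make("RealMonopod-v0")  # hypothetical environment id
env.reset()

before = env.timestamp()  # host wall-clock time in a real-time runtime
env.step(env.action_space.sample())
after = env.timestamp()

# If the runtime enforces the agent rate, this should be roughly 1 / agent_rate.
print(f"step took {after - before:.4f} s")
```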
 
 - property world: scenario.bindings.monopod.World¶
- Return type
- scenario.bindings.monopod.World
 
- gym_os2r_real.runtimes.realtime_runtime.eprint(*args, **kwargs)¶
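As noted in the class description above, here is an illustrative sketch of how RealTimeRuntime is typically exposed through gym's registry, following the usual gym-ignition pattern; the task class, environment id, agent rate, and task_mode value below are assumptions for illustration, not names defined by this module:

```python
from gym.envs.registration import register

# Hypothetical task class implementing gym_ignition.base.task.Task;
# the actual task implementations live in the companion gym_os2r package.
from gym_os2r.tasks.monopod import MonopodTask  # assumption

register(
    id="RealMonopod-v0",  # hypothetical environment id
    entry_point="gym_os2r_real.runtimes.realtime_runtime:RealTimeRuntime",
    kwargs={
        "task_cls": MonopodTask,  # task to run against the real robot
        "agent_rate": 100,        # agent control rate in Hz (assumed value)
        "task_mode": "free_hip",  # assumed task-mode name
    },
)

# gym.make("RealMonopod-v0") would then build the runtime, which constructs
# the task from task_cls, task_mode, and any remaining kwargs.
```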