
gymnasium's People

Contributors

christopherhesse, elliottower, gdb, gianlucadecola, iaroslav-ai, ikamensh, instance01, jessefarebro, jietang, jjshoots, jkterry1, jonasschneider, joschu, kallinteris-andreas, markus28, mgoulao, nottombrown, olegklimov, ppaquette, pseudo-rnd-thoughts, pzhokhov, rafaelcosman, redtachyon, siemanko, tlbtlbtlb, trigaten, tristandeleu, vwxyzjn, younik, zuoxingdong


gymnasium's Issues

[Question] What is the recommended way to return multiple images as observation?

Question

I am writing my very first custom environment and have run into some trouble. The environment needs to return 2 images as its observation. Since they logically represent different things, it is not reasonable to concatenate them side by side.

My current solution is to use a Box space that has shape (96,96,3,2) and returns stacked images. However, the env_checker gives a warning: WARN: A Box observation space has an unconventional shape (neither an image, nor a 1D vector).

I have come up with several solutions, but I am not sure which is the recommended one:

  1. Just set disable_env_checker to True when registering the environment using gym.register. I do not think this is actually a solution; many of the other checks provided by the env_checker are still helpful and should not be disabled because of this problem.
  2. Stack the 2 images in the channel dimension, i.e. change the output shape to (96,96,6). Since the current env_checker does not check the number of channels, it should pass the check. But it seems really weird to me to have an 'image' with 6 channels whose first 3 channels are independent of the last 3.
  3. Use one of the composite spaces such as Dict or Tuple. This is a nice choice for writing an environment, but I have to first wrap the images into a Dict or something else in the environment and then transform them back into a tensor of shape (96,96,3,2) inside my model, which seems kind of redundant.

Are there any suggestions on which solution is recommended or a better choice, or are there any other ways that can solve this?
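
For reference, a minimal sketch of option 3, assuming the two images are 8-bit RGB frames (the key names are illustrative):

import numpy as np
from gymnasium.spaces import Box, Dict

observation_space = Dict(
    {
        "image_a": Box(low=0, high=255, shape=(96, 96, 3), dtype=np.uint8),
        "image_b": Box(low=0, high=255, shape=(96, 96, 3), dtype=np.uint8),
    }
)

# Inside the model, the entries can be stacked back into a (96, 96, 3, 2) tensor:
# np.stack([obs["image_a"], obs["image_b"]], axis=-1)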

[Proposal] Seed for `generate_random_map()` in FrozenLake?

Proposal

Could be useful to be able to pass a seed to the generate_random_map() function in FrozenLake.

Motivation

Being able to get reproducible generated maps.

Pitch

Being able to call generate_random_map(size=8, seed=123).
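
A minimal sketch of the proposed behaviour, assuming the seed argument were wired to the map generator's RNG:

from gymnasium.envs.toy_text.frozen_lake import generate_random_map

# Hypothetical once the proposal lands: the same seed yields the same map.
map_a = generate_random_map(size=8, seed=123)
map_b = generate_random_map(size=8, seed=123)
assert map_a == map_b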

Alternatives

No response

Additional context

No response

Checklist

  • I have checked that there is no similar issue in the repo

new simple wrapper for external environment parameters

Situation: sometimes there are existing benchmarks using environments in code that you don't want to, or can't, modify, and you have a replacement environment {env_name} that requires some extra parameters for the make function.

A simple (and, since I am using it and am happy with it, I'd say convenient) approach is just to write a .json config file (e.g. {envname}_config.json or {envname}_params.json), for example:

{
    "rollout_length": 1000,
    "strategy": "offline",
    "p_action": 0.0
}

The idea is that if your env is created without arguments, the parameters are read from the .json file.

e.g. if constructing the environment raises

TypeError: f() missing 3 required positional arguments: 'a', 'b', and 'c'

then the init function of the wrapper reads the parameters from the JSON:

with open(f"{envname}_config.json", "r") as f:
    env_data = json.load(f)

self.env = gym.make(env_name,**env_data)

This is just the idea; there are many ways of implementing it.
I am just using the current path as the location of the {env_name}_config.json.
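
Putting the pieces together, a minimal sketch of such a wrapper (the class name JsonConfigEnv is illustrative):

import json

import gymnasium as gym


class JsonConfigEnv(gym.Wrapper):
    """Illustrative wrapper: fall back to a JSON config when no kwargs are given."""

    def __init__(self, env_name, **kwargs):
        if not kwargs:
            # No arguments supplied: read them from {env_name}_config.json
            with open(f"{env_name}_config.json", "r") as f:
                kwargs = json.load(f)
        super().__init__(gym.make(env_name, **kwargs))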

[Proposal] Pretty print environment registry

Proposal

The only way of viewing all of the environments that can be created is with gymnasium.envs.registry.keys().
However, this is a very ugly way to see all the environment ids.

There should be a pretty print option that makes it easier to view all of the environments.
In addition, we should consider shortening the import statement to gymnasium.registry
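
A minimal sketch of what such a pretty printer could look like, grouping ids by namespace (the function name pprint_registry is illustrative):

from collections import defaultdict

import gymnasium as gym


def pprint_registry():
    """Group registered environment ids by namespace and print them."""
    by_namespace = defaultdict(list)
    for env_spec in gym.envs.registry.values():
        by_namespace[env_spec.namespace or "(root)"].append(env_spec.id)
    for namespace, env_ids in sorted(by_namespace.items()):
        print(f"=== {namespace} ===")
        print(", ".join(sorted(env_ids)))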

[Bug] Allow Wrapper metadata to be different from environment metadata

The HumanRendering and RenderCollection wrappers enable additional render_modes, which should be reflected in the metadata.
Currently, this is not the case.

I was originally thinking about a complex method of overwriting the dict implementation such that the keys are linked to the environment. However, as the metadata is meant to be static, I think we can just deepcopy the lower env's metadata and add the wrapper's additional metadata.
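
A minimal sketch of the deepcopy approach, using HumanRendering as the example (this is not the actual implementation):

import copy

import gymnasium as gym


class HumanRendering(gym.Wrapper):
    def __init__(self, env):
        super().__init__(env)
        # Copy the wrapped env's metadata so the wrapper can extend it safely.
        self.metadata = copy.deepcopy(env.metadata)
        self.metadata["render_modes"] = list(self.metadata.get("render_modes", [])) + ["human"]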

[Proposal] Env.metadata["render_modes"] type should be set

Proposal

Allow metadata["render_modes"] to be a set.
Currently, there are two checks that forbid this: one during make, where metadata["render_modes"] is checked to be a Sequence (and a warning is shown otherwise), and the other in the PassiveEnvChecker:

if not isinstance(render_modes, (list, tuple)):

where it is checked to be either a list or a tuple and an error is thrown otherwise.
These checks should instead only require metadata["render_modes"] to be an Iterable.

After allowing set in checks, we should make metadata["render_modes"] a set in our environments and in the examples in docs (people use them as blueprints for making new environments).
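
A minimal sketch of set-based metadata and the relaxed check:

from collections.abc import Iterable

metadata = {"render_modes": {"human", "rgb_array"}, "render_fps": 30}

# The checks would accept any Iterable rather than only list or tuple.
assert isinstance(metadata["render_modes"], Iterable)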

Motivation

Set is more appropriate for metadata["render_modes"] since order doesn't matter and repetitions don't make any sense.

Pitch

No response

Alternatives

Leave everything as it is.

Additional context

Needed changes can be seen here: main...younik:Gymnasium:render-modes-set

Checklist

  • I have checked that there is no similar issue in the repo

[Bug Report] ValueError: env.step(action) #3138

Describe the bug

When setting up my envs, I get ValueError messages from my env.step(env.action_space.sample()) call.
If you can help, that would be greatly appreciated! (macOS, Jupyter Lab)

Here's the Code:

Code example

import gym_super_mario_bros
import gymnasium as gym
from nes_py.wrappers import JoypadSpace
from gym_super_mario_bros.actions import SIMPLE_MOVEMENT

env = gym_super_mario_bros.make('SuperMarioBros-v0')
env = JoypadSpace(env, SIMPLE_MOVEMENT)

# Create a flag - restart or not
done = True
for step in range(100000):
    if done:
        env.reset()
        # do random actions
    state, reward, done, info = env.step(env.action_space.sample())
    # show the game on the screen
    env.render()
env.close()

System info

pip3 install gym-super-mario-bros==7.3.0 nes_py
gymnasium.version = 0.26.3
python==3.7.0

Additional context

Error Message:
ValueError Traceback (most recent call last)
Cell In [45], line 5
3 if done:
4 state = env.reset()
----> 5 state, reward, done, info = env.step(env.action_space.sample())
6 env.render()
8 env.close()

File /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/nes_py/wrappers/joypad_space.py:74, in JoypadSpace.step(self, action)
59 """
60 Take a step using the given action.
61
(...)
71
72 """
73 # take the step and record the output
---> 74 return self.env.step(self._action_map[action])

File /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/gym/wrappers/time_limit.py:50, in TimeLimit.step(self, action)
39 def step(self, action):
40 """Steps through the environment and if the number of steps elapsed exceeds max_episode_steps then truncate.
41
42 Args:
(...)
48
49 """
---> 50 observation, reward, terminated, truncated, info = self.env.step(action)
51 self._elapsed_steps += 1
53 if self._elapsed_steps >= self._max_episode_steps:

ValueError: not enough values to unpack (expected 5, got 4)
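
For context, the error comes from mixing API versions: nes_py returns the old four-value step tuple, while the installed gym's TimeLimit wrapper unpacks the new five-value one. Code written against the new API looks like this sketch:

import gymnasium as gym

env = gym.make("CartPole-v1")
env.reset(seed=0)
# gym >= 0.26 / gymnasium step API returns five values:
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
done = terminated or truncated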

Checklist

  • I have checked that there is no similar issue in the repo

[Question] Why does env.step in the HalfCheetah environment return 5 values if one of them is always False?

Question

This is code directly copied from this repo for the HalfCheetah v4 environment, specifically this:

def step(self, action):
        x_position_before = self.data.qpos[0]
        self.do_simulation(action, self.frame_skip)
        x_position_after = self.data.qpos[0]
        x_velocity = (x_position_after - x_position_before) / self.dt

        ctrl_cost = self.control_cost(action)

        forward_reward = self._forward_reward_weight * x_velocity

        observation = self._get_obs()
        reward = forward_reward - ctrl_cost
        terminated = False
        info = {
            "x_position": x_position_after,
            "x_velocity": x_velocity,
            "reward_run": forward_reward,
            "reward_ctrl": -ctrl_cost,
        }

        if self.render_mode == "human":
            self.render()
        return observation, reward, terminated, False, info

That function returns observation, reward, terminated, and info, which all makes sense, but then there is that False that I don't understand. What's the purpose of that? What am I missing?
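
For context, that False is the truncated flag: HalfCheetah has no failure states, so terminated is always False, and truncation (the time limit) is applied by the TimeLimit wrapper outside the environment. A minimal sketch of consuming the five-value tuple:

import gymnasium as gym

env = gym.make("HalfCheetah-v4")
env.reset(seed=0)
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
done = terminated or truncated  # the episode ends on either signal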

[Proposal] Fix pyright (add type hinting to the rest of the project)

Proposal

We added pyright to our pre-commit process a while ago and, due to a number of checks failing, we decided to turn off several of its features. However, we found that this accidentally turned off "reportGeneralTypeIssues", which allowed obviously type-breaking code to be added.

Therefore, the proposal is to remove the problematic pyright options, which can be found in pyproject.toml:

# reportUnknownMemberType = "warning"  # -> raises 6035 warnings
# reportUnknownParameterType = "warning"  # -> raises 1327 warnings
# reportUnknownVariableType = "warning"  # -> raises 2585 warnings
# reportUnknownArgumentType = "warning"  # -> raises 2104 warnings
reportGeneralTypeIssues = "none"  # -> commented out raises 489 errors
reportUntypedFunctionDecorator = "none"  # -> pytest.mark.parameterize issues

This process can be done partially, fixing one folder at a time using the ignore files in pyright to focus on one section at a time.
However, we do not want to add # type: ignore everywhere across the project if we can help it.

If you want to add type hints for one section of the project, please add a comment or DM me on Discord.

[Proposal] Add python 3.11 support

Python 3.11 is due to be released in October 2022 and provides a number of speedups, up to 30%.
As Gymnasium sits fairly low in the programming stack, we should look to support it soon.

[Question] Using `make` with ALE/Atari envs throws "Namespace ALE not found."

Question

Hey everyone, awesome work on the new repos and the gymnasium/gym (>=0.26) APIs! We are very excited to enhance RLlib to support these very soon. The current PR is already in good shape (I literally had to touch every single line of RLlib :D). However, one remaining problem for testing this is that Atari envs don't seem to run on gymnasium on my Mac (or in our Linux CI). Here is my repro:

$ # clean conda env
$ pip install gymnasium[atari] gymnasium[accept-rom-license] ale_py autorom
$ autorom
>> Y

$ python
>>> import gymnasium as gym
>>> gym.make("ALE/Pong-v5")

Throws error:

Traceback (most recent call last):
  File "/Users/sven/opt/anaconda3/envs/ray/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3251, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-1-a6c3b954198a>", line 1, in <module>
    make("ALE/Pong-v5")
  File "/Users/sven/opt/anaconda3/envs/ray/lib/python3.8/site-packages/gymnasium/envs/registration.py", line 569, in make
    _check_version_exists(ns, name, version)
  File "/Users/sven/opt/anaconda3/envs/ray/lib/python3.8/site-packages/gymnasium/envs/registration.py", line 219, in _check_version_exists
    _check_name_exists(ns, name)
  File "/Users/sven/opt/anaconda3/envs/ray/lib/python3.8/site-packages/gymnasium/envs/registration.py", line 187, in _check_name_exists
    _check_namespace_exists(ns)
  File "/Users/sven/opt/anaconda3/envs/ray/lib/python3.8/site-packages/gymnasium/envs/registration.py", line 182, in _check_namespace_exists
    raise error.NamespaceNotFound(f"Namespace {ns} not found. {suggestion_msg}")
gymnasium.error.NamespaceNotFound: Namespace ALE not found. Have you installed the proper package for ALE?

Anything else I'm missing here?

The same procedure works 100% fine when using gym instead of gymnasium.
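
One way to narrow this down is to list the namespaces that actually got registered in the gymnasium registry:

import gymnasium as gym

# "ALE" should appear here once a gymnasium-compatible ale_py is installed.
print(sorted({spec.namespace for spec in gym.envs.registry.values() if spec.namespace is not None}))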

[Proposal] Remove python 3.6 and add Jax as a core dependency

Python 3.6 is no longer supported by the Python foundation, even for critical bugs.
As a result, a number of projects we rely on no longer provide Python 3.6 releases: pytest, mujoco, jax.

Therefore, I propose that after the v26 release, we remove support for Python 3.6.
At the same time, we can add Jax as a core dependency for the project, to be used in the updated wrappers and envs.

[Proposal] Add more content pages

Proposal

Below the introduction section on the website, we should include several pages on critical topics that give short explanations, less example-based than the tutorials.

I believe we need to have the following pages (some of these exist already):

  1. Basic usage - A basic explanation of how to use Gymnasium
  2. Compatibility with OpenAI Gym - How to use Gym compatible environments
  3. v22 to v26 Migration Guide - A migration guide for users updating code from v22 to v26, as necessary for Gymnasium
  4. Registering an environment - How to register / make an environment, including entry_points for environments
  5. Training agents for environments - How to train an agent for environments, with links to cleanrl, etc
  6. Speeding up environments - A discussion on how to optimise environments, particularly through vectorisation
  7. Recording environments - How to record environments

[Proposal/Bug Fix] Change truncation to termination in Car Racing after finishing a lap

Proposal

Currently in Car Racing, when the agent finishes a lap, the environment is marked as truncated instead of terminated. This seems like a really odd choice to me.

if self.tile_visited_count == len(self.track) or self.new_lap:
    # Truncation due to finishing lap
    # This should not be treated as a failure
    # but like a timeout
    truncated = True

This was added in openai/gym#2890 alongside an actual fix to the environment logic. I suspect the review focused on the bug fix and overlooked the undiscussed change, so it slipped through the cracks. (BTW now you see why I'm always being annoying about out-of-scope changes in PRs and similar stuff)

Finishing a lap is a very clear example of episode termination. You reach a terminal state after making a full loop, and the episode ends. It should never have been marked as truncation.

The annoying part is that the environment version was bumped for this (but also for the actual bug), so we'll have to bump it again. But I can't really see any justification for keeping this marked as truncation, which is inconsistent with the entire rationale for what truncation is meant to be (reaching the time limit). The explanation in some of the comments was that finishing the lap shouldn't be treated as a failure, but termination does not imply failure. Failure or success is defined by the reward. Termination says "You're done, nothing more to do". Truncation says "You took too long, try again".
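
A minimal sketch of the proposed change:

if self.tile_visited_count == len(self.track) or self.new_lap:
    # Completing the lap is a terminal state, not a timeout
    terminated = True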

@pseudo-rnd-thoughts @jkterry1 @araffin

[Proposal] Improper documentation of Deprecated Methods in Gym Docs (Typo)

Proposal

I have been going through the Gym documentation, learning the basics from the Docs. I did notice a minor inconvenience with the documentation on the Core page. As of gym version 0.26, the done return value has been deprecated.

This is not represented elegantly in the documentation; it looks a bit clunky with this style of expressing deprecation.

[screenshot of the rendered documentation]

This is due to the usage of .. autofunction:: gymnasium.Env.step in Gymnasium/docs/api/env.md

Sphinx's autofunction automatically pulls documentation from the doc-string. In the core.py file of Gym, the docstring is defined as such:
[screenshot of the docstring]

We can make the documentation a bit cleaner by using the .. deprecated:: directive. It is clearly defined in the Sphinx docs.
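
A minimal sketch of how the docstring could use the directive (the version number and wording are illustrative):

def step(self, action):
    """Run one timestep of the environment's dynamics.

    .. deprecated:: 0.26
        The ``done`` return value was split into ``terminated`` and ``truncated``.
    """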

This is only a slight blemish, but I would still appreciate it being formatted properly.

Motivation

No response

Pitch

No response

Alternatives

No response

Additional context

No response

Checklist

  • I have checked that there is no similar issue in the repo

[Bug Report] Uncaught integer overflow on MultiDiscrete.flatten()

Describe the bug
When flattening a large MultiDiscrete space with a small-sized dtype, an integer overflow occurs.

Code example

import gym
import numpy as np

space = gym.spaces.MultiDiscrete([101, 101, 101, 101], dtype=np.int8)
x = np.array([1, 1, 1, 1])

gym.spaces.flatten(space, x)
Traceback (most recent call last):
  File "C:\Users\user\AppData\Roaming\JetBrains\PyCharm2022.2\scratches\scratch.py", line 7, in <module>
    gym.spaces.flatten(space, x)
  File "C:\Users\user\Anaconda3\envs\TRG\lib\functools.py", line 888, in wrapper
    return dispatch(args[0].__class__)(*args, **kw)
  File "C:\Users\user\Anaconda3\envs\TRG\lib\site-packages\gym\spaces\utils.py", line 90, in _flatten_multidiscrete
    onehot = np.zeros((offsets[-1],), dtype=space.dtype)
ValueError: negative dimensions are not allowed
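
A minimal sketch of how the overflow can arise (not necessarily the exact utils.py code): the cumulative offsets are stored in the space's small dtype, so the running sum wraps negative.

import numpy as np

nvec = np.array([101, 101, 101, 101], dtype=np.int8)
offsets = np.zeros(nvec.size + 1, dtype=nvec.dtype)  # int8 storage
offsets[1:] = np.cumsum(nvec)  # 404 wraps around when cast back to int8
print(offsets[-1])  # negative, hence "negative dimensions are not allowed"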

Checklist

  • I have checked that there is no similar issue in the repo (required)

[Bug Report] SyncVectorEnv type-hint bug

Describe the bug

In gymnasium/vector/sync_vector_env.py line 32, the type hint of env_fns is Iterator[Callable[[], Env]].
The type hint should be Iterable.

Ref docs of mypy

  • Iterator needs to implement __next__ and __iter__ functions
  • Iterable only needs to implement the __iter__ function

env_fns is only used in gymnasium/vector/sync_vector_env.py Line 52 (self.envs = [env_fn() for env_fn in env_fns]). For this line of code, the Iterable type is sufficient.

Iterator is more restrictive than Iterable, causing mypy to throw an error when you pass a list to it.

Code example

import gymnasium as gym

envs = gym.vector.SyncVectorEnv([lambda: gym.make("CartPole-v1") for _ in range(3)])

mypy error report:

❯ mypy a.py                           
a.py:3: error: Argument 1 to "SyncVectorEnv" has incompatible type "List[Callable[[], Env[Any, Any]]]"; expected "Iterator[Callable[[], Env[Any, Any]]]"  [arg-type]
a.py:3: note: "list" is missing following "Iterator" protocol member:
a.py:3: note:     __next__
Found 1 error in 1 file (checked 1 source file)
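
The proposed fix is a one-word change to the annotation, sketched here as a standalone helper rather than the actual class:

from typing import Callable, Iterable

from gymnasium import Env


def make_envs(env_fns: Iterable[Callable[[], Env]]) -> list[Env]:
    # Iterable is sufficient: env_fns is only iterated once to build the list.
    return [env_fn() for env_fn in env_fns]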

System info

  • pip install
  • gymnasium 0.26.3
  • python 3.9.13
  • mypy 0.991
  • macOS 13.0.1

Additional context

No response

Checklist

  • I have checked that there is no similar issue in the repo

[Proposal] Add transitional probabilities to Taxi and Cliff Walking toy text environments

Proposal

Only Frozen Lake among the toy text grid world environments implements transition probabilities.

Taxi is supposed to have them based on the previous documentation, but they have never been implemented; the environment always returns 1.0.

At the same time, Cliff Walking could also be set up to use transition probabilities using the same approach.

Motivation

Adding transition probabilities to Taxi will close out a TODO that has been on the list for a long time. It will also bring the environment in line with the source paper, the Fickle Taxi Task - Section 7.1 of Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition (https://www.jair.org/index.php/jair/article/view/10266/24463).

For Cliff Walking, it presents an opportunity to add depth to the environment, and since it uses the same approach, it would not add significantly more time or risk.

Pitch

Taxi
Add transition probabilities to the Taxi toy text environment:

  • leverage the approach from frozen_lake to supply a transition probability of 0.8 in the intended direction and 0.1 to each of the left and right of the intended direction for movement actions (see the sketch after this list).
  • the paper proposes that, once the taxi has picked up the passenger and moved one square away from the passenger's source location, the passenger changes their destination location with probability 0.3.
  • for taxi, transition probabilities for the pick up and drop off actions remain 1.0.
  • add arguments to enable/disable features:
    - is_rainy = True | False to enable transition probabilities on taxi movement, defaults to False.
    - fickle_passenger = True | False to enable the passenger to change their destination once picked up, defaults to False.

Cliff walking
Add transition probabilities to the Cliff Walking toy text environment by leveraging the approach from frozen_lake to supply a transition probability of 1/3 in the intended direction and 1/3 to each of the left and right of the intended direction for movement actions.

  • add arguments to enable/disable transition probabilities:
    - is_slippery = True | False to enable transition probabilities on player movement, defaults to False.

For both:

  • Update unit tests.
  • Update documentation.
  • Increment versions in registry.
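
A minimal sketch of the rainy movement sampling described above (the helper name and the assumption that the four movement actions can be rotated with +/-1 mod 4 are illustrative):

import numpy as np


def sample_rainy_move(rng: np.random.Generator, intended: int) -> int:
    # 0.8 intended direction, 0.1 left of intended, 0.1 right of intended.
    candidates = [intended, (intended - 1) % 4, (intended + 1) % 4]
    return int(rng.choice(candidates, p=[0.8, 0.1, 0.1]))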

Alternatives

  1. Do nothing. This misses an opportunity to make the toy_text environments consistent and more useful for beginner RL practitioners.

  2. Remove transition probabilities from taxi and/or cliff walking. In either case, prob would be removed from the returned info. This removes the need to complete the taxi work and will simplify ongoing maintenance.

Additional context

No response

Checklist

  • I have checked that there is no similar issue in the repo

[Proposal] Remove `ActionWrapper.reverse_action`

The ActionWrapper has a function reverse_action which is not implemented by any of the actual gymnasium action wrappers.

This is the commit that added reverse actions; it is 6 years old and I don't think it has been used since. openai/gym@5afbb71

Therefore, I think it is reasonable to remove it. However, before making a PR, I want to open an issue to hear people's ideas.

[Bug Report] Making a MujocoPy Environment Causes a ModuleNotFoundError

Describe the bug

Making a MujocoPy environment causes a ModuleNotFoundError for "mujoco" due to this line.
The extra requirement for "mujoco_py" in setup.py does not include the mujoco library, so this looks like a bug.

Code example

import gymnasium
gymnasium.make('Hopper-v2')

System info

I've installed Gymnasium using the following command:

pip install gymnasium[mujoco_py]

Gymnasium version: 0.26.3
Linux version: Ubuntu 22.04.1 LTS
Python 3.10.6

Additional context

No response

Checklist

  • I have checked that there is no similar issue in the repo

[Bug Report] Need to update `mujoco` version

Describe the bug

When trying to add dm_control, dependency resolution fails because the mujoco version is pinned at

"mujoco": ["mujoco==2.2", "imageio>=2.14.1"],

poetry add -G dm_control Shimmy dm_control
Using version ^0.1.0 for shimmy
Using version ^1.0.8 for dm-control

Updating dependencies
Resolving dependencies... (0.9s)

Because dm-control (1.0.8) depends on mujoco (>=2.3.0)
 and no versions of dm-control match >1.0.8,<2.0.0, dm-control (>=1.0.8,<2.0.0) requires mujoco (>=2.3.0).
So, because cleanrl depends on both mujoco (2.2) and dm-control (^1.0.8), version solving failed.

Solution

Update the mujoco version in the setup.py

Code example

No response

System info

No response

Additional context

No response

Checklist

  • I have checked that there is no similar issue in the repo

[Question] Multiple render modes

Question

Hello,

is it possible to enable multiple render modes at the same time? (apparently no, see below)

For instance, if you want to take a look at the env and record a video at the same time.

In gym 0.21 you could do:

# Retrieve the image
image = env.render(mode="rgb_array")
# Show the GUI
env.render(mode="human")

but now with gym 0.26+, render_mode passed to the constructor can only be a string, no?

Calling play() from gymnasium.utils.play will error if render_mode is not in ["rgb_array", "rgb_array_list"]

Describe the bug

I had made an assumption that from gymnasium.utils.play import play would support text rendering. It seems that was a bad assumption and that only pygame rendering is supported, i.e. render_mode in ["rgb_array", "rgb_array_list"].

However, if you pass a mode not in that list, e.g. render_mode="human", it is caught and a message is sent to the logger. But the script doesn't exit, and it crashes shortly after when it tries to obtain the video size with assert rendered is not None and isinstance(rendered, np.ndarray).

Code example

import gymnasium as gym
from gymnasium.utils.play import play

mapping = {"2": 1,  # Down.
           "4": 0,  # Left.
           "6": 2,  # Right.
           "8": 3,  # Up.
           }

play(gym.make('FrozenLake-v1', render_mode='human'), keys_to_action=mapping)
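
As a workaround, the same call works when a pygame-compatible mode is requested:

play(gym.make('FrozenLake-v1', render_mode='rgb_array'), keys_to_action=mapping)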

System info

Gymnasium installed with pip
gymnasium.version '0.26.3'
Python 3.10.6
Windows 10.

Additional context

No response

Checklist

  • I have checked that there is no similar issue in the repo

[Proposal] Add render return data to `env_render_passive_checker` and `check_env`

Proposal

Gymnasium has two ways of checking that the implementation of an environment is correct: the passive environment checker and the (invasive) environment checker.
These checkers currently check the rest of the environment accurately, but not the render return data.

There are TODOs at the locations where this should be added:

# TODO: Check that the result is correct
and
# todo: recreate the environment with a different render_mode for check that each work

[Proposal] Tutorials

Proposal

To encourage the use of Gymnasium and build up the RL community, I propose that a large range of tutorials be created.

This is a list of tutorials that could be made

  • Implementation of a custom environment
  • Frozenlake training results with different map sizes
  • Gymnasium vectorisation (gym.make_vec)
  • Training agents for the blackjack environment, it has Tuple observation space
  • DQN for atari implementation (doesn't need to be fast)
  • Train an agent with stable-baselines-3
  • Train an agent with tianshou
  • Training a deep RL agent with pytorch from scratch
  • Training a deep RL agent with jax from scratch
  • How to use the action sample masking, with example from Taxi
  • Car racing, comparing agents with continuous and discrete action spaces
  • Exploring the impact of bipedal walker hardcore parameter on agent performance
  • Experimenting with classic control reset options random state bounds
  • Add environment or example using the Graph space

[Proposal] Migrate from `setup.py` to `pyproject.toml`

Proposal

I'd like to propose to migrate from using setup.py to using pyproject.toml.

Motivation

Probably not super urgent, but since pyproject.toml should be the new standard and setup.py is now considered legacy (see pip documentation), it could be good to migrate at some point.

Pitch

  • Migrate all the content of setup.py to pyproject.toml
  • Have a single source of truth for all the dependencies in pyproject.toml, thus remove requirements.txt and test_requirements.txt

Checklist

  • I have checked that there is no similar issue in the repo (required)

Roadmap

This is a loose roadmap of our plans for major changes to Gymnasium:

December:

  • Experimental new wrappers
  • Experimental functional API
  • Python 3.11 support

February / March:

  • Official Conda packaging
  • Add Experimental vector API
  • Add full testing for experimental wrappers
  • Add Experimental vector wrappers
  • Add initial support for Minari
  • Release v0.28.0

April:

  • Fix all bugs and update documentation for experimental features
  • Add functional versions of all gymnasium environments
  • Make initial release of Brax environments
  • Extensive envpool integration for creating vectorized environments
  • Release v0.29.0 as an intermediate version with experimental functional, new wrappers and vectors to root

May:

  • Release v0.30.0 with old wrapper and vector removed

June:

  • Move Box2D and MuJoCo environments to separate repos for reproducibility
  • 1.0 release

RecordVideo not recording any videos anymore in 0.26.x

Describe the bug

Hello,

since the change to the render_mode flag in 0.26.0, gym.wrappers.RecordVideo does not work anymore.

When recording the video we are getting the following log message:

.../lib/python3.8/site-packages/gym/wrappers/monitoring/video_recorder.py:59: UserWarning: WARN: Disabling video recorder because environment <OrderEnforcing<PassiveEnvChecker<AtariEnv<ALE/Pong-v5>>>> was not initialized with any compatible video mode between rgb_array and rgb_array_list

We are setting the render_mode when creating the environment, as shown in the blog post for the 0.26.0 release, but the render_mode flag no longer gets set, leading to the warning and videos not being recorded.

Here is a code sample that can help with replicating the issue:

Code example

import gym

env = gym.make("ALE/Pong-v5", render_mode="rgb_array")

env = gym.wrappers.RecordVideo(
  env,
  video_folder='video',
  step_trigger=lambda x: x % 100 == 0
)

env.reset()
for t in range(100):
  action = env.action_space.sample()
  observation, reward, terminated, truncated, info = env.step(action)

env.close()

System info

Gym 0.26.2, Python 3.8.10.

Additional context

No response

Checklist

  • I have checked that there is no similar issue in the repo

[Proposal] Wrapper rewrite

Proposal

Gymnasium already contains a large collection of wrappers, but we believe that the wrappers can be improved to

  1. Support arbitrarily complex observation / action spaces. As RL has advanced, action and observation spaces are becoming more complex and the current wrappers were not implemented with these spaces in mind.
  2. Support for numpy, jax and pytorch data. With hardware-accelerated environments, e.g. Brax, written in Jax, and similar pytorch-based programs, numpy is not the only game in town anymore. Therefore, these upgrades will use Jumpy for calling numpy, jax or torch depending on the data.
  3. More wrappers. Projects like SuperSuit aimed to bring more wrappers for RL; however, many of these wrappers can now be moved into Gymnasium.
  4. Versioning. Like environments, the implementation details of a wrapper can change agent performance. Therefore, we propose adding version numbers to all wrappers.
  5. In v28, we aim to rewrite VectorEnv to not inherit from Env; as a result, new vectorised versions of the wrappers will be provided.

Motivation

No response

Pitch

Lambda Observation Wrappers - wrappers.lambda_observation

Old name                | New name              | func tree struct | vector version
----------------------- | --------------------- | ---------------- | ---------------------------
TransformObservation    | LambdaObservation     | -                | VectorLambdaObservation
FilterObservation       | FilterObservation     | y                | vectorise
FlattenObservation      | FlattenObservation    | x                | vectorise
GrayScaleObservation    | GrayscaleObservation  | y                | vectorise
PixelObservationWrapper | PixelObservation      | x                | vectorise
ResizeObservation       | ResizeObservation     | y                | vectorise
-                       | ReshapeObservation    | y                | vectorise
-                       | RescaleObservation    | y                | vectorise
-                       | DTypeObservation      | y                | vectorise
NormalizeObservation    | NormalizeObservation  | x                | VectorNormalizeObservation
TimeAwareObservation    | TimeAwareObservation  | -                | VectorTimeAwareObservation
FrameStack              | FrameStackObservation | -                | VectorFrameStack
-                       | DelayObservation      | -                | VectorDelayObservation
AtariProcessing         | AtariPreprocessing    | -                | -

Lambda Action Wrappers - wrappers.lambda_action

Old name      | New name      | func tree structure | vector version
------------- | ------------- | ------------------- | ------------------
-             | LambdaAction  | -                   | VectorLambdaAction
ClipAction    | ClipAction    | y                   | vectorise
RescaleAction | RescaleAction | y                   | vectorise
-             | NanAction     | y                   | vectorise
-             | StickyAction  | -                   | VectorStickAction

Lambda Reward Wrappers - wrappers.lambda_reward

Old name        | New name        | Vector version
--------------- | --------------- | ---------------------
TransformReward | LambdaReward    | VectorLambdaReward
ClipReward      | ClipReward      | vectorise
-               | RescaleReward   | vectorise
NormalizeReward | NormalizeReward | VectorNormalizeReward

Common Wrappers - wrappers.common

Old name                | New name                | Vector version
----------------------- | ----------------------- | ------------------------------
AutoResetWrapper        | AutoReset               | -
PassiveEnvChecker       | PassiveEnvChecker       | -
OrderEnforcing          | OrderEnforcing          | vectorise
EnvCompatibility        | remove for shimmy       | -
RecordEpisodeStatistics | RecordEpisodeStatistics | VectorRecordEpisodeStatistics
RecordVideo             | RecordVideo             | VectorRecordVideo
RenderCollection        | RenderCollection        | VectorRenderCollection
HumanRendering          | HumanRendering          | -
-                       | JaxToNumpy              | -
-                       | JaxToTorch              | -

Vector Only Wrappers - vector.wrappers.common

Old name       | New name
-------------- | --------------
VectorListInfo | VectorListInfo

Alternatives

No response

Additional context

No response

Checklist

  • I have checked that there is no similar issue in the repo

[Question] Flatten Discrete box potentially problematic

Question

Flattening a Discrete space to a Box space may be problematic. The flatten wrapper converts Discrete to Box as a one-hot encoding. Suppose the original space is Discrete(3); then:

0 maps to [1, 0, 0]
1 maps to [0, 1, 0]
2 maps to [0, 0, 1]

When we sample the action space for random actions, it samples the Box, which can produce any of the eight combinations of 0s and 1s in a three-element array, namely:

[0, 0, 0],
[0, 0, 1], *
[0, 1, 0], *
[0, 1, 1],
[1, 0, 0], *
[1, 0, 1],
[1, 1, 0],
[1, 1, 1]

Only the three of these eight that I've starred are usable in the strict sense of the mapping. The unflatten function for a Discrete space uses np.nonzero(x)[0][0], and here's a table of what the above arrays map to:

+ ------------------ + ---------------- + --------------------------------------------- +
| In Flattened Space | np.nonzero(x)[0] | np.nonzero(x)[0][0] (aka discrete equivalent) |
+ ------------------ + ---------------- + --------------------------------------------- +
| 0, 0, 0            | Error            | Error                                         |
| 0, 0, 1            | [2]              | 2                                             |
| 0, 1, 0            | [1]              | 1                                             |
| 0, 1, 1            | [1, 2]           | 1                                             |
| 1, 0, 0            | [0]              | 0                                             |
| 1, 0, 1            | [0, 2]           | 0                                             |
| 1, 1, 0            | [0, 1]           | 0                                             |
| 1, 1, 1            | [0, 1, 2]        | 0                                             |
+ ------------------ + ---------------- + --------------------------------------------- +

Implications

Obviously, [0, 0, 0] will fail because there is no nonzero entry.
Importantly, only one eighth of the random samples will map to 2. One fourth will map to 1, and one half will map to 0. This has some important implications for exploration, especially if action 2 is the "correct" action throughout much of the simulation. I'm very curious why I have not seen this come up before. This type of skew in the random sampling can have major implications for the way the algorithm explores and learns, and the problem is exacerbated when Discrete(n) has large n. Am I missing something here?

Solution

This is unique to Discrete spaces. Instead of mapping to a one-hot encoding, we could just map to a Box with a single element and the appropriate range: Discrete(n) maps to Box(0, n-1, (1,), int) instead of Box(0, 1, (n,), int).
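
A minimal sketch of the proposed alternative (standalone, not the actual utils implementation):

import numpy as np
from gymnasium.spaces import Discrete


def flatten_discrete_as_scalar(space: Discrete, x: int) -> np.ndarray:
    # Discrete(n) -> Box(0, n - 1, (1,), int): keep the value, not a one-hot.
    return np.array([x], dtype=space.dtype)


def unflatten_discrete_as_scalar(space: Discrete, sample: np.ndarray) -> int:
    return int(sample[0])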

[Proposal] Remove autoreset logic from VectorEnv and rely on AutoResetWrapper instead

Proposal

At the moment, both SyncVectorEnv and AsyncVectorEnv have the autoreset logic hardcoded in their step function, and it can't be disabled. This means I cannot leverage VectorEnv to perform parallel evaluations: I don't want the environments to autoreset, since I want to roll out a single episode for each sub-environment.

In TextWorld, I opted to simply carry over the last state until all sub-envs terminate. The same can be done with a simple wrapper (e.g., IgnoreDoneEnv below) if the autoreset logic is moved outside *VectorEnv and the appropriate AutoResetWrapper is used instead.

import gymnasium as gym


class IgnoreDoneEnv(gym.Wrapper):

    def reset(self, **kwargs):
        observation, info = self.env.reset(**kwargs)
        self._last_state = None
        self._is_done = False
        return observation, info

    def step(self, action):
        if self._is_done:
            return self._last_state

        observation, reward, terminated, truncated, info = self.env.step(action)
        self._is_done = terminated or truncated

        self._last_state = (observation, reward, terminated, truncated, info)
        return observation, reward, terminated, truncated, info
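
With the wrapper above, a parallel evaluation could then look like this sketch (assuming autoreset has been moved out of the vector envs):

envs = gym.vector.SyncVectorEnv(
    [lambda: IgnoreDoneEnv(gym.make("CartPole-v1")) for _ in range(4)]
)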

Motivation

No response

Pitch

No response

Alternatives

No response

Additional context

No response

Checklist

  • I have checked that there is no similar issue in the repo

[Bug Report] If render mode is not passed before `env.render`, the code derps out instead of being elegantly handled

See openai/gym#3108

When trying to render the Taxi environment without having passed a render_mode, there's a rather weird error message that "This should never happen". It, indeed, should never happen, and we should catch this issue earlier with a descriptive error message.

I'm not sure how general the issue is, and how generally we should tackle it - but some general safeguard of "You can't call env.render() without setting render_mode on make" might be appropriate

I'll take a look at it in more detail in a bit
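
One possible shape for the safeguard, as a sketch rather than the agreed design:

import gymnasium as gym


class SafeRenderEnv(gym.Wrapper):
    """Illustrative guard: fail early and descriptively when render_mode is unset."""

    def render(self):
        if self.env.render_mode is None:
            raise gym.error.Error(
                "render() was called, but no render_mode was set; "
                "pass e.g. render_mode='human' to gym.make()."
            )
        return self.env.render()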

[Proposal] Add strict type hinting for core.py and spaces

Gymnasium inherits some of the type hinting that we completed for gym.
However, having strict type hinting in the critical parts of the project would be an important improvement.

Known issues:

  • Pyright raises an issue for overwriting a variable with a property. This is an issue for variables like metadata and spec, which Wrapper overwrites with a property. We could ignore this issue or change metadata and spec to be properties in Env with hidden _metadata and _spec attributes.

[Proposal] Use GitHub Issue Forms

Proposal

I propose to use GitHub Issue Forms when an issue is created in this repo

Motivation

This facilitates the proper filling in of information when an issue is created. Information can be marked as required.

Alternatives

Stick with the current solution

Additional context

Creating an issue could look like this (screenshot taken from the stable-baselines3 repo):

[screenshot of a GitHub issue form]

Checklist

  • I have checked that there is no similar issue in the repo (required)

[Bug Report] The last 84 dimensions of Humanoid-v4 are always zero.

Describe the bug

The documentation of Humanoid-v4 says it no longer has the contact force issue. But I still find that the last 84 dimensions of states in Humanoid-v4 are always zero.
[screenshot of the observation values]

Code example

import mujoco
import gymnasium
env = gymnasium.make("Humanoid-v4")
s, _ = env.reset()
for i in range(3):
    action = env.action_space.sample()
    next_state, reward, done, truncated, _ = env.step(action)
print(next_state[-84:])

System info

gymnasium_ver: 0.26.3
mujoco_ver 2.2.0
Ubuntu

Additional context

No response

Checklist

  • I have checked that there is no similar issue in the repo

[Question] Best way to use Gymnasium with another programming language.

Question

I want to test an algorithm written in a language other than Python. What is currently the best way to interact with Gymnasium from another language?
I know that back in the day, gym had a REST API; I wonder if you have something similar, or a foreign function interface.

[Bug Report] AtariPreprocessing Wrapper is unavailable

Describe the bug

The AtariPreprocessing wrapper does not seem to be usable here. Probably because of the missing interfaces, no good solution has been thought of yet.

Code example

import gymnasium as gym

env = gym.make("GymV26Environment-v0", env_id="ALE/Breakout-v5")
env = gym.wrappers.AtariPreprocessing(env, frame_skip=1)

Output:

❯ python a.py
A.L.E: Arcade Learning Environment (version 0.8.0+919230b)
[Powered by Stella]
Traceback (most recent call last):
  File "/Users/zhaoyanxiao/Dev/abcdrl/a.py", line 4, in <module>
    env = gym.wrappers.AtariPreprocessing(env, frame_skip=1)
  File "/Users/zhaoyanxiao/opt/anaconda3/envs/abcdrl/lib/python3.9/site-packages/gymnasium/wrappers/atari_preprocessing.py", line 80, in __init__
    assert env.unwrapped.get_action_meanings()[0] == "NOOP"
AttributeError: 'GymEnvironment' object has no attribute 'get_action_meanings'

System info

pip install
gymnasium 0.26.3
python 3.9.13
mypy 0.991
macOS 13.0.1

Additional context

No response

Checklist

  • I have checked that there is no similar issue in the repo

[Proposal] Add wrapper checker

In v24 / v25, we added an environment checker for users to check that an environment follows the API correctly.
As some environments implement their own wrappers, and to check our own, it would be helpful to add a wrapper checker.

Proposed checks

  • Check that all observation and action spaces are valid (all observations are contained in the space), as sketched below
  • Check the input and output types for reset and step
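
A minimal sketch of the first check (the function name is illustrative):

import gymnasium as gym


def check_wrapper_spaces(env: gym.Env, num_steps: int = 10) -> None:
    """Illustrative check: observations stay inside the declared space."""
    obs, _ = env.reset(seed=0)
    assert env.observation_space.contains(obs), f"reset() obs not in space: {obs}"
    for _ in range(num_steps):
        action = env.action_space.sample()
        obs, reward, terminated, truncated, _ = env.step(action)
        assert env.observation_space.contains(obs), f"step() obs not in space: {obs}"
        if terminated or truncated:
            obs, _ = env.reset()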

[Bug Report] Cannot make an environment in env.registry

Describe the bug

Hello,

When trying to make an environment in gym.registry, I get a NameNotFound error, even though the environment should be found, as I am picking the name from gym.registry.

Python 3.10.6 (main, Nov  2 2022, 18:53:38) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import gymnasium as gym
>>> gym.make("YarsRevengeNoFrameskip-v4")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/marc/.venvs/bonsai-gym/lib/python3.10/site-packages/gymnasium/envs/registration.py", line 569, in make
    _check_version_exists(ns, name, version)
  File "/home/marc/.venvs/bonsai-gym/lib/python3.10/site-packages/gymnasium/envs/registration.py", line 219, in _check_version_exists
    _check_name_exists(ns, name)
  File "/home/marc/.venvs/bonsai-gym/lib/python3.10/site-packages/gymnasium/envs/registration.py", line 197, in _check_name_exists
    raise error.NameNotFound(
gymnasium.error.NameNotFound: Environment YarsRevengeNoFrameskip doesn't exist.

As you can see:

>>> from gym import envs
>>> envs.registry['YarsRevengeNoFrameskip-v4']
EnvSpec(id='YarsRevengeNoFrameskip-v4', entry_point='ale_py.env.gym:AtariEnv', reward_threshold=None, nondeterministic=False, max_episode_steps=None, order_enforce=True, autoreset=False, disable_env_checker=False, apply_api_compatibility=False, kwargs={'game': 'yars_revenge', 'obs_type': 'rgb', 'repeat_action_probability': 0.0, 'full_action_space': False, 'max_num_frames_per_episode': 108000, 'frameskip': 1}, namespace=None, name='YarsRevengeNoFrameskip', version=4)
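
Note that the snippet above queries gym's registry while the failing make call is gymnasium's; the two libraries keep separate registries, so it is worth checking the gymnasium one directly:

import gymnasium

# The id must be present in gymnasium's own registry for gymnasium.make to find it.
print("YarsRevengeNoFrameskip-v4" in gymnasium.envs.registry)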

Code example

No response

System info

No response

Additional context

No response

Checklist

  • I have checked that there is no similar issue in the repo

[Proposal] Documentation Frozen Lake

Proposal

Make the is_slippery parameter info more visible in the documentation

Motivation

When running Frozen Lake for the first time, is_slippery is a default parameter set to True.
This is not explained in the documentation, only in the GitHub repository.
This can cause some difficulties when learning.
Also, the info about this boolean is not easily readable on the documentation page.

Pitch

Use a bigger font or a title to separate this paragraph from the other ones.
State in the description that this parameter is set to True by default.

Alternatives

No response

Additional context

No response

Checklist

  • I have checked that there is no similar issue in the repo

Vector API - design considerations

I started trying to implement a new, more sane vector API. And then I quickly realized that it is, indeed, as messy as I could have expected, so the code will have to wait for some time.

Here I want to dump my thoughts about how this whole thing should/could look so that we have a discussion going.

Main desired outcome: we can use the vector API to easily create envs vectorized through either simple vectorization, or jax vmapping (or any other fast mechanism). This can give us huge performance improvements for some envs without relying on additional external libraries. For other envs, we default to Sync/Async/EnvPool?

Current situation: vectorization is only possible via Sync/Async, which is slow af, but very general. EnvPool (not officially supported) only works with some envs, but is faster. Other existing options are generally similar to Sync/Async, with their own quirks (e.g. ray in rllib, or the custom implementation in SB3)

The main complication is wrappers. If an environment provides its own optimized vectorized version, then we can't apply single-env wrappers to it. A nice solution would be an automatic conversion from a Wrapper to a VectorWrapper, but that seems either very tricky or impossible to do in a general case. Fortunately, many actual wrappers don't need that "general case" treatment.

The hope I see for this is switching to lambda wrappers, at least for some of the existing wrappers. ActionWrappers, ObservationWrappers and RewardWrappers can in principle be stateful, which requires some workarounds to map them over vectorized envs. With lambda wrappers, we can literally just do a map.

An element that I think will be crucial is different levels of optimization - existing third-party environments and wrappers should work exactly the same way, with the clunky subprocess vecenv approach, unless they do a few extra things to opt-in for the improvements.

Another rough edge might be autoreset. Currently this concept is barely present in gym, it's an optional wrapper for single envs, and in that scope it works fine. In a vectorized case, it's more important and a bit more complicated. If we don't have some sort of autoreset by default in vector envs, that makes them borderline useless for many envs (consider cartpole where the first env instance happens to take 10 steps, and the second takes 100 steps - if we only reset after both are terminated, we just lost 45% of the data)

While a vectorized autoreset is trivial with a subprocess-like vector env, that's not the case with e.g. numpy/jax acceleration. While I can see some hacks that maybe would kinda work to add it in some of these cases via wrapper, we might just have to add a requirement that the environment handles autoreset itself. Note that this wouldn't be a breaking change in env design - envs that don't have built-in autoreset can still use the standard vectorization. But if you want to use vectorized wrappers and the more efficient vectorization paradigm, you need to add it.
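
For reference, the subprocess-style autoreset discussed above amounts to roughly this per-sub-env logic (a sketch, ignoring details such as where the final observation is reported):

import gymnasium as gym


def step_with_autoreset(env: gym.Env, action):
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        # Start a new episode immediately so the sub-env keeps producing data.
        obs, _ = env.reset()
    return obs, reward, terminated, truncated, info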

Finally, a question is - how much can we break? I'm not aware of any significant usage of gym.vector, though I know it is used at least sometimes. Ideally I'd like to keep the outside API as similar as possible, perhaps even exactly the same (with additional capabilities). But can we change some of the internal semantics that are in principle exposed to the public, but are also just one of the few remaining relics of the past? As I recall, we want to do the vector revamp before 1.0, which is good, because after 1.0 we have to be very careful about breaking stuff.


Below I'm including a braindump of my semi-structured thoughts on this, just to have it recorded here with some additional details (most of this was mentioned above):

  1. Each environment can implement its own VectorEnv, or use built-in Sync/Async
    1. If implements its own, we can’t use individual wrappers - there’s no instance of gym.Env we can actually apply them to
    2. If uses built-in, then the VectorEnv contains several instances of gym.Env, to which individual wrappers are applied
  2. Each (?) wrapper should have single and vector mode - need to convert single to multi
    1. Should be trivial for:
      1. Observation wrapper - map observation
      2. Reward wrapper - map reward
      3. Action wrapper - map action
      4. EDIT - it's actually not trivial, needs lambda wrappers
    2. Need to have selectable/automatic optimization
      1. Jax envs/wrappers → vmap
      2. Pure numpy → nothing? or np vfunc?
      3. Generic → np.array(map) or np.array([... for o in obs])
      4. Settable in the wrapper? self.optimization: Literal["numpy"] | Literal["jax"] | None
    3. Some wrappers can’t be vectorized
      1. Atari preprocessing - needs to reset envs asynchronously
      2. Autoreset in general?
        1. We can require optimized envs to autoreset internally. Third-party envs will default to the regular vectorization, and they can opt-in for this

Issues in the meantime:

  1. OrderEnforcing (and others?) accept arguments in render
  2. Atari wrapper
  3. Several typing errors in vector API

Questions:

  1. Can we break the whole vector API? Does anyone use it?
    1. SB3 and rllib def have their own
  2. (check myself) do we want vector API before 1.0?

[Proposal] Clean up the `gym.make` function

Proposal

Clean up registration.py HumanRendering/RenderCollection automatic wrapping

if mode is not None and hasattr(env_creator, "metadata"):
    assert isinstance(
        env_creator.metadata, dict
    ), f"Expect the environment creator ({env_creator}) metadata to be dict, actual type: {type(env_creator.metadata)}"

    if "render_modes" in env_creator.metadata:
        render_modes = env_creator.metadata["render_modes"]
        if not isinstance(render_modes, Sequence):
            logger.warn(
                f"Expects the environment metadata render_modes to be a Sequence (tuple or list), actual type: {type(render_modes)}"
            )

        # Apply the `HumanRendering` wrapper, if the mode=="human" but "human" not in render_modes
        if (
            mode == "human"
            and "human" not in render_modes
            and ("rgb_array" in render_modes or "rgb_array_list" in render_modes)
        ):
            logger.warn(
                "You are trying to use 'human' rendering for an environment that doesn't natively support it. "
                "The HumanRendering wrapper is being applied to your environment."
            )
            apply_human_rendering = True
            if "rgb_array" in render_modes:
                _kwargs["render_mode"] = "rgb_array"
            else:
                _kwargs["render_mode"] = "rgb_array_list"
        elif (
            mode not in render_modes
            and mode.endswith("_list")
            and mode[: -len("_list")] in render_modes
        ):
            _kwargs["render_mode"] = mode[: -len("_list")]
            apply_render_collection = True
        elif mode not in render_modes:
            logger.warn(
                f"The environment is being initialised with mode ({mode}) that is not in the possible render_modes ({render_modes})."
            )
    else:
        logger.warn(
            f"The environment creator metadata doesn't include `render_modes`, contains: {list(env_creator.metadata.keys())}"
        )

This chunk has 5 levels of if-else indentation with pretty opaque logic, and it seems to conflate two wrappers together. I don't have the bandwidth to do it right now, but it desperately needs a cleanup.

Motivation

No response

Pitch

No response

Alternatives

No response

Additional context

No response

Checklist

  • I have checked that there is no similar issue in the repo


[Bug] Vector utils space functions updated for new spaces

gymnasium.vector.utils contains a number of speciality functions for vectorising spaces.
However, because they are not located in spaces, we have overlooked them when adding the recent spaces.

Therefore, we need to fix these functions and/or move them to gymnasium.spaces.utils so that they are updated as necessary rather than being forgotten. It should be noted that because of the new #32 vector API, these functions might be removed, changed, or added to in the near future.

error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/

Question

Please help me look at this problem; it has been bothering me for several days. When I run pip install gym_super_mario_bros==7.3.0 nes_py, I get this:

error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/

I've tried all the solutions on Stack Overflow. I have installed the Build Tools and Visual Studio. I even tried rebooting my computer and upgrading pip, but it doesn't work.
