
farama-foundation / pettingzoo

An API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities

Home Page: https://pettingzoo.farama.org

License: Other

Python 99.89% Makefile 0.11%
api gym gymnasium multi-agent-reinforcement-learning reinforcement-learning multiagent-reinforcement-learning

pettingzoo's Introduction


PettingZoo is a Python library for conducting research in multi-agent reinforcement learning, akin to a multi-agent version of Gymnasium.

The documentation website is at pettingzoo.farama.org, and we have a public Discord server (which we also use to coordinate development work) that you can join here: https://discord.gg/nhvKkYa6qX

Environments

PettingZoo includes the following families of environments: Atari, Butterfly, Classic, MPE, and SISL.

Installation

To install the base PettingZoo library: pip install pettingzoo.

This does not include dependencies for all families of environments (some environments can be problematic to install on certain systems).

To install the dependencies for one family, use pip install 'pettingzoo[atari]', or use pip install 'pettingzoo[all]' to install all dependencies.

We support Python 3.8, 3.9, 3.10 and 3.11 on Linux and macOS. We will accept PRs related to Windows, but do not officially support it.

Note: Some Linux distributions may require manual installation of cmake, swig, or zlib1g-dev (e.g., sudo apt install cmake swig zlib1g-dev)

Getting started

For an introduction to PettingZoo, see Basic Usage. To create a new environment, see our Environment Creation Tutorial and Custom Environment Examples. For examples of training RL models using PettingZoo, see our tutorials.

API

PettingZoo models environments as Agent Environment Cycle (AEC) games in order to cleanly support all types of multi-agent RL environments under one API and to minimize the potential for certain classes of common bugs.

Using environments in PettingZoo is very similar to Gymnasium, i.e. you initialize an environment via:

from pettingzoo.butterfly import pistonball_v6
env = pistonball_v6.env()

Environments can be interacted with in a manner very similar to Gymnasium:

env.reset()
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    action = None if termination or truncation else env.action_space(agent).sample()  # this is where you would insert your policy
    env.step(action)

For the complete API documentation, please see https://pettingzoo.farama.org/api/aec/

Parallel API

In certain environments, it is valid to assume that agents take their actions at the same time. For these games, we offer a secondary API to allow for parallel actions, documented at https://pettingzoo.farama.org/api/parallel/
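
A minimal usage sketch, assuming a recent PettingZoo release where the parallel reset() returns both observations and infos:

from pettingzoo.butterfly import pistonball_v6

parallel_env = pistonball_v6.parallel_env()
observations, infos = parallel_env.reset(seed=42)

while parallel_env.agents:
    # this is where you would insert your policy
    actions = {agent: parallel_env.action_space(agent).sample() for agent in parallel_env.agents}
    observations, rewards, terminations, truncations, infos = parallel_env.step(actions)
parallel_env.close()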

SuperSuit

SuperSuit is a library that includes all commonly used wrappers in RL (frame stacking, observation normalization, etc.) for PettingZoo and Gymnasium environments, with a nice API. We developed it in lieu of building wrappers into PettingZoo. https://github.com/Farama-Foundation/SuperSuit
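
As a rough illustration (wrapper names and version suffixes may differ between SuperSuit releases), wrapping a PettingZoo environment looks like this:

import supersuit as ss
from pettingzoo.butterfly import pistonball_v6

env = pistonball_v6.parallel_env()
# stack the last 4 observations for each agent (frame stacking)
env = ss.frame_stack_v1(env, 4)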

Environment Versioning

PettingZoo keeps strict versioning for reproducibility reasons. All environments end in a suffix like "_v0". When changes are made to environments that might impact learning results, the number is increased by one to prevent potential confusion.

Project Maintainers

Project Manager: Elliot Tower

Maintenance for this project is also contributed by the broader Farama team: farama.org/team.

Citation

To cite this project in a publication, please use:

@article{terry2021pettingzoo,
  title={Pettingzoo: Gym for multi-agent reinforcement learning},
  author={Terry, J and Black, Benjamin and Grammel, Nathaniel and Jayakumar, Mario and Hari, Ananth and Sullivan, Ryan and Santos, Luis S and Dieffendahl, Clemens and Horsch, Caroline and Perez-Vicente, Rodrigo and others},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  pages={15032--15043},
  year={2021}
}

pettingzoo's People

Contributors

aditya-pola, ananthhari, benblack769, bolundai0216, chorsch, dm-ackerman, dsctt, eczy, elliottower, erikl97, jjshoots, jkterry1, kir0ul, kyle-sang, lssr, mariojayakumar, mgoulao, niallw, paarasbhandari, praveenravi77, qiyaowei, redtachyon, rodrigodelazcano, rohan138, rushivarora, ryan-amaral, ryannavillus, tianchenliu, trigaten, willdudley


pettingzoo's Issues

Documentation link

Hello,
Would it be possible to add a link to the docs (maybe with readthedocs)? :)
Thanks!

Backgammon, dou dizhu, maybe uno? not deterministic between runs

I wrote a simple regression test for the latest API changes and discovered that Backgammon, dou dizhu, and maybe uno are not deterministic between runs. However, they do pass the seed test, so they do not depend on the random or np.random state. Instead, they likely depend on the hash function's random state.

Basically, Python's hash function is initialized at startup with a random value that it mixes into the result for the duration of the process. This is to prevent adversarial attacks that would result in large numbers of hash collisions. However, it also means that iteration order over a hash table is non-deterministic, so any program that relies on that iteration order cannot be perfectly reproducible, which is what we want.
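
To see the mechanism concretely: string hashes, and therefore the iteration order of sets of strings, differ between interpreter runs unless PYTHONHASHSEED is fixed. A minimal illustration (run it twice and compare):

import os

# With hash randomization enabled (the default), the hash of a string, and hence the
# iteration order of a set of strings, changes between processes. Setting the
# PYTHONHASHSEED environment variable before startup makes it reproducible.
print("PYTHONHASHSEED =", os.environ.get("PYTHONHASHSEED"))
print(hash("backgammon"))
print(list({"backgammon", "dou_dizhu", "uno"}))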

To run the tests with which I discovered this issue, you can download the script https://gist.github.com/weepingwillowben/1ca446b054a6ff4245da32c5f09d1666 and run

python pettingzoo_determinism_test.py write
python pettingzoo_determinism_test.py read
diff out_hashes.json new_out_hashes.json

And you should get some output that looks like this:

31c31
<   "classic/backgammon": "46ca3c918d95483ee7a165f745b23b54",
---
>   "classic/backgammon": "596c97cd83567f3375da9c9976f88f78",
35c35
<   "classic/dou_dizhu": "18eaa9b7ef04a2890077276286574500",
---
>   "classic/dou_dizhu": "1db1fdb74dea200ed0f2676b27e03607",

Error with Tiger-Deer Env: Selecting Invalid Agent.

Hi,

I've been running the MAgent Tiger-Deer environment with 2 different algorithms: a RandomLearner and rllib's PPO. I'm also currently using rllib's PettingZooEnv. It seems both of the algorithms work for some number of iterations, but then error out at this line https://github.com/ray-project/ray/blob/master/rllib/env/pettingzoo_env.py#L161.

The issue is that the agent being selected, deer_92, is not in the action_dict. I checked the self.aec_env.dones dict, however, and the agent is there. I added a snippet of the output below. I printed out the relevant info (shown after each == start step ==) when entering the step() function. Furthermore, it also appears that all steps prior to this error only select deer_0 as the agent. I've re-run the experiment several times and it always has the same result (e.g., deer_0 is always chosen and then it errors out once any other agent is chosen).

I'm not sure if this is an issue with rllib, the Tiger-Deer env, or my own config.

(pid=34152) =============== start step =====================
(pid=34152) self.aec_env.agent_selection --> deer_0
(pid=34152) stepped_agents --> set()
(pid=34152) list(action_dict) --> ['deer_0', 'deer_1', ..., 'deer_100', 'tiger_0', ..., 'tiger_19']   (all 101 deer and all 20 tigers present)
(pid=34152) agent in action_dict --> True
(pid=34152) agent in self.aec_env.dones --> False
(pid=34152) =============== start step =====================
(pid=34152) self.aec_env.agent_selection --> deer_0
(pid=34152) stepped_agents --> set()
(pid=34152) list(action_dict) --> ['deer_0', 'deer_1', ..., 'deer_100', 'tiger_0', ..., 'tiger_19']   (all 101 deer and all 20 tigers present)
(pid=34152) agent in action_dict --> True
(pid=34152) agent in self.aec_env.dones --> False
(pid=34152) =============== start step =====================
(pid=34152) self.aec_env.agent_selection --> deer_92
(pid=34152) stepped_agents --> set()
(pid=34152) list(action_dict) --> ['deer_0', 'deer_1', ..., 'deer_91', 'deer_93', ..., 'deer_100', 'tiger_0', ..., 'tiger_19']   (deer_92 absent)
(pid=34152) agent in action_dict --> False
(pid=34152) agent in self.aec_env.dones --> True
== Status ==
Memory usage on this node: 24.8/377.6 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/80 CPUs, 0/2 GPUs, 0.0/252.88 GiB heap, 0.0/77.54 GiB objects (0/1.0 GPUType:V100)
Result logdir: /home/ray_results/Campaign_Tiger-Deer-v1
Number of trials: 1 (1 ERROR)
+------------------------------------------+----------+-------+--------+------------------+------+----------+
| Trial name                               | status   | loc   |   iter |   total time (s) |   ts |   reward |
|------------------------------------------+----------+-------+--------+------------------+------+----------|
| PS_PPO_Trainer_Tiger-Deer-v1_41b65_00000 | ERROR    |       |      3 |          3576.35 | 3672 | 0.166667 |
+------------------------------------------+----------+-------+--------+------------------+------+----------+
Number of errored trials: 1

If I use the PettingZooEnv version in ray==0.8.7, the error is https://github.com/ray-project/ray/blob/releases/0.8.7/rllib/env/pettingzoo_env.py#L165.

Lastly, I also applied the following SuperSuit wrappers: pad_observations_v0, pad_action_space_v0, agent_indicator_v0, and flatten_v0, and I'm running PettingZoo==1.3.3 and SuperSuit==2.1.0.

Thanks.

Issues with Hanabi Environment

Hi,

I'm currently running experiments training agents using rllib's PPO with Hanabi. It appears as though a large number of the actions chosen by the agents end up being illegal moves, which causes the episode to end with an end reward of 0. I've added a snippet of the output I encounter when running my experiment, below, where the agent (player_0) chooses an action (9) which is not in its legal_moves. I'm not entirely sure if this is by design (e.g., have an agent learn what moves are legal over time) or by error (e.g., an agent should only be allowed to choose from the allowed actions), but it increases the amount of training needed. For instance, my last experiment ran for ~1.3M timesteps and ~15% (~200k) of those were illegal moves.

[WARNING]: Illegal move made, game terminating with current player losing.
env.infos[player]['legal_moves'] contains a list of all legal moves that can be chosen.
{'player_0': {'legal_moves': [2, 3, 7, 8, 13], 'observations_vectorized': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}, 
'player_1': {'legal_moves': [], 'observations_vectorized': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}}

action_dict: {'player_0': 9, 'player_1': 0}
current agent and action: player_0, 9
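
For reference, restricting a random policy to the legal moves reported in env.infos avoids these terminations; a minimal sketch against the legacy API shown above, assuming env is the hanabi_v0 AEC environment:

import random

agent = env.agent_selection
legal_moves = env.infos[agent]['legal_moves']
# only sample from the moves the environment reports as legal for the current agent
action = random.choice(legal_moves)
env.step(action)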

Moreover, the PettingZoo docs for Hanabi (https://www.pettingzoo.ml/classic/hanabi) list the "Average Total Reward" as 0.0, but a standard game has rewards up to 25. Could you clarify why the Average Total Reward is listed as 0.0 and not higher? For what it's worth, in my aforementioned experiment, I obtained rewards between [-4, 5], with a combined average (between the 2 agents/players) around -0.2, so I'm not entirely sure 0.0 is wrong.

I'm in the process of running a larger experiment (~20B timesteps) to see if performance improves. I'm currently using PPO's default config from rllib. I could always try changing the hyperparameters, but I wanted to make sure the environment itself is acting correctly. Alas, it may just be that PPO will not perform well on this environment, but I don't want to characterize it as such without some further testing.

I'm running ray==0.8.7 and pettingzoo==1.2.
Edit: I forgot to mention I'm using ray's PettingZooEnv wrapper to interface between the agents and the Hanabi env.

This is a fantastic library, btw!

Simultaneous actions

I'm not sure if I'm missing something, but multiple agents seem to be handled via an agent iterator: during each step you iterate over all agents and choose their actions one at a time.

Is it possible to instead pass the actions for all agents in a given timestep at the same time? I feel like this could be much better for homogeneous agent cases, where it'd be beneficial to just compute the actions for all agents at once.

I'm thinking something like they did in rllib, where you pass the actions as a dictionary {"Agent0": 1, "Agent1": 3, ...}

Visual studio problem

Hello, I have Anaconda and VS2019 installed on my Windows machine. Unfortunately I cannot install hanabi-learning-environment; it fails with the following:

Building wheel for hanabi-learning-environment (PEP 517) ... |
..
..

Trying "Visual Studio 15 2017 Win64 v141" generator - failure
...
..
ERROR: Failed building wheel for hanabi-learning-environment
Failed to build hanabi-learning-environment
ERROR: Could not build wheels for hanabi-learning-environment which use PEP 517 and cannot be installed directly

Why is it doing this?

Prospector bug

Getting this error message in prospector very, very rarely.

From cffi callback <function CollisionHandler._set_begin.<locals>.cf at 0x7fb3ddf3f598>:
Traceback (most recent call last):
  File "/home/ben/.virtualenvs/zoo/lib/python3.6/site-packages/pymunk/collision_handler.py", line 64, in cf
    x = func(Arbiter(_arb, self._space), self._space, self._data)
  File "/home/ben/class_projs/PettingZoo/pettingzoo/butterfly/prospector/prospector.py", line 459, in handoff_gold_handler
    # This collision handler is only for prospector -> banker gold handoffs
AttributeError: 'NoneType' object has no attribute 'parent_body'

It doesn't seem to cause a crash, and could probably be ignored, but it doesn't seem good just to leave it generating these errors.

MPE Render improvements

MPE environments show communication by printing it to the terminal, and they render a separate window for every agent. All of this is mildly confusing and undesirable, but fixing it (i.e. putting everything into a single window that nicely conveys all the information) is beyond our scope.

I found another issue, which may be part of the same problem, Rllib, or my understanding of the Tiger-Deer env:

The padded action space and observation space are Discrete(9) and Box(9, 9, 25), respectively. When I flatten the observation space, I get Box(2025,). When I run through Rllib, though, I receive an assertion error because the observation spaces are not equal. The dimensions that Rllib receives are (9,9,146) and (11826,) for the flattened case. The error is being thrown from rllib/models/preprocessors.py#L60 calling gym/spaces/box.py#L125. I added a truncated stack trace below for the flattened case, where it expects a numpy array of size (2025,) but received one with shape (11826,).

I had caught this error early on and simply hardcoded the adjusted observations in my Rllib policies, but a recent code refactor my team did brought this issue back up.

ray.exceptions.RayTaskError(ValueError): ray::RolloutWorker.par_iter_next() (pid=26611, ip=10.2.0.228)
  File "python/ray/_raylet.pyx", line 484, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 438, in ray._raylet.execute_task.function_executor
  File "/home/anaconda3/envs/py37/lib/python3.7/site-packages/ray/util/iter.py", line 1152, in par_iter_next
    return next(self.local_it)
  File "/home/anaconda3/envs/py37/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 288, in gen_rollouts
    yield self.sample()
  File "/home/anaconda3/envs/py37/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 579, in sample
    batches = [self.input_reader.next()]
  File "/home/anaconda3/envs/py37/lib/python3.7/site-packages/ray/rllib/evaluation/sampler.py", line 93, in next
    batches = [self.get_data()]
  File "/home/anaconda3/envs/py37/lib/python3.7/site-packages/ray/rllib/evaluation/sampler.py", line 209, in get_data
    item = next(self.rollout_provider)
  File "/home/anaconda3/envs/py37/lib/python3.7/site-packages/ray/rllib/evaluation/sampler.py", line 604, in _env_runner
    perf_stats=perf_stats,
  File "/home/anaconda3/envs/py37/lib/python3.7/site-packages/ray/rllib/evaluation/sampler.py", line 798, in _process_observations
    policy_id).transform(raw_obs)
  File "/home/anaconda3/envs/py37/lib/python3.7/site-packages/ray/rllib/models/preprocessors.py", line 165, in transform
    self.check_shape(observation)
  File "/home/anaconda3/envs/py37/lib/python3.7/site-packages/ray/rllib/models/preprocessors.py", line 63, in check_shape
    self._obs_space, observation)
ValueError: ('Observation outside expected value range', Box(2025,), array([0., 0., 0., ..., 0., 0., 0.], dtype=float32))

Originally posted by @jdpena in #209 (comment)

Manual control test crashing on newer versions of macOS

When running manual control tests using pynput, the games all crash with an Illegal instruction: 4 error, indicating something wasn't built properly for macOS's new 64-bit features. This isn't pynput's fault either, because it's a pure Python package.

This also only happens with newer Macs and macOS Catalina, which is the opposite of what is expected with illegal instruction 4 errors.

I have absolutely no idea what the problem here is.

Multiwalker observations violate Box bounds due to sampling a normal distribution

I believe I've come across a rare but inevitable bug in how multiwalker generates observations of neighboring walkers. Since these observations are drawn from normal distributions to introduce state uncertainty, there is a small chance a sample will fall on the extreme tail of the distribution and violate the observation space's Box bounds. With long enough training experiments, you will eventually encounter this rare occurrence.

A quick fix would simply be to clip the observations to the bounds.
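
A sketch of that quick fix, clipping observations to the Box bounds before they are returned (clip_to_space is a hypothetical helper, not existing PettingZoo code):

import numpy as np

def clip_to_space(obs, observation_space):
    # clamp noisy neighbor observations so they can never violate the Box bounds
    return np.clip(obs, observation_space.low, observation_space.high)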

No termination for illegal moves in classic

In the classic games, when an illegal move is taken, the player who takes the move is penalized and the game ends. This style is taken from games like chess and go; however, a more appropriate way to handle it for reinforcement learning algorithms may be to penalize without terminating the game.

This will make it easier to solve games because play continues even after illegal moves, creating more diverse observations and rewards.

It should also make it easier to reason about reward structure in games like Hanabi, where reward is allocated at many steps as the game progresses.

Is there a single player version of Atari?

Can I instantiate a single player version of Atari with PettingZoo?

Looks like

from pettingzoo.atari import pong_classic_v0
pong_classic_v0.env() 

it is still a multiplayer version.

Nope, I can't just do gym.make('Pong-v0'), because PettingZoo has 18 actions while the gym version only has 6.

SISL environments - ModuleNotFoundError: No module named 'Box2D'

I am trying to use the multiwalker_v1 environment but getting a ModuleNotFoundError. To reproduce:

pip install pettingzoo[all]

Then in a python script or interpreter

from pettingzoo.sisl import multiwalker_v1
.
.
.
~/miniconda3/envs/pettingzoo_rllib/lib/python3.8/site-packages/pettingzoo/sisl/multiwalker/multiwalker_base.py in <module>
      4 from gym import spaces
      5 from gym.utils import seeding
----> 6 import Box2D
      7 from Box2D.b2 import (circleShape, contactListener, edgeShape, fixtureDef, polygonShape,
      8                       revoluteJointDef)

ModuleNotFoundError: No module named 'Box2D'
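
A likely workaround, assuming the Box2D dependency simply was not pulled in by the extras, is to install the bindings directly:

pip install box2d-py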

Examples of RLlib QMIX trained in PettingZoo environments (e.g. prison)

In very much the same vein as this MADDPG issue, I was wondering if there are examples of using QMIX on PettingZoo environments. For QMIX, the key implementation detail seems to be the agent grouping; see also the TwoStepGame example. However, the current PettingZoo environment in RLlib explicitly raises a NotImplementedError for the function with_agent_groups:
https://github.com/ray-project/ray/blob/d80e08ce95e4e3943ab6bdcfa84455ed0377fa06/rllib/env/pettingzoo_env.py#L207

Is there an existing pull request somewhere to implement with_agent_groups in RLlib, or is there some other way people use QMIX with PettingZoo environments?

I apologize for spamming the PettingZoo issues, especially because this is more of an RLlib issue, but I thought it was more relevant for the widespread adoption of PettingZoo.

Chess bug

The following code prints out a completely invalid board state. Probably a bug in python-chess, but rather hard to track down because it is fairly rare.

from pettingzoo.classic import chess
import time
import random

env = chess.env()

orig_obs = env.reset()
game_len = 0
for x in range(100000):
    reward, done, info = env.last()

    game_len += 1

    if game_len > 1000:
        print("long game")
        print(env.board)
        exit(0)
    if done:
        orig_obs = env.reset()
        game_len = 0
        break
    action = random.choice(env.infos[env.agent_selection]['legal_moves'])

    next_obs = env.step(action)

Invalid Observation Values with MAgent Envs

The default config in battle and battlefield causes the observation values to be outside the [0,2] allowed range, leading to a ValueError. Note the -0.105 in the error below. The obs shapes are correct.

ValueError: ('Observation outside expected value range', Box(6929,), array([ 0.    ,  0.    ,  0.    , ..., -0.105 ,  0.3875,  0.45  ],

I set step_reward, dead_penalty, and attack_penalty to be 0 (or any value between [0,2]) and it fixed the issue.

Edit: it appears all MAgent envs suffer from this issue except tiger-deer.

Cannot understand how env.reset() works in MPE environments.

Hi,

env.reset() returns an array of Box(16,) in multiagent scenarios. In the original version of the environment env.reset() would give a list of arrays, one for each agent and according to the observation space for each agent. For example, in the simple_tag environment the adversaries have an observation_space Box(16,) while the agent has Box(14,).

I also tried to pass the agent names inside the env.reset() function but all the returns were of size Box(16,).

So the question is: How to reset all the agents in the environment before the start of the training?

Maybe this is very basic, but I was used to the original version! Thank you in advance!

Update: I have a similar problem with the env.step() function as well. Any feedback would be highly appreciated.
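
For reference, with the AEC API each agent's observation is fetched individually via observe() rather than returned as a list from reset(); a minimal sketch (the simple_tag version suffix may differ depending on your release):

from pettingzoo.mpe import simple_tag_v2

env = simple_tag_v2.env()
env.reset()
# collect one observation per agent, each matching that agent's own observation space
observations = {agent: env.observe(agent) for agent in env.agents}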

Adding contributing guidelines

It would be helpful if contributing guidelines were added, covering steps such as pre-commit checks, code style, etc.

Prospector error

I am getting this error in prospector:

    return self.env.observe(agent)
  File "/home/ben/.virtualenvs/main_env/lib/python3.6/site-packages/pettingzoo/butterfly/prospector/prospector.py", line 739, in observe
    sub_screen, pad_width=((pad_x, 0), (0, 0), (0, 0)), mode="constant"
  File "<__array_function__ internals>", line 6, in pad
  File "/home/ben/.virtualenvs/main_env/lib/python3.6/site-packages/numpy/lib/arraypad.py", line 746, in pad
    pad_width = _as_pairs(pad_width, array.ndim, as_index=True)
  File "/home/ben/.virtualenvs/main_env/lib/python3.6/site-packages/numpy/lib/arraypad.py", line 517, in _as_pairs
    raise ValueError("index can't contain negative values")
ValueError: index can't contain negative values

Seeding

Hello,
While trying to use multiwalker I realized that there is no seed method, even though the method is present in the base env. Is this a design choice?
I think adding this method to all the environments would be nice (and also to the RLlib wrapper).
Let me know what you think.

gamma/knights_archers_zombies performance improvements

KAZ runs slower than all the other gamma games and seemingly needs a large amount of refactoring to run at the same speed as them. The problem seems to be due either to the collision detection or to the tremendous number of different dictionaries and lists used to keep track of objects in the game.

Enabling MAgent map_size as a Parameter

I see that map_size is fixed for each MAgent environment, even though it can be passed into the underlying game environment. Could this be added as an input parameter to the magent env wrappers or does the remainder of the env setup (e.g., number of agents) depend on a fixed size?

Where are the baselines?

The PettingZoo paper (which is still under review) states the following in section 6:

All environments implemented in PettingZoo include baselines to provide a general sense of the difficulty of the environment, and for something to initially compare against.

Are these baselines already available? And if so, where can we find them?

Improved install instructions and __version__

A few bumps I've found while starting to use PettingZoo:

  1. Need improved install instructions in README and website. For example, pip install pettingzoo and/or conda install pettingzoo. Also, some description of install with "extras", e.g. do I need to run pip install pettingzoo or pip install pettingzoo[all]?

  2. There seems to be an inconsistency between requirements.txt and setup.py. When I try to work through the ray pettingzoo example I get the error
    ModuleNotFoundError: No module named 'pygame', which had me confused because pygame is clearly in requirements.txt. I then looked at setup.py and found the extras param. Is requirements.txt actually used? If not, can it be removed for clarity?

  3. I can't see a way to easily check the version of PettingZoo I have installed. My normal workflow of import pettingzoo; print(pettingzoo.__version__) gives error AttributeError: module 'pettingzoo' has no attribute '__version__'

Thanks for development on this library!
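
Regarding item 3 above, until a __version__ attribute is exposed, one workaround is to query the installed distribution metadata instead (a sketch using the standard library, Python 3.8+):

from importlib.metadata import version

# reads the version recorded by pip for the installed pettingzoo distribution
print(version("pettingzoo"))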

Communication in butterfly environment

Hi everyone,

Is there a way to add a communication channel between knights and archers in the KAZ environment? For example, adding another action to the action space of each agent and enabling communication between the two teams?

MPE Manual Control

The original MPE implementation had manual control as an option, but it was removed during our overhaul of the render method. Ideally this would be re-added.

[rllib] PettingZoo's No-limit Texas Holdem environment compatibility

Hello,

I am trying to use rllib's algorithms on PettingZoo's No-limit Texas Holdem environment. From what I understand from rllib's documentation, there is a PettingZooEnv class meant to make these environments compatible with rllib. But this class can only be used for agents that act simultaneously, and not for turn-based games like Poker or Chess, which fall under the classic category of PettingZoo's environments. Additionally, these games also have illegal moves that alter the action space at each step.

Therefore, is there any solution to this problem? Any workaround that would prove useful to this compatibility problem? Alternatively, is there any other library with the particular environment (No-limit Texas Holdem) that I could use instead with rllib?

The same issue was posted on ray's repo: https://github.com/ray-project/ray/issues/11072

Thank you in advance.

SISL Performance Optimizations

The SISL games run much slower than similar environments, without being dramatically more complex. There's nothing glaringly wrong, but we would appreciate PRs with more nuanced speed improvements. Our goal is to get similar performance (with utils.performance_benchmark) to the butterfly games.

Action space for discrete games (focused on chess)

For discrete games like chess, we want to support AlphaZero like learners somehow, but also more general learners should be able to learn reasonable policies.

Requirements:

  1. We want our action space to include all possible moves, legal or not, laid out in a convenient way for a neural network to generate (i.e. a 3d image)
  2. We want the action space to be discrete, so that the system is responsible for choosing its own way of sampling actions, and knows precisely which action it chose.
  3. We want to be able to deal with illegal moves in a reasonable way.

Note that desideratum number 1 conflicts with number 2, because a discrete action space does not have spatial properties; it is just flat.

Penalizing illegal moves

In chess games between humans, there are two possible ways to deal with illegal moves.

  • In one version, the board is set back to its position before the illegal move, and the player is penalized with a few minutes taken off the clock.
  • In another version, the game is over and the player who made the illegal move loses.

The problem is that the action space in chess has very rare moves, like "capture left with pawn and underpromote to knight". This move is legal in less than 1 out of 1000 turns, making it very difficult to ever explore if it is penalized 999/1000 times. Some moves may never be legal at all, just because of how the action space is constructed.

But this may be fine for our purposes.

Full AlphaZero compatibility

  • AlphaZero expects to be able to get the list of all legal moves, and be able to map this list back to the action space.

We currently have no way to generate the list of all legal moves. If we decide to penalize illegal moves, AlphaZero could be implemented with brute force search over the action space, but it may be very slow.
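
As a point of reference (this is plain python-chess, not current PettingZoo functionality), the underlying board can already enumerate legal moves; mapping them into the flat discrete action space is the missing piece:

import chess

board = chess.Board()
# python-chess exposes a generator of legal moves for the current position
legal_moves = list(board.legal_moves)
print(len(legal_moves))  # 20 legal moves from the starting position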

AssertionError when calling env.step() in prison_v0 environment with a continuous action space

Error:

Action: [1.8698319]
Action: [-0.04900217]
Action: [-2.125496]
Action: [0.01161803]
Action: [0.27126586]
Action: [0.41918978]
Action: [-1.5011265]
Action: [2.418947]
Traceback (most recent call last):
  File "test_pettingzoo.py", line 19, in <module>
    observation = env.step(action)
  File "/home/gauraang/.local/lib/python3.6/site-packages/pettingzoo/utils/wrappers.py", line 306, in step
    return super().step(action, observe)
  File "/home/gauraang/.local/lib/python3.6/site-packages/pettingzoo/utils/wrappers.py", line 95, in step
    super().step(action, False)
  File "/home/gauraang/.local/lib/python3.6/site-packages/pettingzoo/utils/wrappers.py", line 60, in step
    next_obs = self.env.step(action, observe=observe)
  File "/home/gauraang/.local/lib/python3.6/site-packages/pettingzoo/utils/wrappers.py", line 177, in step
    return super().step(action, observe)
  File "/home/gauraang/.local/lib/python3.6/site-packages/pettingzoo/utils/wrappers.py", line 60, in step
    next_obs = self.env.step(action, observe=observe)
  File "/home/gauraang/.local/lib/python3.6/site-packages/pettingzoo/utils/wrappers.py", line 252, in step
    return super().step(action, observe)
  File "/home/gauraang/.local/lib/python3.6/site-packages/pettingzoo/utils/wrappers.py", line 60, in step
    next_obs = self.env.step(action, observe=observe)
  File "/home/gauraang/.local/lib/python3.6/site-packages/pettingzoo/butterfly/prison/prison.py", line 334, in step
    self.draw()
  File "/home/gauraang/.local/lib/python3.6/site-packages/pettingzoo/butterfly/prison/prison.py", line 267, in draw
    self.screen.blit(self.prisoners[p].get_sprite(), self.prisoners[p].position)
  File "/home/gauraang/.local/lib/python3.6/site-packages/pettingzoo/butterfly/prison/prison.py", line 59, in get_sprite
    assert False, ("INVALID STATE", self.state)
AssertionError: ('INVALID STATE', array([1.], dtype=float32))

Code to reproduce the error:

import time
from pettingzoo.butterfly import prison_v0
env = prison_v0.env(seed=1, continuous=True)

observation = env.reset()
for agent in env.agent_iter():
    reward, agent_done, info = env.last()
    action = env.action_spaces[agent].sample()
    print('Action:', action)
    observation = env.step(action)

    env.render()

Agent Death Problems

So we’ve been contemplating how to sanely fix everything that’s caused the recent wave of issues regarding environments with agent death. The fundamental problem is how we’re handling agent iter. So we’re going to fix agent_iter, and add several more minor API improvements relevant to agent death that we probably should’ve done before.

The primary problem with agent_iter is that it turns out that having a single wrapper that takes care of it for all environments can’t be done in a reasonable way for all types of environments and ones with agent death. So we’re going to move it into individual environments and put frequently used things in helper functions. That’s the big change.

We’re also going to remove dead agents from observations/rewards/dones/infos/agents and put them behind reset, force the post-terminal steps to only accept None, and add another attribute listing all possible agents (since agents will be mutable). Those shouldn’t affect you guys if we do them right, and in retrospect should’ve been done to begin with.

All this should allow for the clean handling of any conceivable variable-agent scenario (not just death). This should also result in modest performance improvements. We’ll also update the RLlib wrapper and SuperSuit accordingly, and add new tests pertaining to these changes.

This should probably take us about a week to fully sort out. We obviously may introduce new bugs in the process, but this will solve the real underlying issues. Where a lot of this comes from is that we study purely cooperative emergent behavior, so things like optimally handling low-level reward manipulation were at the top of our minds during development, whereas we clearly didn't contemplate every aspect of agent death.

@rallen10 @jdpena

End-to-end working example?

Do you have an end-to-end working example with visualization?

After installing the library with pip, do you have working code that would train (even with random actions) a simple environment and then visualize it?
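
For reference, with a recent PettingZoo release a minimal end-to-end rollout with a random policy and on-screen rendering looks roughly like this (a sketch; render_mode support assumes a current version rather than the one this issue was filed against):

from pettingzoo.butterfly import pistonball_v6

env = pistonball_v6.env(render_mode="human")
env.reset(seed=42)
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    # random policy; substitute a trained model here
    action = None if termination or truncation else env.action_space(agent).sample()
    env.step(action)
env.close()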

Sprite upgrades to gamma/prison

The background sprites don't share the same uniform aesthetic with the creature sprites the way the other games' sprites do. We'd welcome a PR that gave prison a more unified and polished graphical appearance.

Better performance test

Hi! I've noticed that the current performance testing at pettingzoo/tests/performance_benchmark.py runs for only 5 seconds and is run only once. Wouldn't it be better if the test were run multiple times with a longer runtime? That way we could report the results as value ± error.

I'm happy to make a PR if this is valid.

PettingZoo Atari ".spec" module

Hello

I am a big fan of your repo. I am looking to see if there is a ".spec" for the Atari games. For instance, for gym.make('Pong-v0'), its spec is:

env = gym.make('Pong-v0')
dir(env.spec)
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_entry_point', '_env_name', '_kwargs', 'id', 'make', 'max_episode_steps', 'nondeterministic', 'reward_threshold', 'tags']

I am wondering if there is anything equivalent for the multi-agent Atari environments as well?

EnvLogger Functionality

Opening this up as a separate discussion thread for some questions related to the EnvLogger and functionality testing that just came up in my head.

@weepingwillowben:

  1. In general, what's the EnvLogger supposed to do?

  2. I briefly read that the logger is used to log whether step() or observe() is called before reset() has initially been called. In general, couldn't this be made obsolete by calling reset() right from the __init__()?

  3. What is test_api supposed to do? Intuitively I thought it only tests whether an environment complies with the common API design. But judging from reading it, it also tests internal functionality (e.g. here). I suggest letting test_api test against everything defined in the AECEnv (as opposed to logic defined in inheriting classes), e.g. return value types, legal call orders, etc.
    For that to be user friendly (and hence drive repo adoption), the expectations you test against should be outlined upfront in AECEnv.

  4. Reading test_api.py as well as test_chess.py, I was wondering if you are familiar with test frameworks (e.g. unittest, pytest). They could be helpful if you want structured testing.

Best
Clemens

Caching of dependencies in CI

I noticed that a large amount of time in CI is used while installing and building wheels (especially OpenCV). I think the caching option from Travis could be of great help in reducing the build time. Link here.

Does multiwalker only process/update a single agents observation in rllib pettingzoo_env.py?

Sorry again for asking an rllib question in the pettingzoo repo.

I am trying to trace down exactly how observations are computed for each agent in multiwalker when using the pettingzoo_env.py wrapper in rllib. It seems as though only one agent's observation per cycle would be updated in pettingzoo_env.step and all others would be stale and incorrect. Starting from that step function in rllib and working "backward" to observation functions in petting zoo, I trace the following lines of code:

This last line, which updates self.last_obs[agent_id], only gets called if is_last is True. This is set here:

But this makes it look like is_last is only True for the last agent in the cycle, which only occurs once in the cycle of agents where step is called in pettingzoo_env.py. All other agents which are not the last would then never have self.last_obs[agent] updated.

Therefore how are all the non-last agents' observations updated since _agent_selector.is_last() only returns True for one agent?

pursuit `train_pursuit` parameter

It is not clear why someone would even want to train evaders to evade randomly moving pursuers. This seems like a very easy task that requires no coordination, defeating the entire purpose.

Also, right now there are a few issues with it:

Setting the train_pursuit parameter to False in pursuit does change things in base_pursuit.py. However, the agents are still named pursuer_0, etc., and there are only 8 of them instead of 20.

This is not ideal. Also need to investigate whether evader removal is implemented correctly.

Hanabi: Obs & Action Space, Config Parameters, and Updating Docs/Docstrings

Hey fellas,

TLDR; does PettingZoo's hanabi_v0 have the exact same observation and action space as DeepMind's (DMs) hanabi-learning-environment (HLE), and does it properly handle config parameters?

I was trying to run some hanabi experiments and I just wanted to confirm a few things since I unfortunately haven't been able to find any thorough documentation on the observation and action space of DM's HLE.

I see PettingZoo's hanabi_v0 env wraps around DM's HLE. When I run the hanabi_v0 environment, however, I get an observation space size of 373 (as noted in your hanabi docs), and when I run DM's hanabi-learning-environment/examples/game_example.py, I get an observation space of size 658.

I dug around both codebases to find the differences and it looks like it's all due to the initial setup of the game. hanabi_v0 initializes the hand_size parameter to 2, whereas the default parameter in HLE is 5 (hanabi_lib/hanabi_game.cc#L147). If I pass {"hand_size": 2} as the argument in hanabi-learning-environment/examples/game_example.py#L113, I obtain a matching observation space of size 373.

I would like to set hand_size=5 in hanabi_v0, but I don't see the parameters listed as options in PettingZoo's hanabi docs. Does hanabi_v0 allow for the additional parameters? The function signature and docstrings in classic/hanabi/hanabi.py (hanabi.py#L54 and hanabi.py#L60, respectively) suggest it is possible to pass the arguments. I'm just not sure if this will break something under the hood.
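
For what it's worth, the signature referenced above suggests the config can simply be forwarded at construction time (untested sketch; whether this breaks anything under the hood is exactly the open question):

from pettingzoo.classic import hanabi_v0

# hypothetical: forward the HLE hand_size option through the env constructor
env = hanabi_v0.env(hand_size=5)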

Moreover, the docstrings in hanabi.py seem slightly incomplete and/or incorrect. For example, the "Common game configurations" do not match their HLE counterparts, e.g., "Hanabi-Small" in hanabi_v0's docstring in classic/hanabi/hanabi.py#92 vs. HLE's actual config in hanabi_learning_environment/rl_env.py#L552, among others.

But I digress..

I can't find any thorough documentation on HLE's observation and action space -- the PettingZoo hanabi docs are the most complete I have found.

I have looked through the DM's HLE code that defines the observation space (hanabi_lib/hanabi_observation.h, hanabi_lib/hanabi_observation.cc, and hanabi_lib/canonical_encoders.cc) and it seems very similar to what's defined in the hanabi docs.

I don't see any special/additional transformations of the observations in hanabi_v0, so I just want to make sure they are equivalent and ordered the same, even when changing hanabi_v0's configs (e.g., hand_size).

Thanks,

Jaime

Enable HTTPS on pettingzoo.ml

The website pettingzoo.ml is currently only accessible over HTTP. Enabling HTTPS access is a relatively straightforward thing to do nowadays (via, say, Let's Encrypt). Please could HTTPS be enabled for the domain?
