skylark0924 / rofunc

🤖 The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation

Home Page: https://rofunc.readthedocs.io

License: Apache License 2.0

Languages: Python 99.58%, HTML 0.17%, CMake 0.13%, Shell 0.11%, Batchfile 0.01%
Topics: optitrack, xsens, planning-algorithms, imitation-learning, learning-from-demonstration, robot, isaac-gym, humanoid-robots, reinforcement-learning-algorithms, robot-manipulation

rofunc's Introduction


Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation


Repository address: https://github.com/Skylark0924/Rofunc
Documentation: https://rofunc.readthedocs.io/

The Rofunc package focuses on Imitation Learning (IL), Reinforcement Learning (RL), and Learning from Demonstration (LfD) for (humanoid) robot manipulation. It provides valuable and convenient Python functions covering demonstration collection, data pre-processing, LfD algorithms, planning, and control methods. We also provide IsaacGym- and OmniIsaacGym-based robot simulators for evaluation. This package aims to advance the field by building a full-process toolkit and validation platform that simplifies and standardizes demonstration data collection, processing, learning, and deployment on real robots.

Citation

If you use rofunc in a scientific publication, we would appreciate citations to the following paper:

@software{liu2023rofunc,
          title = {Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation},
          author = {Liu, Junjia and Dong, Zhipeng and Li, Chenzui and Li, Zhihao and Yu, Minghao and Delehelle, Donatien and Chen, Fei},
          year = {2023},
          publisher = {Zenodo},
          doi = {10.5281/zenodo.10016946},
          url = {https://doi.org/10.5281/zenodo.10016946}
}

Warning

If our code is found to be used in a published paper without proper citation, we reserve the right to address this issue formally by contacting the editor to report potential academic misconduct.


Update News 🎉🎉🎉

v0.0.2.6 Support dexterous grasping and human-humanoid robot skill transfer

  • [2024-06-30] 🎉🚀 Human-level skill transfer from humans to heterogeneous humanoid robots has been completed and is awaiting release.
  • [2024-01-24] 🚀 CURI synergy-based SoftHand grasping tasks can now be trained with RofuncRL.
  • [2023-12-03] 🖼️ Segment-Anything (SAM) is supported in an interactive mode, check the examples in Visualab (segment anything, segment with prompt).
  • [2023-10-31] 🚀 RofuncRL, a modular, easy-to-use Reinforcement Learning sub-package designed for robot learning tasks, is released. It has been tested with simulators such as OpenAI Gym, IsaacGym, and OmniIsaacGym (see the example gallery), as well as differentiable simulators such as PlasticineLab and DiffCloth.
  • ...
  • If you want to know more about the update news, please refer to the changelog.

Installation

Please refer to the installation guide.

Documentation

Documentation Example Gallery

To give you a quick overview of the pipeline of rofunc, we provide an interesting example of learning to play Taichi from human demonstration. You can find it in the Quick start section of the documentation.
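
For a flavor of what such a pipeline looks like in code, here is a minimal, hypothetical sketch; rf.lqt.uni and rf.lqt.plot_3d_uni are illustrative names based on the module table below, not verified signatures, so check the example gallery for the exact API.

# Hypothetical pipeline sketch -- the function names below are
# assumptions based on the module table in this README; consult the
# documentation for the real signatures.
import numpy as np
import rofunc as rf

# 1. Load a demonstration exported from a capture device
#    (e.g. via the xsens.record / xsens.export utilities)
via_points = np.load("demo_trajectory.npy")  # shape: (n_steps, n_dims)

# 2. Plan a smooth trajectory that tracks the demonstration with LQT
u, x = rf.lqt.uni(via_points)

# 3. Visualize the planned trajectory
rf.lqt.plot_3d_uni(x)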

The available functions and plans can be found as follows.

Note ✅: Achieved 🔃: Reformatting ⛔: TODO

| Data | Learning | P&C | Tools | Simulator |
|------|----------|-----|-------|-----------|
| xsens.record | DMP | LQT | config | Franka |
| xsens.export | GMR | LQTBi | logger | CURI |
| xsens.visual | TPGMM | LQTFb | datalab | CURIMini 🔃 |
| opti.record | TPGMMBi | LQTCP | robolab.coord | CURISoftHand |
| opti.export | TPGMM_RPCtl | LQTCPDMP | robolab.fk | Walker |
| opti.visual | TPGMM_RPRepr | LQR | robolab.ik | Gluon 🔃 |
| zed.record | TPGMR | PoGLQRBi | robolab.fd | Baxter 🔃 |
| zed.export | TPGMRBi | iLQR 🔃 | robolab.id | Sawyer 🔃 |
| zed.visual | TPHSMM | iLQRBi 🔃 | visualab.dist | Humanoid |
| emg.record | RLBaseLine(SKRL) | iLQRFb 🔃 | visualab.ellip | Multi-Robot |
| emg.export | RLBaseLine(RLlib) | iLQRCP 🔃 | visualab.traj | |
| mmodal.record | RLBaseLine(ElegRL) | iLQRDyna 🔃 | oslab.dir_proc | |
| mmodal.sync | BCO(RofuncIL) 🔃 | iLQRObs 🔃 | oslab.file_proc | |
| | BC-Z(RofuncIL) | MPC | oslab.internet | |
| | STrans(RofuncIL) | RMP | oslab.path | |
| | RT-1(RofuncIL) | | | |
| | A2C(RofuncRL) | | | |
| | PPO(RofuncRL) | | | |
| | SAC(RofuncRL) | | | |
| | TD3(RofuncRL) | | | |
| | CQL(RofuncRL) | | | |
| | TD3BC(RofuncRL) | | | |
| | DTrans(RofuncRL) | | | |
| | EDAC(RofuncRL) | | | |
| | AMP(RofuncRL) | | | |
| | ASE(RofuncRL) | | | |
| | ODTrans(RofuncRL) | | | |

RofuncRL

RofuncRL is one of the most important sub-packages of Rofunc: a modular, easy-to-use Reinforcement Learning sub-package designed for robot learning tasks. It has been tested with simulators such as OpenAI Gym, IsaacGym, and OmniIsaacGym (see the example gallery), as well as differentiable simulators such as PlasticineLab and DiffCloth. Here is a list of robot tasks trained with RofuncRL:

Note
You can customize your own project based on RofuncRL by following the RofuncRL customize tutorial.
We also provide a RofuncRL-based repository template for generating your own repository that follows the RofuncRL structure with one click.
For more details, please check the documentation for RofuncRL.
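
As a sketch of how a training script is typically structured, here is a minimal, hypothetical example; get_config, make_env, and PPOTrainer are assumed names for illustration, not the verified RofuncRL API, so see the documentation and the example scripts for the real entry points.

# Hypothetical RofuncRL training sketch -- helper names are assumptions.
import rofunc as rf

# 1. Compose the task + algorithm configuration (Hydra-style configs)
cfg = rf.config.get_config(task="CURICabinet", train="CURICabinetPPO")

# 2. Build the (vectorized, GPU-side) simulation environment
env = rf.learning.make_env(cfg)

# 3. Hand config and env to a trainer, then train and evaluate
trainer = rf.learning.PPOTrainer(cfg=cfg, env=env, device="cuda:0")
trainer.train()  # run the full training loop and save checkpoints
trainer.eval()   # roll out the learned policy for inspection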

The list of all supported tasks (the Animation, Performance, and ModelZoo columns of the original table contain demo clips and pre-trained model links; only the task names are reproduced here):

  • Ant
  • Cartpole
  • FrankaCabinet
  • FrankaCubeStack
  • CURICabinet
  • CURICabinetImage
  • CURICabinetBimanual
  • CURIQbSoftHandSynergyGrasp
  • Humanoid
  • HumanoidAMP: Backflip, Walk, Run, Dance, Hop
  • HumanoidASE: GetupSwordShield, PerturbSwordShield, HeadingSwordShield, LocationSwordShield, ReachSwordShield, StrikeSwordShield
  • BiShadowHand: BlockStack, BottleCap, CatchAbreast, CatchOver2Underarm, CatchUnderarm, DoorOpenInward, DoorOpenOutward, DoorCloseInward, DoorCloseOutward, GraspAndPlace, LiftUnderarm, HandOver, Pen, PointCloud, PushBlock, ReOrientation, Scissors, SwingCup, Switch, TwoCatchUnderarm

Star History

Star History Chart

Related Papers          

  1. Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects (IEEE RA-L 2022 | Code)
@article{liu2022robot,
         title={Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects},
         author={Liu, Junjia and Chen, Yiting and Dong, Zhipeng and Wang, Shixiong and Calinon, Sylvain and Li, Miao and Chen, Fei},
         journal={IEEE Robotics and Automation Letters},
         volume={7},
         number={2},
         pages={5159--5166},
         year={2022},
         publisher={IEEE}
}
  2. SoftGPT: Learn Goal-oriented Soft Object Manipulation Skills by Generative Pre-trained Heterogeneous Graph Transformer (IROS 2023 | Code coming soon)
@inproceedings{liu2023softgpt,
               title={Softgpt: Learn goal-oriented soft object manipulation skills by generative pre-trained heterogeneous graph transformer},
               author={Liu, Junjia and Li, Zhihao and Lin, Wanyu and Calinon, Sylvain and Tan, Kay Chen and Chen, Fei},
               booktitle={2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
               pages={4920--4925},
               year={2023},
               organization={IEEE}
}
  3. BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration (IEEE CDC 2023 | Code)
@inproceedings{liu2023birp,
               title={Birp: Learning robot generalized bimanual coordination using relative parameterization method on human demonstration},
               author={Liu, Junjia and Sim, Hengyi and Li, Chenzui and Tan, Kay Chen and Chen, Fei},
               booktitle={2023 62nd IEEE Conference on Decision and Control (CDC)},
               pages={8300--8305},
               year={2023},
               organization={IEEE}
}

The Team

Rofunc is developed and maintained by the CLOVER Lab (Collaborative and Versatile Robots Laboratory), CUHK.

Acknowledgements

We would like to acknowledge the following projects:

Learning from Demonstration

  1. pbdlib
  2. Ray RLlib
  3. ElegantRL
  4. SKRL
  5. DexterousHands

Planning and Control

  1. Robotics codes from scratch (RCFS)

rofunc's People

Contributors

1am5hy, ddonatien, drawzeropoint, hawkeex, lee950507, reichenbar, rip4kobe, skylark0924, yyyyyt123, zainzh, zhihaoairobotic


rofunc's Issues

Encountered error while trying to install package: box2d-py

When running the installation command

pip install -r requirements.txt

you may get the following error:

  Running setup.py install for box2d-py ... error
  error: subprocess-exited-with-error
  
  × Running setup.py install for box2d-py did not run successfully.
  │ exit code: 1
  ╰─> [18 lines of output]
      Using setuptools (version 65.6.3).
      running install
      /home/clover/anaconda3/envs/rofunc/lib/python3.8/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
        warnings.warn(
      running build
      running build_py
      creating build
      creating build/lib.linux-x86_64-cpython-38
      creating build/lib.linux-x86_64-cpython-38/Box2D
      copying library/Box2D/Box2D.py -> build/lib.linux-x86_64-cpython-38/Box2D
      copying library/Box2D/__init__.py -> build/lib.linux-x86_64-cpython-38/Box2D
      creating build/lib.linux-x86_64-cpython-38/Box2D/b2
      copying library/Box2D/b2/__init__.py -> build/lib.linux-x86_64-cpython-38/Box2D/b2
      running build_ext
      building 'Box2D._Box2D' extension
      swigging Box2D/Box2D.i to Box2D/Box2D_wrap.cpp
      swig -python -c++ -IBox2D -small -O -includeall -ignoremissing -w201 -globals b2Globals -outdir library/Box2D -keyword -w511 -D_SWIG_KWARGS -o Box2D/Box2D_wrap.cpp Box2D/Box2D.i
      error: command 'swig' failed: No such file or directory
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure

× Encountered error while trying to install package.
╰─> box2d-py

note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
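
(Note: the root cause is the missing swig binary, which box2d-py invokes at build time to generate its C++ wrapper. Installing SWIG system-wide first, e.g. sudo apt-get install swig on Ubuntu, and then re-running the requirements install should resolve it; this is a suggested fix based on the error output above, not an officially documented step.)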

Teaching learning

Thank you. I would like to ask what learning materials or papers are available for learning from demonstration. Of course, your code is itself very good reference material.

Xsens document

The documentation for the Xsens suit and Manus gloves needs to be finished.

KeyError 'jL5S1' in constructing custom human model from Xsens data

Hello authors. After successfully reproducing the relevant rofunc functionality, we recently started using the rofunc framework to process our own data. When we model our recorded Xsens data with the code from "Construct custom human model from Xsens data", we get KeyError: 'jL5S1'. Could this be caused by a slightly different Xsens version? Is there a corresponding data pre-processing solution? The specific error and the behavior of the isaacgym module are shown in the screenshots below; the Isaac Gym window shows a shadow of the robot, but the window cannot be moved, and clicking any button triggers an "isaacgym is not responding" message.

(screenshots: sunlogin_20230925145920, sunlogin_20230925150458, sunlogin_20230925150512)
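
(Note: jL5S1 appears to be the L5-S1 lower-spine joint in the Xsens MVN skeleton naming, so the KeyError plausibly indicates that the export uses a different MVN version or a reduced joint set that omits this joint; this is an assumption from the naming convention, not a confirmed diagnosis.)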

LQT document

  • LQT
  • LQT-bimanual
  • LQT-control primitive
  • LQT-feedback

BCO document

The documentation for behavior cloning from observation (BCO) needs to be finished.

Regarding the difference between the TP-GMM implementation using MATLAB and Python

Dear authors,

Hello, thanks for your great work. I am working on the improvement of TP-GMM proposed by Sylvain Calinon. I need to implement it using Python and find the Rofunc package useful. I have several questions regarding the TP-GMM implementation.

  1. I find that you and Sylvain both implement TP-GMM using an HMM instead of a GMM, which is not consistent with its definition and the MATLAB version. I know an HMM certainly works here, but can you explain why you use it rather than a GMM?
  2. You use the marginal_model method to get the local GMM in each frame. Looking at its definition, the local model has the same priors as the original model. But when the original HMM is trained via EM, the priors are not updated; only the init_priors are. So I think the priors of the local GMMs obtained this way are not correct.
  3. In the reproduce method, why do you use LQR for estimation instead of using GMR directly? The same question applies to the uni method of your TP-GMR implementation.

I initially referred to the MATLAB version of TP-GMM provided by Sylvain. It is more detailed and consistent with the description in his papers. I do not understand why the Python implementation differs slightly from the MATLAB one. The Rofunc package inherits this difference through its use of the pbd package. I am confused about why not just use a GMM to implement TP-GMM, hence the questions above.

Thanks for your answers in advance.

libcublasLt.so.11 not defined

After installing from source, here is the error:

(rofunc) ubuntu@ymh:~/Rofunc/rofunc/examples/simulator$ python curi_track_traj.py
Importing module 'gym_38' (/home/ubuntu/anaconda3/envs/rofunc/lib/python3.8/site-packages/isaacgym/_bindings/linux-x86_64/gym_38.so)
Setting GYM_USD_PLUG_INFO_PATH to /home/ubuntu/anaconda3/envs/rofunc/lib/python3.8/site-packages/isaacgym/_bindings/linux-x86_64/usd/plugInfo.json
Traceback (most recent call last):
  File "curi_track_traj.py", line 10, in <module>
    import rofunc as rf
  File "/home/ubuntu/Rofunc/rofunc/__init__.py", line 13, in <module>
    from .lfd import ml, dl, rl
  File "/home/ubuntu/Rofunc/rofunc/lfd/rl/__init__.py", line 3, in <module>
    from rofunc.lfd.rl.online import PPOAgent
  File "/home/ubuntu/Rofunc/rofunc/lfd/rl/online/__init__.py", line 1, in <module>
    from .ppo_agent import PPOAgent
  File "/home/ubuntu/Rofunc/rofunc/lfd/rl/online/ppo_agent.py", line 12, in <module>
    import torch
  File "/home/ubuntu/anaconda3/envs/rofunc/lib/python3.8/site-packages/torch/__init__.py", line 191, in <module>
    _load_global_deps()
  File "/home/ubuntu/anaconda3/envs/rofunc/lib/python3.8/site-packages/torch/__init__.py", line 153, in _load_global_deps
    ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
  File "/home/ubuntu/anaconda3/envs/rofunc/lib/python3.8/ctypes/__init__.py", line 373, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /home/ubuntu/anaconda3/envs/rofunc/lib/python3.8/site-packages/torch/lib/../../nvidia/cublas/lib/libcublas.so.11: symbol cublasLtHSHMatmulAlgoInit version libcublasLt.so.11 not defined in file libcublasLt.so.11 with link time reference
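
(Note: a symbol-version mismatch like this usually indicates that two different cuBLAS copies are being mixed, e.g. a pip-installed nvidia-cublas-cu11 shadowed by a system CUDA installation or an LD_LIBRARY_PATH entry. A plausible remedy, offered as an assumption rather than a verified fix, is to reinstall a PyTorch build matching a single CUDA toolkit and remove the conflicting library paths so only one libcublasLt.so.11 is visible to the loader.)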

Problem in generation of new trajectories using learned models (TPGMMRPCtrl)

I am comparing learning with TPGMMBi and TPGMMRPCtrl, as shown in Figure 3 of the BiRP paper. I am using the data provided in the Rofunc example (example_tpgmmbi_RPCtrl.py), but I am having problems generating new trajectories from a model that has already been learned.

The learning process works correctly in both cases, but when I call Representation.generate() with new task-parameter data, TPGMMRPCtrl does not work correctly. First of all, I made a modification, because with the code as it originally was, the reproduction of the demonstrations always came out instead. This was because lqr.x0_l, lqr.x0_r, and lqr.x0_c inside _bi_reproduce() were always filled with the values of self.repr_l.demos_xdx[show_demo_idx][0], self.repr_r.demos_xdx[show_demo_idx][0], and self.repr_c.demos_xdx[show_demo_idx][0] respectively, so only the demo data were used as initial data (as happens when using Repr.reproduce()). To solve this, when calling reproduce I used the data stored in the task parameters, as shown in the screenshot.

(screenshot A)

The value of lqr.x0_c was also modified to use self.repr_c.task_params["frame_origins"][show_demo_idx] (important: here I do not need the [0] at the end, because of how the value is generated, but the structure is the same as for the other task values). This value was generated with the code in the following image (lines 491-493). By default, self.repr_c.demos_xdx[show_demo_idx][0] was used, which produced incorrect values, since it uses the relation from the demo data, and that relation changes when we want to use other parameters. The code shown in the image instead computes the relation between r and l from the task-parameter data used in Repr.generate().

(screenshot B)

With these modifications, the new trajectories generated from the learned models now take the initial data (stored in the X0 values) into account, but the end points of the tasks are still not taken into account. My question is whether I am applying Rofunc incorrectly, or whether this is an internal bug that needs to be fixed.
I attach visual results as well as the trajectory values, where it can be seen that the initial point is taken into account but the final point is not.

Numerical comparison for the left trajectory; the result shows that the initial point is correctly taken into account but the final point is not:

(screenshot C)

Data generated with the initial demos; this is produced by Repr.reproduce() and is correct:

(screenshot D)

Data generated with new task parameters; this is produced by Repr.generate() and is incorrect:

(screenshot E)

Finally, the part of the code where I call Repr.generate():

(screenshot G)

Thank you very much for your help, and sorry for the very long issue.

URDF document

Write the URDF documentation in Markdown and save it in doc/source/simulator.

Problems with Stable Version and Nightly version

Good morning, my name is Adrián Prados, a PhD student at Universidad Carlos III de Madrid. I am currently working on imitation learning, and I would like to use your framework both to work with your datasets and to implement my own imitation learning algorithms. I have tried to install both versions, stable and nightly, and both have given me problems. For the installation I followed your steps.
There are problems in the initialization of the different simulators: there are repeated functions that contradict each other (the __init__() function is defined several times in the same class, causing problems), and the function calls have also changed with respect to the documentation. I was wondering whether this is something temporary or some kind of bug, and whether I can at least start working with the stable version.

Additionally, I would also like to point out that the documentation is quite incomplete, and there is not much information on how to start working with this framework in more depth. I don't know whether it is still under development, or whether this is a problem between versions or a bug.

In any case, I would like to know if there is any way to resolve this so that I can start working with this framework, as it seems interesting, useful, and relatively quick to use.

Rofunc RL

Online RL algorithms

  • PPO
  • SAC
  • TD3
  • A2C

Offline RL algorithms

  • CQL
  • Decision-Transformer
  • EDAC
  • TD3BC

Mixline RL algorithms

  • AMP
  • ASE
  • Online Decision Transformer
  • SoftGPT

dependency conflicts

Hello authors. I have recently been trying to reproduce the simulator part of rofunc. Installing isaacgym-1.0rc4-py3-none-any.whl involves installing tensorflow, and I get the error "ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. filelock 3.12.3 requires typing-extensions>=4.7.1; python_version < "3.11", but you have typing-extensions 4.5.0 which is incompatible.", indicating that typing-extensions must be >= 4.7.1. However, after installing typing-extensions 4.7.1 I get "ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. tensorflow 2.13.0 requires typing-extensions<4.6.0,>=3.6.6, but you have typing-extensions 4.7.1 which is incompatible.", which asks for a lower version again. Is there a known solution for this, or is there a problem with my installation of the isaacgym module? Thank you very much for reading this far!
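
(Note: the two pins are genuinely incompatible: filelock 3.12.3 requires typing-extensions>=4.7.1 on Python < 3.11, while tensorflow 2.13.0 requires <4.6.0. A plausible workaround, offered as an assumption rather than a tested recipe, is to downgrade filelock to an older release that accepts typing-extensions 4.5.0, or to install isaacgym in an environment without tensorflow 2.13.)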

npy files in LQT with control primitives and DMP

Hello authors. I recently wanted to process our data with the code from "LQT with control primitives and DMP". After some experiments I found that the example file S.npy used in this section is a 3-D numpy array, whereas the .npy files produced by the earlier rofunc data-processing steps are 2-D numpy arrays. If I want to run "LQT with control primitives and DMP", how should the 2-D .npy files obtained earlier be pre-processed? My own knowledge of data processing is somewhat lacking; thank you very much for the help this project provides!
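
(If the mismatch is simply that one demonstration is stored as a 2-D array, a plausible pre-processing step, an assumption based on the description above rather than a confirmed recipe, is to add a leading demo axis so the data become (n_demos, n_steps, n_dims):)

# Lift a single 2-D demo (n_steps, n_dims) to 3-D (1, n_steps, n_dims).
import numpy as np

demo = np.load("my_demo.npy")         # shape: (n_steps, n_dims)
demos = np.expand_dims(demo, axis=0)  # shape: (1, n_steps, n_dims)
np.save("my_demo_3d.npy", demos)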

Can't find libpython3.8.so.1.0

After installing from source, running an example directly produces the following error:

ImportError: libpython3.8.so.1.0: cannot open shared object file: No such file or directory
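
(Note: this typically means the dynamic loader cannot find the conda environment's libpython. A common workaround, hedged since the exact path depends on your setup, is to add the environment's lib directory to the loader path before running the example, e.g. export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib.)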

iLQR document

  • iLQR
  • iLQR-CoM
  • iLQR-feedback
  • iLQR-control primitive
  • iLQR-dynamics
  • iLQR-bimanual

Running the Franka simulator failed

I want to use the trajectory-planning-with-obstacles functionality from your library, but I ran into a problem.

Here is 1.py:

import rofunc as rf
from isaacgym import gymutil

args = gymutil.parse_arguments()
args.use_gpu_pipeline = False

frankasim = rf.sim.FrankaSim(args)
frankasim.show()

Running it produces an error (screenshot attached).

I am not sure whether my installation is correct: I used conda with Python 3.8 on Ubuntu 22.04, and installed rofunc via python setup.py install.
These scripts ran successfully:

sh ./scripts/install.sh
# [Option] Install with baseline RL frameworks (SKRL, RLlib, Stable Baselines3)
sh ./scripts/install_w_baselines.sh

Could you help me?

Simulator refactor

  • An easier-to-use simulator base class with cleaner logic, making it suitable both for constructing RL tasks and for CPU-based planning (see the sketch after this list)
  • Update all robot simulator classes
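
As a design illustration only (not Rofunc code), such a base class could separate the physics-stepping interface needed by RL tasks from the state queries needed by CPU-based planners:

# Hypothetical design sketch for the refactored simulator base class.
from abc import ABC, abstractmethod

class RobotSim(ABC):
    """Minimal simulator interface: RL wrappers call reset/step,
    while CPU-based planners only need the state queries."""

    def __init__(self, args):
        self.args = args

    @abstractmethod
    def reset(self):
        """Reset the scene and return the initial observation."""

    @abstractmethod
    def step(self, action):
        """Advance physics one step; return (obs, reward, done, info)."""

    @abstractmethod
    def get_robot_state(self):
        """Return joint positions/velocities for CPU-side planners."""

    def show(self):
        """Run the interactive viewer loop (optional override)."""
        raise NotImplementedError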

Re-organize human embodiment demonstration data processing

Human embodiment demonstration data contains human whole-body motion and motion of the object being manipulated

  • Re-organize the datalab/poselib to fit the Xsens suit, ensuring that every joint is mapped properly
  • Re-align the object motion data from Optitrack with the human whole-body motion data in device/mmodal
  • Re-align the object motion with the human motion in the Isaac Gym simulator so that it conforms to the physics simulation, and provide a gap evaluation (the object motion should be the consequence of human manipulation)

Finish LQT variants

  • LQT
  • LQT feedback / recursive form
  • LQT with control primitive
  • LQT with control primitive in a feedback / recursive form by using DMP (a sketch of the shared batch LQT core follows this list)
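
All of these variants build on the same batch LQT core; as a reference point, here is a minimal numpy sketch of that core for a 1-D double integrator (an illustration independent of Rofunc's API):

# Batch LQT: stack the dynamics as X = Sx @ x0 + Su @ U and minimize
# (X - mu)^T Q (X - mu) + U^T R U, which has a closed-form solution.
import numpy as np

dt, T = 0.01, 100
A = np.array([[1.0, dt], [0.0, 1.0]])  # 1-D double integrator
B = np.array([[0.0], [dt]])            # state: [position, velocity]
nx, nu = 2, 1

# Powers of A: P[t] = A^t
P = [np.eye(nx)]
for _ in range(T):
    P.append(A @ P[-1])

# Transfer matrices: x_t = A^t x0 + sum_k A^(t-1-k) B u_k
Sx = np.zeros((T * nx, nx))
Su = np.zeros((T * nx, T * nu))
for t in range(1, T + 1):
    Sx[(t - 1) * nx:t * nx, :] = P[t]
    for k in range(t):
        Su[(t - 1) * nx:t * nx, k * nu:(k + 1) * nu] = P[t - 1 - k] @ B

# Sparse reference: pass through 0.5 mid-way, end at 1.0
# (only positions are weighted; velocities are left free)
x0 = np.array([0.0, 0.0])
mu = np.zeros(T * nx)
Q = np.zeros((T * nx, T * nx))
for t_key, pos in [(T // 2, 0.5), (T - 1, 1.0)]:
    mu[t_key * nx] = pos
    Q[t_key * nx, t_key * nx] = 1e3
R = 1e-2 * np.eye(T * nu)

# Closed-form optimal open-loop controls and resulting trajectory
U = np.linalg.solve(Su.T @ Q @ Su + R, Su.T @ Q @ (mu - Sx @ x0))
X = (Sx @ x0 + Su @ U).reshape(T, nx)
print(X[T // 2, 0], X[-1, 0])  # both close to the 0.5 and 1.0 targets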

Connect Rofunc & IsaacGym with RLlib

RLlib is a powerful reinforcement learning framework that provides plenty of baseline RL algorithm implementations.

  • Standard gym-based RL environment using Isaac Gym
  • Simple scenarios to train
