
This repository is in maintenance mode as of 15 Aug. 2019. See Project Status for details.

Snorkel MeTaL


v0.5.0

Snorkel MeTaL is the multi-task learning (MTL) extension of Snorkel prior to Snorkel v0.9, at which point the projects were merged.


Project Status

The Snorkel project is more active than ever! With the release of Snorkel v0.9 in Aug. 2019, we added support for new training data operators (transformation functions and slicing functions, in addition to labeling functions), ported the label model algorithm first introduced in Snorkel MeTaL, added a Snorkel webpage with additional resources and a fresh batch of tutorials, simplified the installation options, and more.

As part of that major release, we integrated the best parts of Snorkel MeTaL back into the main Snorkel repository (including flexible MTL modeling), and improved upon many of them. For those starting new projects in Snorkel, we strongly recommend building on top of the main Snorkel repository.

At the same time, we recognize that many users built successful applications and extensions on Snorkel MeTaL. For that reason, we will continue to make that code available in this repository. However, this repository is officially in maintenance mode as of 15 Aug. 2019. We intend to keep the repository functioning with its current feature set to support existing applications built on it but will not be adding any new features or functionality.

If you would like to stay informed of progress in the Snorkel open source project, join the Snorkel email list for relatively rare announcements (e.g., major releases, new tutorials, etc.) or the Snorkel community forum on Spectrum for more regular discussion.

Motivation

This project builds on Snorkel in an attempt to understand how massively multi-task supervision and learning changes the way people program. Multi-task learning (MTL) is an established technique that effectively pools samples by sharing representations across related tasks, leading to better performance with less training data (for a great primer on recent advances, see this survey). However, most existing multi-task systems rely on two or three fixed, hand-labeled training sets. Instead, weak supervision opens the floodgates, allowing users to add arbitrarily many weakly-supervised tasks. We call this setting massively multi-task learning, and envision models with tens or hundreds of tasks with supervision of widely varying quality. Our goal with the Snorkel MeTaL project is to understand this new regime, and the programming model it entails.

More concretely, Snorkel MeTaL is a framework for using multi-task weak supervision (MTS), provided by users in the form of labeling functions applied over unlabeled data, to train multi-task models. Snorkel MeTaL can use the output of labeling functions developed and executed in Snorkel, or take in arbitrary label matrices representing weak supervision from multiple sources of unknown quality, and then use this to train auto-compiled MTL networks.

Snorkel MeTaL uses a new matrix approximation approach to learn the accuracies of diverse sources with unknown accuracies, arbitrary dependency structures, and structured multi-task outputs. This makes it significantly more scalable than our previous approaches.

Installation

[1] Install Anaconda: instructions at https://www.anaconda.com/download/

[2] Clone the repository:

git clone https://github.com/HazyResearch/metal.git
cd metal

[3] Create virtual environment:

conda env create -f environment.yml
source activate metal

[4] Run unit tests:

nosetests

If the tests run successfully, you should see 50+ dots followed by "OK". Check out the tutorials to get familiar with the Snorkel MeTaL codebase!

Or, to use Snorkel MeTaL in another project, install it with pip:

pip install snorkel-metal

References

Blog Posts

Q&A

If you are looking for help regarding how to use a particular class or method, the best references are (in order):

  • The docstrings for that class
  • The MeTaL Commandments
  • The corresponding unit tests in tests/
  • The Issues page (We tag issues that might be particularly helpful with the "reference question" label)

Sample Usage

This sample is for a single-task problem. For a multi-task example, see tutorials/Multitask.ipynb.

"""
n = # data points
m = # labeling functions
k = cardinality of the classification task

Load for each split:
L: an [n,m] scipy.sparse label matrix of noisy labels
Y: an n-dim numpy.ndarray of target labels
X: an n-dim iterable (e.g., a list) of end model inputs
"""

from metal.label_model import LabelModel, EndModel

# Train a label model and generate training labels
label_model = LabelModel(k)
label_model.train_model(L_train)
Y_train_probs = label_model.predict_proba(L_train)

# Train a discriminative end model with the generated labels
end_model = EndModel([1000,10,2])
end_model.train_model(train_data=(X_train, Y_train_probs), valid_data=(X_dev, Y_dev))

# Evaluate performance
score = end_model.score(data=(X_test, Y_test), metric="accuracy")

Note for Snorkel users: Snorkel MeTaL, even in the single-task case, learns a slightly different label model than Snorkel does (e.g. here we learn class-conditional accuracies for each LF, etc.)---so expect slightly different (hopefully better!) results.

Release Notes

Major changes in v0.5:

  • Introduction of Massive Multi-Task Learning (MMTL) package in metal/mmtl/ with tutorial.
  • Additional logging improvements from v0.4

Major changes in v0.4:

  • Upgrade to pytorch v1.0
  • Improved control over logging/checkpointing/validation
    • More modular code, separate Logger, Checkpointer, LogWriter classes
    • Support for user-defined metrics for validation/checkpointing
    • Logging frequency can now be based on seconds, examples, batches, or epochs
  • Naming convention change: hard (int) labels -> preds, soft (float) labels -> probs

Developer Guidelines

First, read the MeTaL Commandments, which describe the major design principles, terminology, and style guidelines for Snorkel MeTaL.

If you are interested in contributing to Snorkel MeTaL (and we whole-heartedly welcome contributions via pull requests!), follow the setup guidelines above, then run the following additional command:

make dev

This will install a few additional tools that help to ensure that any commits or pull requests you submit conform with our established standards. We use the following packages:

  • isort: import standardization
  • black: automatic code formatting
  • flake8: PEP8 linting

After running make dev to install the necessary tools, you can run make check to see if any changes you've made violate the repo standards and make fix to automatically fix any violations related to isort/black. Fixes for flake8 violations will need to be made manually.

GPU Usage

MeTaL supports GPU usage, but does not include this in automatically-run tests; to run these tests, first install the requirements in tests/gpu/requirements.txt, then run:

nosetests tests/gpu

Contributors

agnusmaximus, ajratner, bhancock8, chmccreery, danich1, dliangsta, inimino, jason-fries, jay2113853, jdunnmon, nishithbsk, paroma, phiradet, senwu, vincentschen


Issues

Easy support for searching over all EndModel sub-module hyperparams

E.g., handle model search not just over EndModel hyperparams, but also over the hyperparams of, e.g., an LSTMModule used as the input layer.

Right now this can be accomplished using a wrapper function, but it would be nice to have a cleaner interface, e.g. one where the model and every sub-module class are passed in and then recursively initialized on each iteration of the Tuner search. (A rough sketch of the wrapper-function workaround is below.)
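
A rough, hypothetical sketch of the wrapper-function workaround mentioned above; the flat config dict and the import paths are illustrative assumptions, not the actual Tuner interface.

# Hypothetical wrapper: build an LSTMModule + EndModel from one flat config dict,
# so a tuner that varies a single dict can also reach sub-module hyperparams.
# Import paths are assumptions based on the tutorials; adjust to your install.
from metal.end_model import EndModel
from metal.modules import LSTMModule

def build_model(config):
    lstm = LSTMModule(
        embed_size=config["embed_size"],                     # sub-module hyperparams
        hidden_size=config["hidden_size"],
        vocab_size=config["vocab_size"],
        lstm_reduction=config.get("lstm_reduction", "max"),
        dropout=config.get("dropout", 0.0),
        num_layers=config.get("num_layers", 1),
    )
    # EndModel hyperparams live in the same config dict
    return EndModel(config["layer_out_dims"], input_module=lstm, seed=config.get("seed", 123))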

Issue with OSX

Hi,

I got this issue when running MeTaL on Mac OS X.

E   RuntimeError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends. If you are using (Ana)Conda please install python.app and replace the use of 'python' with 'pythonw'. See 'Working with Matplotlib on OSX' in the Matplotlib FAQ for more information.

There are some fixes online that use either a different version of Python or a modified Python, but they are not ideal. Is there a proper way to get rid of this issue?

cuda training support in classifier

The Classifier in metal inherits from nn.Module and has its own internal _train() method. This works on the CPU but does not support CUDA training, because the Classifier.to(device) pattern from PyTorch is ignored in the _train() code and the data is never sent to the GPU.

It looks like the _train() code needs updating (or other design choices need to be made) to allow for clean CUDA training; a sketch of the standard pattern is below.
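
For reference, a minimal, self-contained sketch of the standard PyTorch device-handling pattern that _train() would need; the model, data, and loop here are illustrative stand-ins, not MeTaL code.

import torch
import torch.nn as nn

# Move the model and every batch to the same device before computing the loss.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2).to(device)                 # stand-in for a MeTaL Classifier
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

X = torch.randn(32, 10)
Y = torch.randint(0, 2, (32,))

for X_batch, Y_batch in [(X, Y)]:                   # stand-in for a DataLoader
    X_batch, Y_batch = X_batch.to(device), Y_batch.to(device)
    opt.zero_grad()
    loss = loss_fn(model(X_batch), Y_batch)
    loss.backward()
    opt.step()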

metric_score function should require pos_label value passed in

When I tested the Basics tutorial, I realized the metric value reported by the score function is not right. The precision_score, recall_score, and f1_score functions have the pos_label argument set to 1 by default. In this case, the labels passed to the score function are a mix of 1s and 2s, so pos_label should be set to 2; otherwise the metric is calculated incorrectly.

Regarding MajorityLabelVoter, predict_proba could be sum(label_function_output) / total_label_function_count. This would generate a probabilistic label: if k = 2, it could be a continuous value in [0, 1] instead of 0 or 1, which makes it possible to compare against MeTaL's predict_proba values when drawing AUC curves. A ranking over labels is more useful than a single set of precision/recall/F1 scores, which assume a hard probability threshold (0.5 when k = 2).
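
For illustration, a plain scikit-learn example of the pos_label point above (this is not MeTaL's internal metric code; the labels are made up):

from sklearn.metrics import f1_score, precision_score, recall_score

# With MeTaL's 1/2 label convention, the sklearn default pos_label=1 may score
# the wrong class; pass pos_label explicitly for the class you care about.
y_true = [2, 2, 1, 2, 1, 2]
y_pred = [2, 1, 1, 2, 1, 2]

print(precision_score(y_true, y_pred, pos_label=2))
print(recall_score(y_true, y_pred, pos_label=2))
print(f1_score(y_true, y_pred, pos_label=2))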

On Jupyter notebook: Model Training Progress Bar breaks when you stop the kernel

Duplication steps (in a jupyter notebook):

  1. In a cell: create an EndModel and start training it
  2. click on the stop button while the model is training (the progress bars are correctly showing training is going on)
  3. rerun the cell.
  4. You'll see that the progress bar is now broken: it's printing multiple lines saying there's 0% progress.

@jdunnmon mentioned the solution is to use 'from tqdm import tqdm_notebook as tqdm' in metal/metal/classifier.py.

Is there any reason we aren't using it?

https://stackoverflow.com/questions/42212810/tqdm-in-jupyter-notebook
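
A hedged sketch of the suggested swap, with a fallback for non-notebook environments; the get_ipython check is a common heuristic, not part of MeTaL.

# Pick the notebook-friendly tqdm when running under Jupyter, otherwise fall
# back to the console version.
try:
    get_ipython  # defined only inside IPython/Jupyter
    from tqdm import tqdm_notebook as tqdm
except NameError:
    from tqdm import tqdm

for _ in tqdm(range(100)):
    pass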

Mac OS import issue

With snorkel-metal 0.2.3, we are experiencing this issue again. Any idea how to fix it?

    from matplotlib.backends import _macosx
E   ImportError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends. If you are using (Ana)Conda please install python.app and replace the use of 'python' with 'pythonw'. See 'Working with Matplotlib on OSX' in the Matplotlib FAQ for more information.

Embedding layer doesn't respect freeze parameter setting

Setting freeze=True when initializing the LSTMModule should freeze the weight params of the nn.Embedding layer object. However, embedding weights seem to update regardless of this setting's value.

The following code randomly initializes and freezes embedding weights.

lstm = LSTMModule(embed_size=50, 
                  hidden_size=100, 
                  vocab_size=train.word_dict.len(), 
                  embeddings=None, 
                  lstm_reduction='max', 
                  dropout=0.0, 
                  num_layers=1, 
                  freeze=True)

end_model = EndModel([200, 2], input_module=lstm, seed=123, use_cuda=False)

# print some embedding values
print(lstm.embeddings(torch.LongTensor([284])))

# train
end_model.train_model(train, dev_data=dev)

# print updated embedding values
print(lstm.embeddings(torch.LongTensor([284])))

The code snippet above generates this example output:

tensor([[ 0.0884, -2.0748, -0.9850,  0.7471, -0.7564, -0.6341,  0.1372, -0.1688,
          0.7940,  1.0784, -0.8383,  0.1061,  0.8362, -0.7522,  0.5439,  0.2746,
          0.2073, -0.9556,  0.6508, -1.0279, -0.9870,  0.8442, -0.3556, -0.2821,
         -0.4983,  2.3405,  1.8272,  0.9220, -0.9168, -0.4950,  1.4369,  1.0831,
          0.0357, -0.3696, -1.8521,  0.5179, -0.5741,  0.2665,  1.7202,  0.6760,
          1.0778,  1.9969, -0.0641, -1.1610, -0.8505,  0.1784,  2.1454,  1.0016,
         -0.5522,  0.4192]])

tensor([[-0.5792, -0.2044,  0.0411,  1.7785,  0.2299, -0.1451, -1.0534,  0.0911,
         -0.2252, -0.0150,  0.5948,  0.2380, -0.1490,  1.3786,  0.4968,  0.6150,
          2.0698, -1.7208,  0.0183,  0.0287,  1.8081,  0.3915, -0.1944,  0.5217,
          1.4828, -1.3012,  0.1629,  0.8666,  0.6013, -0.6721, -1.2223, -0.0163,
          1.7144, -1.2808,  0.2188, -0.2372, -1.6231, -0.7394, -1.1450,  0.3411,
          0.7713,  0.6864, -0.5711,  1.4876, -1.8473, -0.5141, -0.3369,  0.8294,
          0.4968, -0.6156]])

Setting L2 regularization (via end_model.implements_l2) doesn't change anything. Manually setting embeddings.weight.requires_grad doesn't have any effect. Any ideas why this is happening? These should generate identical values.
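
A hedged diagnostic to try (not a confirmed fix), continuing the snippet above: set requires_grad explicitly and compare the embedding weights before and after training.

import torch

# Assumes `lstm`, `end_model`, `train`, and `dev` from the snippet above.
lstm.embeddings.weight.requires_grad_(False)
print(lstm.embeddings.weight.requires_grad)   # should print False

# Snapshot the weights, train, then check whether they actually changed.
before = lstm.embeddings.weight.detach().clone()
end_model.train_model(train, dev_data=dev)
print(torch.equal(before, lstm.embeddings.weight.detach()))  # True if truly frozen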

RuntimeError: PyTorch was compiled without NumPy support

This error happened when I was running the following code in the fonduer-tutorials hardware tutorial.

from metal.label_model import LabelModel

gen_model = LabelModel(k=2)
%time gen_model.train_model(L_train[0], n_epochs=500, print_every=100)

The whole error message was

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<timed eval> in <module>

/opt/conda/lib/python3.7/site-packages/metal/label_model/label_model.py in train_model(self, L_train, Y_dev, deps, class_balance, log_writer, **kwargs)
    417         l2 = train_config.get("l2", 0)
    418 
--> 419         self._set_class_balance(class_balance, Y_dev)
    420         self._set_constants(L_train)
    421         self._set_dependencies(deps)

/opt/conda/lib/python3.7/site-packages/metal/label_model/label_model.py in _set_class_balance(self, class_balance, Y_dev)
    359         else:
    360             self.p = (1 / self.k) * np.ones(self.k)
--> 361         self.P = torch.diag(torch.from_numpy(self.p)).float()
    362 
    363     def _set_constants(self, L):

RuntimeError: PyTorch was compiled without NumPy support

I think it's hitting a known PyTorch issue that only happens on Python 3.7.

My environment is as follows:

Python 3.7.1 | packaged by conda-forge | (default, Nov 13 2018, 18:15:35) 
[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.__version__
'1.15.4'
>>> import torch
>>> torch.__version__
'0.4.1'
>>> import metal
>>> metal.__version__
'0.3.2'
>>> import fonduer
>>> fonduer.__version__
'0.4.0'
>>> 
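
For illustration only (an assumption, not a confirmed fix): the failing line builds a uniform class-balance vector in NumPy and crosses the torch.from_numpy bridge; an equivalent computation can stay in pure torch.

import torch

# Build the uniform class-balance vector directly in torch, avoiding the
# NumPy bridge that triggers the error above.
k = 2
p = torch.full((k,), 1.0 / k)   # instead of (1 / k) * np.ones(k) -> torch.from_numpy
P = torch.diag(p).float()
print(P)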

Does TaskHierarchy support multiple tasks with no edges?

Hi,

I got an error when trying to create a TaskHierarchy with multiple cardinalities and no edges. The scenario is that I have multiple tasks but no dependencies between them.

Here is an example code:

task_graph = TaskHierarchy(cardinalities=[2, 3, 3])

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-7-f731bcc2a935> in <module>()
      1 from metal.multitask import TaskHierarchy
      2 # task_graph = TaskHierarchy(cardinalities=[2,3,3], edges=[(0,1), (0,2)])
----> 3 task_graph = TaskHierarchy(cardinalities=[2, 3, 3])

~/.venv1/lib/python3.6/site-packages/metal/multitask/task_graph.py in __init__(self, *args, **kwargs)
     68 
     69     def __init__(self, *args, **kwargs):
---> 70         super().__init__(*args, **kwargs)
     71 
     72         # Check that G is a tree

~/.venv1/lib/python3.6/site-packages/metal/multitask/task_graph.py in __init__(self, cardinalities, edges)
     43 
     44         # Save the cardinality of the feasible set
---> 45         self.k = len(list(self.feasible_set()))
     46 
     47     def __eq__(self, other):

~/.venv1/lib/python3.6/site-packages/metal/multitask/task_graph.py in feasible_set(self)
     94                 while pt > 0:
     95                     ct = pt
---> 96                     pt = list(self.G.predecessors(pt))[0]
     97                     y[pt] = list(self.G.successors(pt)).index(ct) + 1
     98                 yield y

IndexError: list index out of range

Support customized weight for individual LF

This is Jason from Ant Financial. We are using Snorkel/MeTaL in a couple of our core risk management products. It's a rule-engine based system: to detect black samples, a bunch of rules and models are used as labeling functions. Some of them are just blacklists of user IDs or IP addresses. These are known as "strong rules" and don't follow the majority voting policy; we need to force the target label to 1 if any strong rule (LF) fires, no matter how the other LFs behave. In my talk with Alex this morning, he mentioned there is an easy way to set this up via the L2 regularization factor.

I would appreciate it if this could be done, so that we can customize the weight settings via the interface. I'd be happy to submit a PR myself if given some guidance. Thanks!
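
Separately from the L2-regularization route mentioned above, a minimal post-processing sketch of the "strong rule" behavior might look like this. The helper is hypothetical (not a MeTaL feature) and assumes a dense label matrix with 0 = abstain and classes 1..k, with probability columns ordered as classes 1..k.

import numpy as np

# Force the probabilistic label to the forced class whenever any "strong rule"
# LF (given by column index) outputs that class.
def apply_strong_rules(Y_probs, L, strong_lfs, forced_class=1, k=2):
    Y_probs = Y_probs.copy()
    fired = (L[:, strong_lfs] == forced_class).any(axis=1)
    Y_probs[fired] = np.eye(k)[forced_class - 1]   # one-hot row for the forced class
    return Y_probs

# Example with made-up values:
L = np.array([[1, 0, 2], [0, 2, 2], [2, 1, 0]])
Y_probs = np.full((3, 2), 0.5)
print(apply_strong_rules(Y_probs, L, strong_lfs=[0]))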

Instability of label model

Hi,

Sorry for bothering you again.

I found some weird behavior in the label model: when I run it twice with the same data, I get totally different results. Here is a snapshot of my code.

(screenshot of the code snippet)

Data: basics_tutorial.pkl
Code: snorkel-metal 0.1.1

Please let me know if I am doing something wrong.

Thanks,
Sen
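
One sanity check worth trying (an assumption on my part: it presumes the installed version accepts the seed and train_model arguments shown elsewhere in this document): fix the seed on both runs and compare the outputs.

import numpy as np
from metal.label_model import LabelModel

# L_train: an [n, m] scipy.sparse label matrix, as in the Sample Usage above.
label_model_a = LabelModel(k=2, seed=123)
label_model_a.train_model(L_train, n_epochs=200)

label_model_b = LabelModel(k=2, seed=123)
label_model_b.train_model(L_train, n_epochs=200)

# With identical data and seeds, the two runs should agree.
print(np.allclose(label_model_a.predict_proba(L_train),
                  label_model_b.predict_proba(L_train)))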

Allow Different Label Matrix Sizes for Multi-task Learning

It would be nice if the multi-task version supported differently-sized label matrices. When I was using a list of three sparse matrices, L[0]: (121, 34), L[1]: (121, 25), L[2]: (121, 25), the MTLabelModel resulted in a broadcast error (shown below). Is this a bug? If not, is it easy to add this feature?

(screenshot of the broadcast error)

model.train() vs. eval()

Confirm that model.train() and model.eval() are called in such a way that batchnorm/dropout behave correctly
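
A minimal sketch of the pattern to verify (plain PyTorch, not MeTaL-specific code):

import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10), nn.BatchNorm1d(10), nn.Dropout(0.5))

model.train()    # training: dropout active, batchnorm uses batch statistics
# ... training step(s) ...

model.eval()     # evaluation: dropout off, batchnorm uses running statistics
# ... scoring / validation ...

model.train()    # back to training mode before resuming training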

Should I expect the single-task label model to give the same answer as the multi-task label model on single-task input?

Hi,

I tried using the same data with both the single-task label model and the multi-task label model, with the same settings. Should I expect the same or similar results from these two setups?

Here is an example:

import pickle

with open("data/multitask_tutorial.pkl", 'rb') as f:
    Xs, Ys, Ls, Ds = pickle.load(f)

from metal.multitask import TaskHierarchy
task_graph = TaskHierarchy(cardinalities=[2], edges=[])

from metal.multitask import MTLabelModel
mt_label_model = MTLabelModel(task_graph=task_graph)

mt_label_model.train([Ls[0][0]], n_epochs=200, print_every=20, seed=123)
prob1 = mt_label_model.predict_proba([Ls[0][0]])

from metal.label_model import LabelModel
st_label_model = LabelModel(k=2, seed=123)
st_label_model.train(Ls[0][0], n_epochs=200, print_every=20)

prob2 = st_label_model.predict_proba(Ls[0][0])

prob1 is not the same as prob2.

Parallelize ModelTuner.search() with dask

At least for RandomTuner, there's no reason evaluations of different configs need to run in series. We should be able to keep the single-threaded and multi-threaded versions very similar by wrapping the init/train/eval block in a method that can be parallelized using dask.delayed or dask futures.
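
A hedged sketch of that structure; train_and_score is a hypothetical placeholder for the init/train/eval block, and the configs are made up.

from dask import delayed, compute

def train_and_score(config):
    # placeholder: build, train, and evaluate one model for this config
    score = 0.0
    return config, score

configs = [{"lr": 1e-3}, {"lr": 1e-2}, {"lr": 1e-1}]
tasks = [delayed(train_and_score)(c) for c in configs]
results = compute(*tasks)                 # evaluates configs in parallel
best = max(results, key=lambda r: r[1])   # pick the best (config, score) pair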

tensorboardX is not in the requirements-dev.txt file

When importing "from metal.contrib.logging.tensorboard import TensorBoardWriter" I got an error saying that tensorboardX can't be found.

I checked, and it is not in the requirements.txt file, so I installed it via 'pip install tensorboardx'. I was wondering if there was a reason not to include it in requirements.txt or if it's a bug.

Customizable loss reduction breaks assumptions

PR #75 introduced a customizable reduction for the loss function, but there are two places later in the code where we assume that it is the total loss (reduction='sum') when we go to calculate the average loss per example:

  1. running_loss = epoch_loss / (len(data[0]) * (batch_num + 1))
  2. train_loss = epoch_loss / len(train_loader.dataset)

I think we either need to fix it to sum and allow post-processing if users want to combine it in some other way, or add special handling so that we're not dividing by the train set size twice. I'd probably be in favor of always calculating the sum and then dividing later. What was the motivation for allowing it to be changed?
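
For illustration, a small plain-PyTorch example of the mismatch, assuming a standard cross-entropy loss and made-up data:

import torch
import torch.nn as nn

logits = torch.randn(8, 2)
targets = torch.randint(0, 2, (8,))

sum_loss = nn.CrossEntropyLoss(reduction="sum")(logits, targets)
mean_loss = nn.CrossEntropyLoss(reduction="mean")(logits, targets)

print(sum_loss / len(targets))   # per-example loss: what the running average expects
print(mean_loss)                 # already per-example; dividing again would be wrong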

On MTL models: Would be great to pick best model based on a metric / task pair

In MTL you often have a target and an auxiliary task, and you actually care about how you're doing on your target task. So, as the model is training, it would be great if you could save the model that did best on the target task on a particular metric.

Currently it uses the mean across all tasks (for example, it will keep the model with the highest mean F1 across all tasks). What I'd like is to be able to pick the model with the best F1 on a particular task.

Infer Usefulness of Label Functions

When using the original snorkel package, I was able to plot the generative model's weights and see which label functions received higher weights compared to the others. Ideally, it would be great to know how one could get this kind of information using MeTaL's version of label estimation. If this isn't incorporated, would it be feasible to implement?

Enable saving best model to disk during training

Would be great to be able to save the best models to disk during training in case the training fails or the Jupyter kernel dies at some point. That way you could load the model afterward and continue training without starting from scratch.
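
A minimal sketch of the idea in plain PyTorch (generic checkpointing, not an existing MeTaL API; the helper name and path are hypothetical):

import torch

best_score = float("-inf")

def maybe_checkpoint(model, dev_score, path="best_model.pt"):
    # Save the model's state_dict whenever the dev score improves.
    global best_score
    if dev_score > best_score:
        best_score = dev_score
        torch.save(model.state_dict(), path)

# Later, e.g. after a kernel restart, reload the best weights and continue:
# model.load_state_dict(torch.load("best_model.pt"))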

Move methods up Classifier class hierarchy

Some todos here:

  • Move the train method up to Classifier (it should be able to subsume the LabelModel train method?)
  • Move the various predict methods up from LabelModelBase to Classifier

Track design doc changes in git

Move from a google doc to a trackable format. Storing it in the repo will make it more likely to be updated when changes are made to the code.

Non-determinism in Scoring / Tuning

Currently dev checkpointing is non-deterministic, with the best model producing a different score when reloaded and evaluated at the end of a training loop. The model is also evaluated twice at the end of the loop.

....
[E:46]	Train Loss: 0.008	Dev score: 0.598
Saving model at iteration 47 with best score 0.612
[E:47]	Train Loss: 0.008	Dev score: 0.612
Saving model at iteration 48 with best score 0.622
[E:48]	Train Loss: 0.008	Dev score: 0.622
[E:49]	Train Loss: 0.007	Dev score: 0.593
Restoring best model from iteration 48 with score 0.622
Finished Training
Accuracy: 0.829
Precision: 0.470
Recall: 0.887
F1: 0.615
        y=1    y=2   
 l=1    55     62    
 l=2     7     279   
Tuner - using validation_metric=f1
F1: 0.612
        y=1    y=2   
 l=1    56     65    
 l=2     6     276   

In general, scoring is non-deterministic. Can scoring be set up to be deterministic, conditioned on the user-provided seed, with something like:

# save rng state, seed for deterministic evaluation
rng_gpu = torch.cuda.random.get_rng_state_all()
rng_cpu = torch.random.get_rng_state()
torch.cuda.random.manual_seed_all(self.seed)
torch.random.manual_seed(self.seed)

# evaluate model here

# restore rng state to all devices
torch.cuda.set_rng_state_all(rng_gpu)
torch.random.set_rng_state(rng_cpu)

LabelModel cannot load because no c_tree attribute

When loading a saved LabelModel, I get the following error message:

'LabelModel' object has no attribute 'c_tree' when trying to restore the LabelModel from a saved pickle. This can be worked around by running train before loading, but the results are not consistent with the saved model. Possibly related to #37.

DataLoader/batch support in `Classifier`

Currently, Classifier requires X_dev and Y_dev for validation, rather than a DataLoader, which makes it difficult to use the code in situations where the end model has large memory requirements. Methods such as score, predict_proba, and _train should be updated to support DataLoader input, and simple cases where we just have (X, Y) data should be provided as tuples, which can be converted into simple DataLoader objects under the hood.

Requires several potentially breaking changes, so for next minor version.
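
A minimal sketch of the "convert under the hood" idea, using plain PyTorch utilities (the helper name is hypothetical):

import torch
from torch.utils.data import DataLoader, TensorDataset

def as_dataloader(data, batch_size=32):
    # Accept either a DataLoader or a raw (X, Y) tuple and normalize to a DataLoader.
    if isinstance(data, DataLoader):
        return data
    X, Y = data
    return DataLoader(TensorDataset(torch.as_tensor(X), torch.as_tensor(Y)),
                      batch_size=batch_size)

# Example with made-up data:
X_dev, Y_dev = torch.randn(100, 10), torch.randint(0, 2, (100,))
dev_loader = as_dataloader((X_dev, Y_dev))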

Make sure all documentation and functions consistent re: 0 or 1-indexing

In the LabelModel, we reserve 0 as a special value for abstains; then in subsequent EndModel calls, we use 1-indexing to be internally consistent. However, we should do a cleaning pass to make sure that (a) this is consistent in the documentation / checks / etc., and (b) it doesn't trip up people just using the EndModel with non-probabilistic labels, who might expect 0-indexing.
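
For illustration, the two conventions and the conversion between them (a trivial sketch with made-up labels, assuming integer class labels):

import numpy as np

# MeTaL convention: 0 = abstain, classes are 1..k.
# Many external tools expect 0-indexed classes 0..k-1.
Y_zero_indexed = np.array([0, 1, 1, 0])      # external convention
Y_metal = Y_zero_indexed + 1                 # MeTaL convention: classes 1..k
Y_back = Y_metal - 1                         # convert back, e.g. for sklearn metrics
print(Y_metal, Y_back)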
