
dnc's Introduction

Differentiable Neural Computer (DNC)

This package provides an implementation of the Differentiable Neural Computer, as published in Nature.

Any publication that discloses findings arising from using this source code must cite “Hybrid computing using a neural network with dynamic external memory", Nature 538, 471–476 (October 2016) doi:10.1038/nature20101.

Introduction

The Differentiable Neural Computer is a recurrent neural network. At each timestep, it has state consisting of the current memory contents (and auxiliary information such as memory usage), and maps input at time t to output at time t. It is implemented as a collection of RNNCore modules, which allow plugging together the different modules to experiment with variations on the architecture.

  • The access module is where the main DNC logic happens, as this is where memory is written to and read from. At every timestep, the input to the access module is a vector passed from the controller, and its output is the contents read from memory. It uses two further RNNCores: TemporalLinkage, which tracks the order of memory writes, and Freeness, which tracks which memory locations have been written to and not yet subsequently "freed". These are both defined in addressing.py.

  • The controller module "controls" memory access. Typically, it is just a feedforward or (possibly deep) LSTM network, whose inputs are the inputs to the overall recurrent network at that time, concatenated with the read memory output from the access module from the previous timestep.

  • The dnc module simply wraps the access module and the controller module, and forms the basic RNNCore unit of the overall architecture. This is defined in dnc.py; a construction sketch follows this list.
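A minimal sketch of how these pieces fit together, loosely following train.py; the hyperparameter values below are made up for illustration and the exact configuration keys should be checked against dnc.py and train.py:

import tensorflow as tf
import dnc  # this repository's dnc.py

# Hypothetical configuration; see train.py for the flags actually used.
access_config = {
    "memory_size": 16,   # number of memory slots
    "word_size": 16,     # width of each slot
    "num_reads": 4,      # read heads
    "num_writes": 1,     # write heads
}
controller_config = {"hidden_size": 64}  # LSTM controller size

# The DNC itself is an RNNCore wrapping the controller and the access module.
dnc_core = dnc.DNC(access_config, controller_config, output_size=10, clip_value=20)
initial_state = dnc_core.initial_state(batch_size=16)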

DNC architecture

Train

The DNC requires an installation of TensorFlow and Sonnet. An example training script is provided for the algorithmic task of repeatedly copying a given input string. It can be run from the command line:

$ ipython train.py

You can specify training options, including parameters to the model and optimizer, via flags:

$ python train.py --memory_size=64 --num_bits=8 --max_length=3

# Or with ipython:
$ ipython train.py -- --memory_size=64 --num_bits=8 --max_length=3

Periodically saving, or 'checkpointing', the model is disabled by default. To enable, use the checkpoint_interval flag. E.g. --checkpoint_interval=10000 will ensure a checkpoint is created every 10,000 steps. The model will be checkpointed to /tmp/tf/dnc/ by default. From there training can be resumed. To specify an alternate checkpoint directory, use the checkpoint_dir flag. Note: ensure that /tmp/tf/dnc/ is deleted before training is resumed with different model parameters, to avoid shape inconsistency errors.

More generally, the DNC class found within dnc.py can be used as a standard TensorFlow rnn core and unrolled with TensorFlow rnn ops, such as tf.nn.dynamic_rnn on any sequential task.
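As a minimal sketch of that usage (assuming dnc_core was constructed as in the sketch above, and a TensorFlow/Sonnet pairing in which the core is accepted by tf.nn.dynamic_rnn):

import tensorflow as tf

# Dummy shapes for illustration only.
time_steps, batch_size, num_features = 20, 16, 8
input_sequence = tf.placeholder(tf.float32, [time_steps, batch_size, num_features])

initial_state = dnc_core.initial_state(batch_size)
output_sequence, final_state = tf.nn.dynamic_rnn(
    cell=dnc_core,            # the DNC acts as a single recurrent cell
    inputs=input_sequence,    # [time, batch, features], hence time_major=True
    time_major=True,
    initial_state=initial_state)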

Disclaimer: This is not an official Google product

dnc's People

Contributors

carusyte, cclauss, dm-jrae, jramapuram, shuolongbj


dnc's Issues

can't run train.py

I ran ipython3 train.py and got an error (screenshot from 2018-11-04 21-33-29 attached).

I installed Sonnet with this command:
$ pip install dm-sonnet

Can you post a modified version in which you can choose any task, not only the input reconstruction?

Hello, I want to train the network on a different task from the ones in the original article. To do that, I would need to be able to feed input and output of my choice during training, but in the code here it seems that the input and output are embedded within the code, and I could not find a way to decouple the task. Ideally the code would read the data from a file (and allow feeding data in different encodings of our choice). Is this simple to add to the files? Thanks!!

What values are saved between episodes?

Hello everyone!

I read an interesting issue about the memory module, #19, and as I understand it, the controller's (LSTM cell) memory is cleared after each episode (i.e. after a sequence is processed). My question is: which learned values are not cleared and are therefore carried over between episodes?

range of key_strengths

Hey,
For the key strength vectors (the betas), in the NTM they are > 0 ("a positive key strength" from section 3.3.1), while in the DNC they are >= 1 (the output of a oneplus). Is this an intentional change?
Also, in the code in addressing.py, the read/write strengths are obtained by passing the controller output through a Linear and then a softplus, which is just the log part of the oneplus; I could not locate where the additional 1 is added.
Thanks in advance!
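For reference, this is only an observation about the math, not a claim about where the repository applies it: oneplus(x) = 1 + softplus(x) = 1 + log(1 + exp(x)), so a oneplus output is bounded below by 1, while a bare softplus is only bounded below by 0.

import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))   # > 0 for all x

def oneplus(x):
    return 1.0 + softplus(x)     # >= 1 for all x

print(softplus(-10.0))  # ~4.5e-05
print(oneplus(-10.0))   # ~1.000045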

The version of sonnet

What version of Sonnet is used, please?
There is no sonnet.AbstractModule anymore.

class CosineWeights(snt.AbstractModule):
AttributeError: 'module' object has no attribute 'AbstractModule'

Confusion about the memory module

I have a very basic question and I will be grateful if someone could help.

From the code it seems that in every execution of _, loss = sess.run([train_step, train_loss]), the content of the memory, i.e., the initial state of DNC, will be re-initialized by zero_state. This means each instance of the RepeatCopy task is processed using an empty memory. However, from my understanding of the paper and memory networks in general, I believe the content of the memory should be incrementally updated given every instance of the task, and this collected knowledge should be used at test time.

If my understanding of the code is correct, it means the only thing that is incrementally trained is the controller, and it only learns how to use an empty memory to solve a task. Then at test time, it uses the memory as a temporary place to write intermediate processing results before taking action. On the other hand, if my understanding of the paper is correct, the controller and the memory content will both be incrementally trained. At test time, the controller matches the given task with what it already has in memory and decides what to do.

Could someone please clarify which one is correct? If the memory content is actually accumulated incrementally during training, please point me to the part of the code where this is done, because the training loop gives no indication that it preserves the memory state between training steps.

I have seen in other instances of memory networks that the memory content is actually a TF Variable, and is preserved and incrementally updated during training, for example, refer to https://github.com/tensorflow/models/tree/master/learning_to_remember_rare_events.

Thank you very much for your help, in advance.
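For anyone experimenting with this, here is a hedged sketch of one way to carry the DNC state across session runs by feeding the previous final state back in. This is not how train.py works (train.py restarts from the zero state every episode); dnc_core, input_sequence, batch_size, train_step and num_episodes are assumed to exist, and the per-episode input feeds are omitted.

import tensorflow as tf
import sonnet as snt
from tensorflow.contrib.framework import nest

initial_state = dnc_core.initial_state(batch_size)
state_feed = snt.nest.map(
    lambda t: tf.placeholder(t.dtype, shape=t.shape), initial_state)
output_sequence, final_state = tf.nn.dynamic_rnn(
    cell=dnc_core, inputs=input_sequence, time_major=True,
    initial_state=state_feed)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    state_np = sess.run(initial_state)   # start from the zero state once
    for _ in range(num_episodes):
        # Pair each placeholder leaf with the corresponding numpy leaf.
        feed = dict(zip(nest.flatten(state_feed), nest.flatten(state_np)))
        state_np, _ = sess.run([final_state, train_step], feed_dict=feed)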

I've seen tf.nn.dynamic_rnn in the TensorFlow documentation. How do I prepare some example data (CSV, Python time series info) and full code?


I feel there is not enough information for me to use this.

More generally, the DNC class found within dnc.py can be used as a standard TensorFlow rnn core and unrolled with TensorFlow rnn ops, such as tf.nn.dynamic_rnn on any sequential task.
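There is no CSV pipeline in this repository, but as a rough sketch of preparing a time series for a time_major unroll (the file name and column layout are made up for illustration):

import numpy as np

data = np.genfromtxt("my_series.csv", delimiter=",", skip_header=1)  # [T, features]
window = 50
# Slide a window over the series; each window becomes one batch element.
windows = [data[i:i + window] for i in range(len(data) - window)]
batch = np.stack(windows, axis=1).astype(np.float32)  # [time, batch, features]
# `batch` can then be fed as the input sequence to a time_major tf.nn.dynamic_rnn.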

Visualization Tools

Do you plan to make public the tools that were used to create the concise visualisations of the DNC memory in the Nature article?

If already done so, could you point me to them? It would save me (and probably a whole bunch of others) a lot of time from having to write the code for the visualisations from scratch.

If you can't make the tools public, could you point me to any frameworks/libraries that might save me some effort?

Thank you.
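Not the tools used for the paper figures, but a quick matplotlib sketch that covers the basic case of plotting a memory matrix or a read/write weighting fetched from a session run:

import matplotlib.pyplot as plt

def plot_memory(memory, title="memory"):
    # memory: a 2-D numpy array, e.g. [memory_size, word_size] for one batch item.
    plt.imshow(memory, aspect="auto", cmap="viridis")
    plt.title(title)
    plt.xlabel("word dimension")
    plt.ylabel("memory slot")
    plt.colorbar()
    plt.show()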

How to improve CPU usage

I ran the dnc example and found that on a 32-core machine, the CPU usage is not high. Is there any way to make the CPU work at full capacity and improve efficiency? Thank you.
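One thing worth trying (a sketch using the standard TF1 session options, not a repository feature) is sizing the thread pools explicitly; note that a small per-step graph may simply not contain enough parallel work to saturate 32 cores:

import tensorflow as tf

config = tf.ConfigProto(
    intra_op_parallelism_threads=32,  # threads used inside a single op
    inter_op_parallelism_threads=32)  # ops that may run concurrently
sess = tf.Session(config=config)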

Tensorboard graph visualization

Hi guys,

I am really excited about the DNC and am trying to understand its implementation. In particular, I want to visualize the DNC's computational graph in TensorBoard. Unfortunately, the default graph does not show all nodes, e.g. I cannot find the nodes from the addressing module ("Freeness" and "CosineWeights"). I am not a TensorFlow or Sonnet expert, but as I understand the documentation, all nodes defined in the _build method should be added to the default graph, right?

So how can I get the full visualized graph?
Thanks a lot!
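A minimal sketch for dumping the default graph to TensorBoard with the TF1 API; note that modules built inside dynamic_rnn end up nested under the rnn/while scope, so submodules such as CosineWeights may be buried rather than missing:

import tensorflow as tf

with tf.Session() as sess:
    writer = tf.summary.FileWriter("/tmp/dnc_logs", sess.graph)
    writer.close()
# Then run: tensorboard --logdir=/tmp/dnc_logs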

Word embedding usage

Hi there,
is it possible to use the DNC to train a classifier based on text, encoding every word with an embedding, of course?

thanks

ipython train.py -- --memory_size=64 --num_bits=8 --max_length=3 NotFoundError

ipython train.py -- --memory_size=64 --num_bits=8 --max_length=3


NotFoundError Traceback (most recent call last)
/usr/lib/python2.7/dist-packages/IPython/utils/py3compat.pyc in execfile(fname, *where)
202 else:
203 filename = fname
--> 204 __builtin__.execfile(filename, *where)

/usr/cep/np/dnc/train.py in ()
20
21 import tensorflow as tf
---> 22 import sonnet as snt
23
24 import dnc

/usr/local/lib/python2.7/dist-packages/sonnet/__init__.py in ()
100 from sonnet.python.ops import nest
101 from sonnet.python.ops.initializers import restore_initializer
--> 102 from sonnet.python.ops.resampler import resampler
103 from sonnet.python.ops.resampler import resampler_is_available

/usr/local/lib/python2.7/dist-packages/sonnet/python/ops/resampler.py in ()
31 # Link the shared object.
32 _resampler_so = tf.load_op_library(
---> 33 tf.resource_loader.get_path_to_datafile("_resampler.so"))
34
35

/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/load_library.pyc in load_op_library(library_filename)
62 # pylint: disable=protected-access
63 raise errors_impl._make_specific_exception(
---> 64 None, None, error_msg, error_code)
65 # pylint: enable=protected-access
66 finally:

NotFoundError: /usr/local/lib/python2.7/dist-packages/sonnet/python/ops/_resampler.so: undefined symbol: ZN10tensorflow15shape_inference16InferenceContext15WithRankAtLeastENS0_11ShapeHandleExPS2

Failed to initialize a dnc state with batch size=1

I am trying to construct my own state as the initial state for the DNC:

init_cstate = tf.zeros_like(self.embeddings, dtype=tf.float32)

self.initial_state = dnc.DNCState(
    controller_state=(init_cstate, self.embeddings),
    access_state=dnc_core._access.initial_state(32, tf.float32),
    access_output=tf.zeros(
        [32] + dnc_core._access.output_size.as_list(), tf.float32))

The code snippet above creates an initial state with batch size 32.
Then I create placeholders with the same structure as the initial state:

self.state_feed = snt.nest.map(
    lambda t: tf.placeholder(t.dtype, shape=t.shape, name="state_feed"),
    self.initial_state)
outputs, self.next_state = dnc_core(
    inputs=self.input,
    prev_state=self.state_feed)

However, I have to do inference with only one sample, so I change the batch size from 32 to 1:

init_cstate = tf.zeros_like(self.embeddings, dtype=tf.float32)

self.initial_state = dnc.DNCState(
    controller_state=(init_cstate, self.embeddings),
    access_state=dnc_core._access.initial_state(1, tf.float32),
    access_output=tf.zeros(
        [1] + dnc_core._access.output_size.as_list(), tf.float32))

When I run the code, it fails with:

line 289, in build_model
    prev_state=self.state_feed)
  File "/home/floodsung/tensorflow_sonet/local/lib/python2.7/site-packages/sonnet/python/modules/base.py", line 231, in __call__
    outputs, this_name_scope = self._template(*args, **kwargs)
  File "/home/floodsung/tensorflow_sonet/local/lib/python2.7/site-packages/tensorflow/python/ops/template.py", line 268, in __call__
    return self._call_func(args, kwargs, check_for_new_variables=False)
  File "/home/floodsung/tensorflow_sonet/local/lib/python2.7/site-packages/tensorflow/python/ops/template.py", line 217, in _call_func
    result = self._func(*args, **kwargs)
  File "/home/floodsung/tensorflow_sonet/local/lib/python2.7/site-packages/sonnet/python/modules/base.py", line 167, in _build_wrapper
    output = self._build(*args, **kwargs)
  File "/home/floodsung/Documents/AC-for-image-captioning/sRNN_dnc/dnc.py", line 116, in _build
    prev_access_state)
  File "/home/floodsung/tensorflow_sonet/local/lib/python2.7/site-packages/sonnet/python/modules/base.py", line 231, in __call__
    outputs, this_name_scope = self._template(*args, **kwargs)
  File "/home/floodsung/tensorflow_sonet/local/lib/python2.7/site-packages/tensorflow/python/ops/template.py", line 268, in __call__
    return self._call_func(args, kwargs, check_for_new_variables=False)
  File "/home/floodsung/tensorflow_sonet/local/lib/python2.7/site-packages/tensorflow/python/ops/template.py", line 217, in _call_func
    result = self._func(*args, **kwargs)
  File "/home/floodsung/tensorflow_sonet/local/lib/python2.7/site-packages/sonnet/python/modules/base.py", line 167, in _build_wrapper
    output = self._build(*args, **kwargs)
  File "/home/floodsung/Documents/AC-for-image-captioning/sRNN_dnc/access.py", line 136, in _build
    write_weights = self._write_weights(inputs, prev_state.memory, usage)
  File "/home/floodsung/Documents/AC-for-image-captioning/sRNN_dnc/access.py", line 249, in _write_weights
    num_writes=self._num_writes)
  File "/home/floodsung/Documents/AC-for-image-captioning/sRNN_dnc/addressing.py", line 335, in write_allocation_weights
    allocation_weights.append(self._allocation(usage))
  File "/home/floodsung/Documents/AC-for-image-captioning/sRNN_dnc/addressing.py", line 401, in _allocation
    inverse_indices = util.batch_invert_permutation(indices)
  File "/home/floodsung/Documents/AC-for-image-captioning/sRNN_dnc/util.py", line 28, in batch_invert_permutation
    unpacked = tf.unstack(permutations)
  File "/home/floodsung/tensorflow_sonet/local/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.py", line 960, in unstack
    raise ValueError("Cannot infer num from shape %s" % value_shape)
ValueError: Cannot infer num from shape (?, 30)

But if I change the batch size to any other integer > 1, it works.
How can I solve this problem? Thank you very much!

dynamic batch size does not work

In util.py, tf.pack/tf.unpack are used to process the batch data; would it be better to use tf.scan to support a dynamic batch size?
For example:

def batch_invert_permutation(permutations):
  """Returns batched `tf.invert_permutation` for every row in `permutations`."""
  with tf.name_scope('batch_invert_permutation', values=[permutations]):
    return tf.scan(lambda a, x: tf.invert_permutation(x), permutations)

def batch_gather(values, indices):
  """Returns batched `tf.gather` for every row in the input."""
  with tf.name_scope('batch_gather', values=[values, indices]):
    return tf.scan(lambda a, x: tf.gather(x[0], x[1]), (values, indices))

Orange python widget inputs and output?

I see people want to try this, and dnc.py could be used. If it were added to Orange as a Python widget (in a Python Script widget), then people could try out datasets more easily.

How to make a placeholder with a variable batch size for the dnc state? thanks

I would like to know how I can create placeholders like dnc_core.initial_state(batch_size=n) where the batch size n changes during training.

dnc_core = dnc.DNC(...)
initial_core_state = dnc_core.initial_state(batch_size=1)
core_state_placeholders = snt.nest.map(
    lambda t: tf.placeholder(t.dtype, shape=t.shape),
    initial_core_state)

When I create the graph, the shape of initial_state seems to be fixed to that batch_size.
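One possible workaround (a sketch, with the caveat that some ops inside the DNC, e.g. the tf.unstack in util.py, may still require a statically known batch size, as another issue above reports) is to leave the batch dimension of the placeholders as None:

import tensorflow as tf
import sonnet as snt

initial_core_state = dnc_core.initial_state(batch_size=1)
core_state_placeholders = snt.nest.map(
    lambda t: tf.placeholder(t.dtype, shape=[None] + t.shape.as_list()[1:]),
    initial_core_state)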

You can avoid the top_k and allow usage to be differentiable

Context

I have replicated the DNC as described in the Nature paper and implemented in this repository, with several modifications to addressing. In my case, I am using Keras rather than Sonnet. Originally, implementing the DNC as presented in the Nature paper led to a fairly unstable model for some problems (initialization could have a huge impact on learnability). This led me to reformulate each of the dynamic addressing mechanisms.

Enhancement

Here I request/point out an enhancement to the usage allocation weighting.
I have chosen to implement it without sorting, which means you can remove this line.
This also allows the user to specify an inferable batch_size. Pardon me if I'm mistaken, but I think it is impossible to have inferable dimensions and use tf.unstack without resorting to TensorArrays or dynamic partitioning.

This was done as follows (you can infer what the variable names are, and ignore the self references, as this is pasted from some classes):

before write weights

free_weighting = K.prod(K.tile(1. - free_gates, (1,1,self.num_slots)) * w_read_tm1,1)
u_t = u_tm1 * free_weighting

get write weights

def _allocation_weights(u_t): 
        # (batch_size, num_slots)
        relative_usage = K.softmax(u_t)
        relative_non_usage = 1. - relative_usage
        relative_non_usage -= K.min(relative_non_usage)
        allocation_weights = K.softmax(relative_non_usage)
        return allocation_weights

# batch_size, W, num_slots
content_address_write = self._content_address(M_tm1, xi['write_keys'], xi['write_strengths'])
# batch_size, W, 1
write_gates = xi['write_gates']
# batch_size, W, 1
allocation_gates = xi['allocation_gates']
# batch_size, W, num_slots
tiled_write_gates = K.tile(xi['write_gates'] ,(1,1,self.num_slots))
# batch_size, W, num_slots
tiled_allocation_gates = K.tile(xi['allocation_gates'],(1,1,self.num_slots))

#write allocation weights
w_write = []
for w in range(self.num_write_heads):
    allocation_weights = self._allocation_weights(u_t)
    w_write.append(tiled_write_gates[:,w,:] * (tiled_allocation_gates[:,w,:] * allocation_weights + (1. - tiled_allocation_gates[:,w,:]) * content_address_write[:,w,:]))
    # update usage 
    u_t += w_write
w_write = K.stack(w_write,axis=1)

Intuition of change

The usage is better represented as an unbounded positive number of access times per slot rather than a number between 0 and 1. The free gates can reset these numbers as in the original implementation.
The allocation weighting then is a simple (albeit approximate) distribution over the relative non-usage.
It deviates from the way a computer works (in that memory locations cannot be both used and unused on a computer), but it results in a smoother response to changes in memory access patterns. This approximation is counter-balanced by the fact that the write weights remain differentiable, and the sharpness of the allocation weights, as a result, remains quite nominal.

Result

In the problems I applied it to, I saw noticeably faster training, and the allocation gates were slightly more often close to 1 (a preference for usage-based addressing).

Note: the faster learning might also be related to the temporal linkage modifications I also implemented.

System

Keras 2.0.8, tensorflow 1.3.0

GPU usage

Hi there,
is it possible to detect whether the DNC is using the GPU when running train.py?
thanks
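Two quick TF1 checks, independent of this repository:

import tensorflow as tf
from tensorflow.python.client import device_lib

# Log on which device each op is placed as the graph runs.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

# List the devices TensorFlow can see; GPU entries appear here if one is usable.
print(device_lib.list_local_devices())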

How to print the value of read/write head and memory?

Hi guys,
I am working with this DNC and it is indeed very interesting work. However, I could not print the tensors and variables in the computation graph. It seems that most of the computation components are built through the "_build" function called by "dynamic_rnn". When I try to fetch the components built in "_build", I get an error saying "Operation %r has been marked as not fetchable", so I am not able to see what the DNC is doing at each step. I have tried several methods, including "tf.Print" and "tf.identity". Is there a way to get around this issue? Thanks a lot!
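One workaround (a sketch, assuming the DNCState/AccessState field names are as in dnc.py and access.py) is to avoid fetching tensors created inside the dynamic_rnn while-loop and instead fetch the state that dynamic_rnn returns, which already contains the memory and weightings and comes back as numpy arrays:

output_sequence, final_state = tf.nn.dynamic_rnn(
    cell=dnc_core, inputs=input_sequence, time_major=True,
    initial_state=dnc_core.initial_state(batch_size))

out_np, state_np = sess.run([output_sequence, final_state])
print(state_np.access_state.memory.shape)        # final memory contents
print(state_np.access_state.read_weights.shape)  # final read weightings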

Error running the repeat-copy example: 'TypeError: cell must be an instance of RNNCell'

Hello,
I am trying to test the DNC on the repeat-copy task, just using the train.py code, but I get the following error when executing tf.nn.dynamic_rnn:

      output_sequence, _ = tf.nn.dynamic_rnn(cell=dnc_core,inputs=input_sequence,
                                 time_major=True,initial_state=initial_state)

'TypeError: cell must be an instance of RNNCell'

In the Sonnet documentation, I saw that in the new version "snt.RNNCore no longer inherits from tf.RNNCell". But then it continues: "All recurrent modules will continue to be supported by tf.dynamic_rnn, tf.static_rnn, etc.". So why does the code not actually work?

For the installation of Sonnet I followed roman3017's comment on google-deepmind/sonnet#5 (comment), but I see now that it installed TensorFlow 1.1, while the new version of Sonnet requires TensorFlow 1.2. Could that be the problem? How should I proceed to upgrade TensorFlow as a submodule of Sonnet? Note that I have TensorFlow with CPU support only.

Why is the memory a 3D tensor?

Dear author:
We noticed that in the paper the memory is a 2D matrix ([cell number, memory size]), but in practice the memory is a 3D tensor ([batch size, cell number, memory size]). Maybe I am missing some important detail. Could you help me resolve my confusion? Thank you.

Replacing sorted allocation with weighted softmax

Hi, I was wondering whether the non-differentiable sorting part of the allocation mechanism is really necessary.
At least for RepeatCopy, it looks like replacing it with a weighted softmax with strength 2 gives more stable and slightly better results, and makes the network fully differentiable. Learning the strength parameter may even improve it further, as in the content-based addressing case.

I just commented out the line:
write_weights = tf.stop_gradient(write_weights)
and replaced:
return batch_gather(sorted_allocation, inverse_indices)
with
return weighted_softmax(nonusage, 2.0, tf.nn.softplus)

thanks
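For context, my reading of what a weighted softmax does here (a sketch, not a copy of addressing.py): scale the activations by a transformed strength and then take a softmax, so a larger strength gives a sharper but still differentiable weighting.

import tensorflow as tf

def weighted_softmax_sketch(activations, strength, strength_op=tf.nn.softplus):
    # activations: [batch, memory_size]; strength: scalar or [batch, 1]
    sharp = activations * strength_op(strength)
    return tf.nn.softmax(sharp)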

Stacked DNC

Is it possible to create a stacked DNC (a DNC on top of a DNC)? I'm struggling to understand the graph flow in the DNC implementation.

write to memory with multiple write heads

Hey,
When there are multiple write heads and we write to memory with these variables:

write_weights: [batch_size x num_write_heads x memory_size]
erase_vectors: [batch_size x num_write_heads x word_size]
write_vectors: [batch_size x num_write_heads x word_size]
memory: [batch_size x memory_size x word_size]

the erase operation is:

erase_gate = 
write_weights {reshape to: [batch_size x num_write_heads x memory_size x 1]} 
x 
erase_vectors {reshape to: [batch_size x num_write_heads x 1 x word_size]}
= shape: [batch_size x num_write_heads x memory_size x word_size]

then the 2nd dim is reduced by taking a product over this dimension.
While for the write operation following this erase, the 2nd dimension is reduced directly by the matmul:

add_matrix = 
write_weights {reshape to: [batch_size x memory_size x num_write_heads]} 
x 
write_vectors {reshape to: [batch_size x num_write_heads x word_size]}
= shape: [batch_size x memory_size x word_size]

Is this correct? I couldn't work this part out from the paper and want to make sure I get it right. Thanks in advance!
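For what it's worth, here is a sketch of how I read the update (an illustration, not a copy of access.py): the erase terms from all write heads are multiplied into the memory first, and then the additive writes from all heads are summed in by a single matmul, which is why the head dimension disappears there.

import tensorflow as tf

def erase_and_write_sketch(memory, write_weights, erase_vectors, write_vectors):
    # memory:              [batch, memory_size, word_size]
    # write_weights:       [batch, num_writes, memory_size]
    # erase/write_vectors: [batch, num_writes, word_size]
    w = tf.expand_dims(write_weights, 3)        # [batch, W, memory_size, 1]
    e = tf.expand_dims(erase_vectors, 2)        # [batch, W, 1, word_size]
    keep = tf.reduce_prod(1 - w * e, axis=1)    # product over heads -> [batch, memory_size, word_size]
    add = tf.matmul(write_weights, write_vectors, adjoint_a=True)  # sums over heads
    return memory * keep + add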

Output vector inconsistent with the Nature paper?

Hey,
In lines 118~121 of dnc.py, the final output is obtained by passing the concatenated controller_output and access_output through a Linear:

    output = tf.concat([controller_output, batch_flatten(access_output)], 1)
    output = snt.Linear(
        output_size=self._output_size.as_list()[0],
        name='output_linear')(output)

But according to the Nature paper, in the part above "Interface parameters" in the left column of page 477, it is stated: "Finally, the output vector y_t is defined by adding v_t to a vector obtained by passing the concatenation of the current read vectors through the RW x Y weight matrix W_r."

So should the output from the controller and the output from the read heads first be concatenated and then passed through a Linear layer, or should the output from the read heads first be passed through a Linear layer and then be concatenated with the output from the controller? Or am I misreading something here?
Thanks in advance!
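As far as I can tell (this is my reading, not an official clarification), the two forms are equivalent up to how the parameters are split: a single Linear applied to the concatenation factors into one block acting on the controller output and one acting on the read vectors,

    output = W [controller_output ; read_vectors]
           = W_v controller_output + W_r read_vectors

and if v_t is itself taken to be a linear function of the controller output (v_t = W_v controller_output), this matches the paper's "add v_t to W_r times the read vectors". So concatenating first and applying one Linear expresses the same family of functions.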

Graph datasets

Hi,

Are the Family Tree and London Underground datasets mentioned in the paper openly available? If so, where can I find them?

Thanks,
Thiviyan

Simple comparisons with other rnn core models

Hi Jack @dm-jrae,

on the front page it says,

More generally, the DNC class found within dnc.py can be used as a standard TensorFlow rnn core and unrolled with TensorFlow rnn ops, such as tf.nn.dynamic_rnn on any sequential task.

It would be fun if you could demonstrate this on something very simple, like the basic TF RNN tutorial. I guess the quickest way to start using the DNC is by dropping it into simple, familiar applications.

Thanks, Aj

Mini-SHRDLU

I am very interested in the Mini-SHRDLU experiments, which inspired our current work. I would like to ask the authors whether the code for this part of the experiments is open source.
