
ponyge2's People

Contributors

aadeshnpn, alcides, cgarcia-uco, codykenb, dvpfagan, jmmcd, lbannenberg, luizvbo, mfjoneill, mikefenton, nbro, silver-golden, squeakus, t-h-e, ugultopu


ponyge2's Issues

TODO list

Is there a good place for a todo list, or should Issues be used? (And great work improving the old code!)

Fitness function-specific arguments

I have a fitness function which is really a family of functions, i.e. it is parameterised, so the natural way to write it is along these lines:

class MyFitness:
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c

    def __call__(self, x):
        # use self.a, self.b and self.c to control fitness evaluation
        ...

I think we discussed how to pass parameters like a on the command line, without reaching a conclusion. One option would be a new parameter named, say, FITNESS_PARAMETERS, passed as a string and then eval-ed (or similar) inside the fitness function. That would lead to this type of code:

class MyFitness:
    def __init__(self):
        self.a, self.b, self.c = eval(params['FITNESS_PARAMETERS'])
        # etc

Then one would write on the command line:

$ ponyge.py --FITNESS_PARAMETERS "(3, 4, 5)"

I've implemented this and it works for me... does this make sense or does anyone have anything better?
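For what it's worth, a safer variant of the same idea is ast.literal_eval, which only parses Python literals and so avoids the usual eval risks. A minimal sketch, assuming the params dict from algorithm.parameters:

import ast

from algorithm.parameters import params

class MyFitness:
    def __init__(self):
        # "(3, 4, 5)" is parsed as a literal tuple; arbitrary code is rejected
        self.a, self.b, self.c = ast.literal_eval(params['FITNESS_PARAMETERS'])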

Need to re-think subtree crossover

Subtree crossover accounts for about 50% of the run-time when used (60% if int_flip mutation is also used). The vast majority of this stems from having to continually re-initialise individuals throughout the process. We do this to avoid calling deepcopy on the recursive tree class. When two parents are selected from the parent population for crossover, any operations performed on them would otherwise affect the original individuals in the parent population, so we need fresh copies of them. We previously made these copies with deepcopy, but it was incredibly slow. Now we simply re-initialise new individuals from the genomes of the originals. This is substantially faster than deepcopy, but still very slow. If we can find a different way to make these copies, things should get much faster.

Crash when creating results directory

    # Generate save folders
    if not path.isdir(params['FILE_PATH']):
        mkdir(params['FILE_PATH'])

It's a race condition: if we're running multiple runs at the same time and the directory doesn't already exist, several of the runs will reach this if and decide they have to make the directory, get suspended, and then one of them will succeed in creating it while the others crash, because mkdir raises an error if its target directory already exists.

Fix: use a different mkdir call, I suppose?
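One race-free fix using only the standard library (os.makedirs with exist_ok=True, available since Python 3.2):

from os import makedirs

# Generate save folders; a no-op if another run created the directory first
makedirs(params['FILE_PATH'], exist_ok=True)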

Initialisation of duplicate individuals

Currently, ramped half-and-half (RHH) initialisation can produce duplicate phenotypes. We start the ramping depth from a level where sufficient unique trees are known to exist, but we don't check how many unique "full" trees exist at each depth. There could be sufficient unique trees at depth n but insufficient unique full trees at depth n, so some duplicate trees/phenotypes may be observed during RHH initialisation. One possible mitigation is sketched below.
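The mitigation sketch (generate_ind_tree is the helper in operators/initialisation.py; pop_size, depth and method are stand-in names) rejects duplicate phenotypes as they are generated -- though note this can spin forever at a depth that admits too few unique full trees, which is the root problem described above:

seen, population = set(), []
while len(population) < pop_size:
    ind = generate_ind_tree(depth, method)
    if ind.phenotype not in seen:
        # only admit phenotypes we haven't seen before
        seen.add(ind.phenotype)
        population.append(ind)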

Random number seed

It looks like it's not being set correctly, or is being overwritten, or there are two RNG instances, or something.

Need to add python program grammar examples

We need to add a basic example of how to write Python code in a grammar. This will also require a fitness function that execs the solution rather than using eval (statements vs. expressions). With the new regular-expression grammar parser we can now use nice-looking multi-line code. Bear in mind that the old indentation/newline style of def func():{} has now changed to def func():{::}. We only need to create a simple example, nothing too fancy, but one that showcases the capabilities of the method.

sequence_match problem dependencies are outside Anaconda

Our current policy is that all dependencies come straight from Anaconda. Options for a fix:

  • make a new version without the interesting fitness function (keep the existing version in a branch or in the mainline)

  • change policy to allow pip-installable dependencies -- when we have setup.py.

Parameter BNF_GRAMMAR should be called GRAMMAR_FILE

For consistency -- every other parameter has the same name: --lower-case-option translates to params['UPPER_CASE_OPTION'], but --bnf-grammar translates to params['GRAMMAR_FILE'].

I would just do it, but it will make all your old parameters.txt files unusable, i.e. before re-running an old experiment you'll need to s/BNF_GRAMMAR/GRAMMAR_FILE/ in the parameters.txt file.

We could keep the old name for compatibility, but document the new one -- any preferences?

`os.path` instead of manual filename/path manipulation

In stats.py and a few other places there is manual manipulation of paths, using things like path + "../stats" -- this should be done in a cross-platform way using os.path.join and the other functions in os.path.
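For example, the snippet above becomes (base_path is a stand-in name):

import os.path

# portable equivalent of base_path + "../stats"
stats_dir = os.path.join(base_path, "..", "stats")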

Need param to stop run if condition holds True?

We might add a param to stop a run if a condition holds true, e.g. if a string-match fitness reaches 0 (i.e. a perfect match), rather than continuing pointlessly for the required total number of generations.
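A minimal sketch of how this could look in the search loop (the STOP_ON_PERFECT parameter and step function are hypothetical; fitness minimisation is assumed):

for generation in range(1, params['GENERATIONS'] + 1):
    individuals = step(individuals)
    best = min(individuals, key=lambda ind: ind.fitness)
    if params.get('STOP_ON_PERFECT') and best.fitness == 0:
        break  # perfect solution found; stop early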

Unit productions in tree operations

We need to look into the use of unit productions. The general consensus is that we should allow mutation and crossover at unit productions in the tree, but not in linear operations.

sklearn-style interface for regression/classification

We should be able to provide a wrapper to allow this sklearn-style usage of our regression/classification:

import ponyge
reg = ponyge.ScikitLearnGERegressor(optimise_constants=True, generations=100)
reg.fit(X, y)
print(reg._formula) # prints out the individual
yhat_test = reg.predict(X_test)
print(y_test, yhat_test)

Phenotype is sometimes a list

The following parameters result in a best-of-run individual whose phenotype is a list, not a string.

'POPULATION_SIZE': 3,
'GENERATIONS': 1,
'RANDOM_SEED': 3

Other parameters as in the repo.

Multicore Evaluation

Just want to flag that we will be adding multicore fitness evaluation; I'll update this thread as progress is made.

Dave

Steady State replacement can cause crossover to get stuck in an infinite loop

Since steady state replacement as it is now implemented selects pairs of individuals, performs crossover on them, then mutation, and then replaces them back into the original population, it is possible for crossover to get stuck in an infinite loop. For example, use the following config:

--parameters regression.txt --debug --replacement steady_state --population 50 --random_seed 245221

The problem is that crossover randomly selects pairs of parents from the parent population (i.e. from the population produced by selection). In the case of steady state replacement, this parent population is of size 2. Previously (e.g. with generational replacement), if there was an issue with crossover, a new set of parents would be randomly selected from the parent population, and things would work out because the parent population contained more than 2 parents. With steady state replacement, however, there are no alternatives to the chosen parents, as there are only 2 parents in the entire parent population. If two parents are selected who cannot create viable offspring, the while loop in operators.crossover.crossover will simply continue indefinitely.
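A defensive sketch with an attempt cap (sample comes from the random module; try_crossover and the cap of 100 are hypothetical stand-ins, not the actual operators.crossover.crossover internals):

from random import sample

attempts, offspring = 0, None
while offspring is None:
    inds = sample(parent_pop, 2)
    offspring = try_crossover(inds[0], inds[1])  # returns None on failure
    attempts += 1
    if attempts >= 100:
        # a size-2 parent pop may contain no viable pairing at all,
        # so give up and pass the parents through unchanged
        offspring = inds
        break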

Experiment manager

An experiment manager exists for the Python implementation, but isn't included in the repo yet. It will be added in a future version.

"Correct" implementation of RHH/Sensible initialisation

In our current implementation of RHH/Sensible initialisation, the "Grow" component builds random trees up to a maximum specified depth. Importantly, we do not force any branch of the tree to reach that depth, meaning:

"Grow produces trees of irregular shapes and sizes because of random selection from the functions and terminals set."

This is directly in line with Ryan & Azad's description of Sensible/RHH initialisation (the quote above is taken from that paper). However, there has apparently been some doubt over whether or not the "Grow" portion of RHH should force at least one branch of the tree to the specified maximum depth. We should clarify this issue and decide exactly what to do.

It is important to note that RHH initialisation is inherently flawed in GE due to the use of grammars, as noted in our paper on Position Independent Initialisation. Due to the randomness of the "Grow" component of RHH, GE will tend to produce very small trees if it isn't forced to do otherwise, regardless of the depth limit. This results in a high degree of duplication in the initialised population, meaning RHH in GE doesn't adequately cover the initial search space. This is also tied in very strongly with grammar design; more complex grammars mitigate this effect.

Crash when all individuals are invalid

In stats.py, we have lines like:

        available = [i for i in individuals if not i.invalid]

        # Genome Stats
        genome_lengths = [len(i.genome) for i in available]
        stats['max_genome_length'] = max(genome_lengths)

(Also min and ave.) These will throw an error if all individuals in a population are invalid. I've seen this a few times in the hill-climbing search loop, but not in the default search loop. Maybe there's some protection against everything becoming invalid when running the default search loop?
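A minimal guard, assuming a None placeholder is acceptable when the whole population is invalid:

available = [i for i in individuals if not i.invalid]

# Genome stats: guard against an all-invalid population
genome_lengths = [len(i.genome) for i in available]
stats['max_genome_length'] = max(genome_lengths) if genome_lengths else None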

"Best Ever" individual can't be stored in stats dictionary

The stats dictionary should be reserved for statistics only. At present the best_ever individual is stored in the stats dictionary. As the stats file is a .tsv (tab-separated), if the phenotype of the best_ever individual contains a tab, it will corrupt the entire stats file. Case in point: run the new pymax problem and look at the stats file.

I propose we create a new tracker for best_ever and store it (obviously) in utilities.stats.trackers. This way we avoid such issues.
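The tracker itself would be tiny, e.g. (a sketch of the proposal, not existing code):

# utilities/stats/trackers.py
best_ever = None  # best Individual seen so far in the run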

State of the art regression/classification

For near-state of the art performance on regression and classification, I think we need:

  • Gradient descent on constants, i.e. the grammar creates constants named C0, C1, etc. rather than literal constants, and then we use an optimiser such as gradient descent, CMA-ES, or BFGS to tune the constants per individual (sketched below)

  • Multi-objective optimisation using NSGA2 or similar

  • Auto-simplification using Sympy or similar.

I have code for all of these; I will try to add it eventually.
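For the first bullet, a rough sketch of per-individual constant tuning. It assumes the phenotype is an expression string over x and numpy with placeholders C0, C1, ...; the helper name is made up:

import numpy as np
from scipy.optimize import minimize

def optimise_constants(phenotype, X, y, n_consts):
    """Tune C0..C{n-1} in an expression string by minimising RMSE."""
    def loss(consts):
        expr = phenotype
        # substitute in reverse so e.g. C10 is not clobbered by C1
        for i in reversed(range(n_consts)):
            expr = expr.replace("C%d" % i, repr(float(consts[i])))
        yhat = eval(expr, {"x": X, "np": np})
        return np.sqrt(np.mean((y - yhat) ** 2))
    result = minimize(loss, x0=np.ones(n_consts), method="L-BFGS-B")
    return result.x, result.fun  # tuned constants and final RMSE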

Issues with the current setup and experiment managers

There are issues with the current setup if an experiment manager in the same style is to be used. When PonyGE is run directly from the command line there are no issues, as all of the imports happen afresh each time the program is run. However, if a Python experiment manager is used whereby PonyGE's mane() function is called repeatedly, then all of the imports happen only once. While this saves time over multiple runs, it also means that the stats and params dictionaries (and the trackers lists) are initialised only once, rather than cleanly re-initialised on each call to mane(). Either these structures need to be initialised by an explicit function call (they are currently initialised at import time), or a Python experiment manager can't be used without further workarounds...
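A sketch of the explicit-initialisation option (DEFAULT_STATS and DEFAULT_PARAMS are hypothetical pristine copies, not the current PonyGE2 API):

def reset_run_state():
    """Re-initialise module-level state between calls to mane()."""
    stats.clear()
    stats.update(DEFAULT_STATS)
    params.clear()
    params.update(DEFAULT_PARAMS)
    trackers.best_ever = None  # plus clearing any other trackers lists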

Incorporate a scripts folder

Should we add a folder at the same level as the datasets and src folders, called scripts, which could house the stats parser and possibly an experiment manager?

Mapper speed-up?

When profiling ponyge.py with the default settings in PyCharm (run as python ponyge.py), it seemed like the mapper was taking a relatively long time.

Changing some lists to collections.deque and using a generator expression seemed to improve the speed (sorry, I changed two things at once, so I'm not sure which gave the biggest speed-up, but it seems the generator expression contributes the most). Below is the suggested patch.

From 5263e12 Mon Sep 17 00:00:00 2001
From: Erik Hemberg [email protected]
Date: Wed, 27 Jul 2016 09:44:49 -0400
Subject: [PATCH] Mapper speed-up. Change list to collections.deque and used
generator expression


 src/algorithm/mapper.py | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/src/algorithm/mapper.py b/src/algorithm/mapper.py
index c08e186..f652f29 100644
--- a/src/algorithm/mapper.py
+++ b/src/algorithm/mapper.py
@@ -1,6 +1,7 @@
 from algorithm.parameters import params
 from random import choice, randrange
 from representation.tree import Tree
+from collections import deque


 def genome_map(_input, max_wraps=0):
@@ -10,8 +11,9 @@ def genome_map(_input, max_wraps=0):
     from utilities.helper_methods import python_filter
     # Depth, max_depth, and nodes start from 1 to account for starting root
     used_input, current_depth, current_max_depth, nodes = 0, 1, 1, 1
-    wraps, output, production_choices = -1, [], []
-    unexpanded_symbols = [(params['BNF_GRAMMAR'].start_rule, 1)]
+    wraps, output, production_choices = -1, deque(), []
+    unexpanded_symbols = deque([(params['BNF_GRAMMAR'].start_rule,
+                                            1)])

     while (wraps < max_wraps) and \
             (len(unexpanded_symbols) > 0) and \
@@ -23,7 +25,7 @@ def genome_map(_input, max_wraps=0):
             wraps += 1

         # Expand a production
-        current_item = unexpanded_symbols.pop(0)
+        current_item = unexpanded_symbols.popleft()
         current_symbol, current_depth = current_item[0], current_item[1]
         if current_max_depth < current_depth:
             current_max_depth = current_depth
@@ -38,9 +40,8 @@ def genome_map(_input, max_wraps=0):
             # Use an input
             used_input += 1
             # Derivation order is left to right (depth-first)
-            children = []
-            for prod in production_choices[current_production]:
-                children.append([prod, current_depth + 1])
+            children = deque([prod, current_depth + 1] for prod in
+                             production_choices[current_production])

             NT_kids = [child for child in children if child[0][1] == "NT"]
             if any(NT_kids):

2.6.2

Crash in RHH initialisation

(I've just pushed some code for a sequence-match problem, please pull it to see the problem below.)

The following runs ok:

python ponyge.py --bnf_grammar sequence_match.pybnf --fitness_function sequence_match --target "[0, 5, 0, 5, 0, 5]" --extra_fitness_parameters "alpha=0.5, beta=0.5, gamma=0.5" --generations 10 --population 10 --initialisation operators.initialisation.random_init

But the following crashes:

python ponyge.py --bnf_grammar sequence_match.pybnf --fitness_function sequence_match --target "[0, 5, 0, 5, 0, 5]" --extra_fitness_parameters "alpha=0.5, beta=0.5, gamma=0.5" --generations 10 --population 10

The difference is just in the init operator (random_init versus rhh).

Traceback (most recent call last):
  File "ponyge.py", line 33, in <module>
    mane()
  File "ponyge.py", line 22, in mane
    individuals = params['SEARCH_LOOP']()
  File "/Users/jmmcd/Documents/vc/PonyGE2/src/algorithm/search_loop.py", line 16, in search_loop
    individuals = params['INITIALISATION'](params['POPULATION_SIZE'])
  File "/Users/jmmcd/Documents/vc/PonyGE2/src/operators/initialisation.py", line 79, in rhh
    ind = generate_ind_tree(depth, "full")
  File "/Users/jmmcd/Documents/vc/PonyGE2/src/operators/initialisation.py", line 123, in generate_ind_tree
    0, 0, 0, max_depth - 1)
  File "/Users/jmmcd/Documents/vc/PonyGE2/src/representation/tree.py", line 296, in generate_tree
    nodes, depth, max_depth, depth_limit - 1)
  File "/Users/jmmcd/Documents/vc/PonyGE2/src/representation/tree.py", line 264, in generate_tree
    chosen_prod = choice(available)
  File "/Users/jmmcd/anaconda3/lib/python3.5/random.py", line 255, in choice
    raise IndexError('Cannot choose from an empty sequence')
IndexError: Cannot choose from an empty sequence

Production separators on multiple lines

I have an edit of read_bnf for production separators on multiple lines. If you want to accept it, you can see if it passes the tests you write (works on my computer...). I find it a lot easier to read grammars where at least each production can be on a separate line.

# Assumes `import re` at module level.
def read_bnf_file(self, file_name):
    """Read a grammar file in BNF format.

    :param file_name: BNF grammar file
    :type file_name: str

    """
    assert file_name.endswith('.bnf')

    rule_separator = "::="
    # Don't allow space in NTs, and use lookbehind to match "<"
    # and ">" only if not preceded by backslash. Group the whole
    # thing with capturing parentheses so that split() will return
    # all NTs and Ts. TODO does this handle quoted NT symbols?
    non_terminal_pattern = r"((?<!\\)<\S+?(?<!\\)>)"
    # Use lookbehind again to match "|" only if not preceded by
    # backslash. Don't group, so split() will return only the
    # productions, not the separators.
    production_separator = r"(?<!\\)\|"
    lhs = None

    # Read the grammar file
    for line in open(file_name, 'r'):
        if not line.startswith("#") and \
                        line.strip() != "":
            # Split rules.
            has_rule_separator = line.find(rule_separator) > 0
            if has_rule_separator:
                lhs, productions = line.split(rule_separator, 1)  # 1 split
                lhs = lhs.strip()
                if not re.search(non_terminal_pattern, lhs):
                    raise ValueError("lhs is not a NT:", lhs)

                self.non_terminals.add(lhs)
                if self.start_rule is None:
                    self.start_rule = (lhs, self.NT)

            else:
                productions = line.strip()

            # Find terminals and non-terminals
            tmp_productions = []
            # TODO make handling of multilines nicer, some edge cases are
            # passing, e.g. line ends with | and starts with |
            production_split = re.split(production_separator, productions)
            for production in production_split:
                production = production.strip().replace(r"\|", "|")
                tmp_production = []
                for symbol in re.split(non_terminal_pattern,
                                       production):
                    symbol = symbol.replace(r"\<", "<").replace(r"\>",
                                                                ">")
                    if len(symbol) == 0:
                        continue
                    elif re.match(non_terminal_pattern, symbol):
                        tmp_production.append((symbol, self.NT))
                    else:
                        self.terminals.add(symbol)
                        tmp_production.append((symbol, self.T))

                if tmp_production:
                    tmp_productions.append(tmp_production)

            assert lhs is not None, "No lhs"

            # Create a rule
            if lhs not in self.rules:
                self.rules[lhs] = tmp_productions
            else:
                if len(production_split) > 1 or\
                        last_character == "|":
                    self.rules[lhs].extend(tmp_productions)

            # Remember the last character of the line
            last_character = productions[-1]

        print(line,)

    for k, v in self.rules.items():
        print("%s; %s" % (k, v))

We shouldn't quit

We use `quit` in a few places, but we probably shouldn't, for two reasons:

total_inds != gens * popsize

With --generations 10 --population 50 we get total_inds : 550. I guess this could be on purpose: initialisation counts as gen 0, and then we go up to generation 10, meaning we actually have 11 * 50 = 550. But we should have total_inds = gens * popsize. Maybe initialisation should be gen 1?

Need to fill out the documentation

We need to fill out the documentation as much as possible. For those of us using PyCharm, we should follow the standard function documentation template it provides: simply type """ (three double quotes) and press enter, and it will auto-complete the doc template for you, complete with params for inputs and returns. We should try to fill these out as completely as possible for EVERY function in the whole program.

The README file also needs to be as verbose as possible. I've structured and arranged it to make it as readable as possible, but we still need to give a full run-through of the entire program and the various relationships between the different functions in it. People completely new to the whole area should be able to understand the entire program by going through the README and the documentation for the various functions.

Setting default fitness function to `supervised_learning` in the params dict and not using parameters file gives error.

If the fitness function in the params dictionary is set to supervised_learning and no parameters file is used, the default error_metric is None, which throws an error when it is called. I have changed the default fitness function in the params dict to regression, which skirts this issue, as the regression and classification fitness functions both set the correct error metrics in their respective inits if the error metric is still None. This issue is simply a result of poor configuration choices, and is something that needs to be highlighted in the README.

Parsing of bnf file removes quotes

  • Is this expected behaviour, and is it documented somewhere that I failed to read? :-) (I was surprised that the parsing removed the quotes.)

  • Below is my test (test_grammar.py):

import unittest
import representation.grammar as bnf_grammar

class TestGrammar(unittest.TestCase):

    def test_read_bnf_file(self):
        """Non hypothesis test, since we want to check known files...
        TODO write
        """
        grammar = bnf_grammar.Grammar('../tests/test.bnf')
        expected_terminals = [u'{"Ns": [3, ', u']}', u', ', u'2', u'0', u'1']
        print(expected_terminals)
        for expected_terminal in expected_terminals:
            self.assertIn(expected_terminal, grammar.terminals,
                          "%s not in %s" % (expected_terminal, grammar.terminals))
        
        
if __name__ == '__main__':
    suite = unittest.TestLoader().loadTestsFromTestCase(TestGrammar)
    unittest.TextTestRunner(verbosity=2).run(suite)

Grammar is test.bnf

<src> ::= {"Ns": [3, <es>]}
<es> ::= <ds>, <ds>, <ds> | <ds>, <ds>, <ds>, <ds>
<ds> ::= 1<d> | 2<d>
<d> ::= 0 | 1 

Output is:

erikhemberg@30-103-115:~/Documents/code/PonyGE2/src$ pytest ../tests/test_grammar.py -s
============================================================================================== test session starts ==============================================================================================
platform darwin -- Python 2.7.8, pytest-3.0.3, py-1.4.31, pluggy-0.4.0
rootdir: /Users/erikhemberg/Documents/code/PonyGE2, inifile:
plugins: hypothesis-3.1.0, cov-2.4.0
collected 1 items

../tests/test_grammar.py ('Warning: Grammar contains unit production for production rule', '<src>')
       Unit productions consume GE codons.
[u'{"Ns": [3, ', u']}', u', ', u'2', u'0', u'1']
F

=================================================================================================== FAILURES ====================================================================================================
________________________________________________________________________________________ TestGrammar.test_read_bnf_file _________________________________________________________________________________________

self = <test_grammar.TestGrammar testMethod=test_read_bnf_file>

    def test_read_bnf_file(self):
        grammar = bnf_grammar.Grammar('../tests/test.bnf')
        expected_terminals = [u'{"Ns": [3, ', u']}', u', ', u'2', u'0', u'1']
        print(expected_terminals)
        for expected_terminal in expected_terminals:
            self.assertIn(expected_terminal, grammar.terminals,
>                         "%s not in %s" % (expected_terminal, grammar.terminals))
E           AssertionError: {"Ns": [3,  not in [u'{Ns: [3, ', u']}', u', ', u', ', u', ', u', ', u', ', u'1', u'2', u'0', u'1']

../tests/test_grammar.py:15: AssertionError
=========================================================================================== 1 failed in 0.07 seconds ============================================================================================

Allow for saving and reloading of state

Periodically save the state of a run, e.g. the entire population, the current random state, population stats, and the cache. This would allow runs to be reloaded from a specific point if they crash (which helps with really long runs). Easiest to save all info and data structures as a .mat file, which can then be reloaded quickly and easily.
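A minimal sketch of the save/load pair using pickle, just to show which pieces need capturing (the .mat route would carry the same fields; function names are made up):

import pickle
import random

def save_state(path, population, stats, cache):
    """Snapshot a run so it can be resumed after a crash."""
    state = {'population': population,
             'random_state': random.getstate(),  # full RNG state
             'stats': stats,
             'cache': cache}
    with open(path, 'wb') as f:
        pickle.dump(state, f)

def load_state(path):
    with open(path, 'rb') as f:
        state = pickle.load(f)
    random.setstate(state['random_state'])  # resume the RNG exactly
    return state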

Post-run evaluation

We already have the ability to do train/test fitness in our stats. It would be good to expand this, to report other measures of success, whether inside stats, or as a post-run thing.

For example in classification we might use hinge loss during evolution but f-score for post-run evaluation.
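For instance, with scikit-learn (hinge loss takes the classifier's raw decision values, while the F-score takes hard labels; variable names are stand-ins):

from sklearn.metrics import f1_score, hinge_loss

# during evolution: smooth surrogate objective
train_fitness = hinge_loss(y_train, decision_values)

# post-run evaluation: the measure we actually report
test_score = f1_score(y_test, y_pred)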

GENOME_INIT is redundant?

GENOME_INIT seems redundant given that we have INITIALISATION. GENOME_INIT is used only once, in mapper.py, when generating an individual from scratch. Maybe mapper.py should call the initialisation specified in params['INITIALISATION'] instead of using its own random-genome code.

Steady-state replacement is not really steady-state

Our steady-state replacement just takes an old and a new population, sorts them together, and keeps the best. That's not the same as a steady-state algorithm, where one selects 2 individuals at a time, carries out mutation and/or crossover, and inserts the offspring into the population, replacing some individuals. It would be better to rename this operator if we're keeping it.
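For reference, a textbook steady-state step looks roughly like this (all names hypothetical; fitness minimisation assumed):

parents = selection(population, 2)
for child in mutate(crossover(parents[0], parents[1])):
    evaluate(child)
    worst = max(population, key=lambda ind: ind.fitness)
    if child.fitness < worst.fitness:
        population.remove(worst)  # replace the current worst individual
        population.append(child)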

Python style

We should aim for approximately PEP 8 in general. In particular, class names should be CamelCase.

Subtree Crossover

We need to update crossover to take depth limits into account, and to decide how to handle individuals that exceed our depth limits.

setup.py, PyPI, etc

It would be nice to have the standard Python packaging stuff (setup.py, MANIFEST.in, etc) and thus be able to push to PyPI. I started work on this but it wasn't trivial so I'm postponing and making an issue for now.

Multicore evaluation doesn't work on Windows OS

For some reason, multicore evaluation on Windows machines reverts the params dictionary back to the default state (i.e. the default params dictionary specified in src/algorithm/parameters.py) within src/representation/individual.Individual.evaluate (likely because Windows spawns rather than forks worker processes, so each worker re-imports the module and gets fresh defaults). This means that when Individual.evaluate tries to execute params['FITNESS_FUNCTION'], it is trying to execute a string rather than a class object. This only happens on Windows machines when --multicore is used.

Additional Problems

Hi guys, the issue now is what problems we include with PonyGE. Would someone care to add some more problems, or suggest problems to use? I know a lot of the original problems need to be added.

Dave
