
sklearn-deap's Introduction

sklearn-deap

Use evolutionary algorithms instead of gridsearch in scikit-learn. This allows you to reduce the time required to find the best parameters for your estimator. Instead of trying out every possible combination of parameters, evolve only the combinations that give the best results.

Here is an IPython notebook comparing EvolutionaryAlgorithmSearchCV against GridSearchCV and RandomizedSearchCV.

It's implemented using the DEAP library: https://github.com/deap/deap

Install

To install the library use pip:

pip install sklearn-deap

or clone the repo and run the following in your shell:

python setup.py install

Usage examples

Example of usage:

import sklearn.datasets
import numpy as np
import random

data = sklearn.datasets.load_digits()
X = data["data"]
y = data["target"]

from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold

paramgrid = {"kernel": ["rbf"],
             "C"     : np.logspace(-9, 9, num=25, base=10),
             "gamma" : np.logspace(-9, 9, num=25, base=10)}

random.seed(1)

from evolutionary_search import EvolutionaryAlgorithmSearchCV
cv = EvolutionaryAlgorithmSearchCV(estimator=SVC(),
                                   params=paramgrid,
                                   scoring="accuracy",
                                   cv=StratifiedKFold(n_splits=4),
                                   verbose=1,
                                   population_size=50,
                                   gene_mutation_prob=0.10,
                                   gene_crossover_prob=0.5,
                                   tournament_size=3,
                                   generations_number=5,
                                   n_jobs=4)
cv.fit(X, y)

Output:

    Types [1, 2, 2] and maxint [0, 24, 24] detected
    --- Evolve in 625 possible combinations ---
    gen	nevals	avg     	min    	max
    0  	50    	0.202404	0.10128	0.962716
    1  	26    	0.383083	0.10128	0.962716
    2  	31    	0.575214	0.155259	0.962716
    3  	29    	0.758308	0.105732	0.976071
    4  	22    	0.938086	0.158041	0.976071
    5  	26    	0.934201	0.155259	0.976071
    Best individual is: {'kernel': 'rbf', 'C': 31622.776601683792, 'gamma': 0.001}
    with fitness: 0.976071229827

Example of maximizing an arbitrary function:

from evolutionary_search import maximize

def func(x, y, m=1., z=False):
    return m * (np.exp(-(x**2 + y**2)) + float(z))

param_grid = {'x': [-1., 0., 1.], 'y': [-1., 0., 1.], 'z': [True, False]}
args = {'m': 1.}
best_params, best_score, score_results, _, _ = maximize(func, param_grid, args, verbose=False)

Output:

best_params = {'x': 0.0, 'y': 0.0, 'z': True}
best_score  = 2.0
score_results = (({'x': 1.0, 'y': -1.0, 'z': True}, 1.1353352832366128),
 ({'x': -1.0, 'y': 1.0, 'z': True}, 1.3678794411714423),
 ({'x': 0.0, 'y': 1.0, 'z': True}, 1.3678794411714423),
 ({'x': -1.0, 'y': 0.0, 'z': True}, 1.3678794411714423),
 ({'x': 1.0, 'y': 1.0, 'z': True}, 1.1353352832366128),
 ({'x': 0.0, 'y': 0.0, 'z': False}, 2.0),
 ({'x': -1.0, 'y': -1.0, 'z': False}, 0.36787944117144233),
 ({'x': 1.0, 'y': 0.0, 'z': True}, 1.3678794411714423),
 ({'x': -1.0, 'y': -1.0, 'z': True}, 1.3678794411714423),
 ({'x': 0.0, 'y': -1.0, 'z': False}, 1.3678794411714423),
 ({'x': 1.0, 'y': -1.0, 'z': False}, 1.1353352832366128),
 ({'x': 0.0, 'y': 0.0, 'z': True}, 2.0),
 ({'x': 0.0, 'y': -1.0, 'z': True}, 2.0))

sklearn-deap's People

Contributors

chengs, climbsrocks, creatorzzy, darkdreamingdan, dongchirua, ericjster, ikamensh, kikones34, kootenpv, olologin, rsteca, ryanpeach, testpersonal, u1234x1234, yasserfarouk, zickyzazang


sklearn-deap's Issues

It gets stuck

After the initial fit, strange things occur within a few iterations:

1. Bad variants (with the lowest fitness) reappear after they were eliminated.
2. Neither the maximum fitness nor the average or minimum fitness improves.
In my case it reached its maximum fitness in a single generation, and the remaining 99 generations just burned my money, accelerating global warming and the heat death of the universe.

I'm not familiar with DEAP, but I wonder:
1. Should the probability of choosing instances for mating be a function of their fitnesses?
2. Isn't there a bug in the mating function? Shouldn't the variables be saved first and the new ones generated from the saved values? I mean, shouldn't the mating function do its assignments simultaneously rather than sequentially? As it stands, it first chooses a random number from the interval and then chooses another random number between that first number and the interval boundary.
3. Shouldn't they be sampled from something other than integers (for both mutation and mating)?
4. Shouldn't mated instances produce 2*<number of attrs> children plus themselves, children that are not only randomly sampled but are also the average of each attribute of the pair while preserving the rest?
5. Shouldn't the worst instances be eliminated from the population?
6. Shouldn't it take distributions, like RandomizedSearchCV, rather than fixed values, and sample from those distributions?

I'm not going to research this topic for now; I have a lot of work to do, and it's your library anyway.

AttributeError: can't set attribute

This is quite puzzling.
macOS 10.9.5, Python 3.5.2, with Anaconda and Spyder.

My code:

import sklearn
import numpy
from sklearn.model_selection import KFold
import sklearn.neural_network
import sklearn.svm
import sklearn.ensemble
import sklearn.datasets
import sklearn.metrics
import time
from evolutionary_search import EvolutionaryAlgorithmSearchCV

dataset=sklearn.datasets.load_iris()
X=dataset.data
Y=dataset.target
seed=1
test_size = int( 0.2 * len( Y ) )
numpy.random.seed( seed )
indices = numpy.random.permutation(len(X))
X_train = X[ indices[:-test_size]]
Y_train = Y[ indices[:-test_size]]
X_test = X[ indices[-test_size:]]
Y_test = Y[ indices[-test_size:]]

network=sklearn.svm.SVC()
lb=[-15,-5]
ub=[3,15]
space_num=[]
space_points=[0]*len(lb)

for i in range(len(lb)):
    space_num.append(int(ub[i]-lb[i]+1))    
    space_points[i]=numpy.logspace(lb[i],ub[i],space_num[i],base=2)
param_grid={'gamma':space_points[0],'C':space_points[1]}

score=sklearn.metrics.make_scorer(sklearn.metrics.accuracy_score)
clf=EvolutionaryAlgorithmSearchCV(network,params=param_grid,cv=3,scoring=score,verbose=2,n_jobs=1)
time_begin=time.time()
clf.fit(X_train,Y_train)
time_end=time.time()-time_begin

print(time_end,sklearn.metrics.accuracy_score(clf.predict(X_test),Y_test))

I get the following error:
File "~/anaconda/lib/python3.5/site-packages/evolutionary_search/cv.py", line 301, in init
self.best_score_ = None

AttributeError: can't set attribute

Any idea?

Add return_train_score flag

Hello,

It would be great to have the "return_train_score" flag available to measure the bias/variance of the fits.

Thank you,

Angel

NSGA and SPEA2

Hello, any plans to implement DEAP's NSGA and SPEA algorithms? I tried to branch your code and modify it, but to no avail.

Thanks!

First generation is ignored

A duplicate of #27 but hopefully easier to understand with a reproducible code block.

To reproduce, simply set generations_number=1 in the test.ipynb notebook.

When you do, you'll see the following (screenshot omitted):

There are a number of things to note here (presumably all coming from the same root error of the first generation being skipped):

  1. Despite generations_number=1, it actually runs for 2 generations.
  2. The .cv_results_ table only has 22 rows (the number of evaluations in the second generation); it should have at least 50 rows (the number of evaluations in the first generation).
  3. There's an inconsistent error being thrown: AttributeError: 'EvolutionaryAlgorithmSearchCV' object has no attribute 'best_score_'. I haven't yet figured out when this error is thrown and when it is not, but given that nobody else has complained about it, I'm hoping that it's also related to this issue of having just 1 generation. It does work sometimes, even with just 1 generation.
  4. The information for the 0th generation does appear to be saved and available to the statement that prints "best individual with fitness X", even when it's not available in .cv_results_ (screenshot omitted).
     It's also worth noting on this same point that I have sometimes seen .best_score_ print out results that only report the best score from the second generation (which would be 0.923205 in this one-off example I created just for point number 4).

The full .cv_results_ table is below (screenshot omitted); interestingly, it seems to have some awareness that 50 other rows should be present (the index column starts at 51), but there are only 22 rows present.

I'm hoping this is all an easily fixed off-by-one error somewhere.

Tests no longer pass with newest sklearn

Running the tests with scikit-learn v0.21.1 yields this result:

======================================================================
ERROR: test_cv (__main__.TestEvolutionarySearch)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test.py", line 48, in test_cv
    try_with_params()
  File "test.py", line 40, in try_with_params
    cv = readme()
  File "test.py", line 32, in readme
    generations_number=5)
  File "/build/source/evolutionary_search/cv.py", line 297, in __init__
    error_score=error_score)
TypeError: __init__() got an unexpected keyword argument 'fit_params'

Indeed, they did remove this argument, as shown here:
scikit-learn/scikit-learn#13519

What's wrong with my data?

With the following code:
paramgrid = {"n_jobs": -1,
"max_features":['auto','log2'],
"n_estimators":[10,100,500,1000],
"min_samples_split" : [2,5,10],
"max_leaf_nodes" : [1,5,10,20,50] }

#min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None

cv = EvolutionaryAlgorithmSearchCV(estimator=RandomForestClassifier(),
params=paramgrid,
scoring="accuracy",
cv=StratifiedKFold(y, n_folds=10),
verbose=True,
population_size=50,
gene_mutation_prob=0.10,
tournament_size=3,
generations_number=10
)

cv.fit(X, y)

I get the following error:

TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
     20 )
     21
---> 22 cv.fit(X, y)

/root/anaconda2/lib/python2.7/site-packages/evolutionary_search/__init__.pyc in fit(self, X, y)
    276         self.best_params_ = None
    277         for possible_params in self.possible_params:
--> 278             self._fit(X, y, possible_params)
    279         if self.refit:
    280             self.best_estimator_ = clone(self.estimator)

/root/anaconda2/lib/python2.7/site-packages/evolutionary_search/__init__.pyc in _fit(self, X, y, parameter_dict)
    301         toolbox = base.Toolbox()
    302
--> 303         name_values, gene_type, maxints = _get_param_types_maxint(parameter_dict)
    304         if self.gene_type is None:
    305             self.gene_type = gene_type

/root/anaconda2/lib/python2.7/site-packages/evolutionary_search/__init__.pyc in _get_param_types_maxint(params)
     33     types = []
     34     for _, possible_values in name_values:
---> 35         if isinstance(possible_values[0], float):
     36             types.append(param_types.Numerical)
     37         else:

TypeError: 'int' object has no attribute '__getitem__'

ZeroDivisionError: float division by zero

Has anybody got this error before?
/home/ubuntu/anaconda2/envs/python3/lib/python3.5/site-packages/evolutionary_search/__init__.py in _evalFunction(individual, name_values, X, y, scorer, cv, iid, fit_params, verbose, error_score)
     98     score += _score
     99     n_test += 1
--> 100     score /= float(n_test)
    101     score_cache[paramkey] = score
    102

ZeroDivisionError: float division by zero

Here is my EASC call:

gscv = EvolutionaryAlgorithmSearchCV(estimator=KnnDtw(),
                                     params=param_grid,
                                     scoring="f1_weighted",
                                     cv=StratifiedKFold(n_splits=10, shuffle=False, random_state=SEED).split(X["eda"], y),
                                     verbose=1,
                                     population_size=50,
                                     gene_mutation_prob=0.10,
                                     gene_crossover_prob=0.5,
                                     tournament_size=3,
                                     generations_number=5,
                                     n_jobs=-1)
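
One thing worth checking (a hedged guess, not a confirmed diagnosis): StratifiedKFold(...).split(...) returns a one-shot generator, so if it is consumed before or during the search, later evaluations see zero folds and n_test stays 0. A minimal sketch of that workaround, reusing the names from the call above (KnnDtw, param_grid, SEED, X, y are assumed to come from the surrounding script):

    from sklearn.model_selection import StratifiedKFold
    from evolutionary_search import EvolutionaryAlgorithmSearchCV

    # Materialize the folds once so they can be iterated any number of times.
    cv_folds = list(StratifiedKFold(n_splits=10, shuffle=False).split(X["eda"], y))

    gscv = EvolutionaryAlgorithmSearchCV(estimator=KnnDtw(), params=param_grid,
                                         scoring="f1_weighted", cv=cv_folds,
                                         verbose=1, population_size=50,
                                         gene_mutation_prob=0.10, gene_crossover_prob=0.5,
                                         tournament_size=3, generations_number=5, n_jobs=-1)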

best_score_ and best_params_ never get updated when error score is negative

Some scikit-learn scoring functions return negative error scores (e.g. mean absolute error) to stick to the "bigger is better" paradigm. However, with such error scores the following code block is never evaluated as True:

if current_best_score_ > self.best_score_:
    self.best_score_ = current_best_score_
    self.best_params_ = current_best_params_

and best_score_ and, more importantly, best_params_ stay -1 and None respectively.

Initializing self.best_score_ to -inf or to a much smaller value would partly solve the issue. Setting best_score_ to current_best_score_ in the first generation would be a better solution, in my opinion.
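
A standalone sketch of the proposed behaviour (illustrative names, not a patch to cv.py itself): seeding the running best with -inf instead of -1 lets negative scorers such as neg_mean_absolute_error update it.

    import numpy as np

    def update_best(best_score, best_params, current_score, current_params):
        # Keep whichever score is larger; works for negative scorers as well.
        if current_score > best_score:
            return current_score, current_params
        return best_score, best_params

    # Seed with -inf rather than -1 so the first generation always replaces it.
    best_score, best_params = -np.inf, None
    best_score, best_params = update_best(best_score, best_params, -3.2, {"C": 1.0})
    print(best_score, best_params)  # -3.2 {'C': 1.0}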

Can't instantiate abstract class EvolutionaryAlgorithmSearchCV with abstract methods _run_search

I used Python 2.7.15 to run test.py and got this error: TypeError: Can't instantiate abstract class EvolutionaryAlgorithmSearchCV with abstract methods _run_search

Could you please help me correct anything I missed? Below are all the packages I installed:

Package                            Version
---------------------------------- -----------
appdirs                            1.4.3
appnope                            0.1.0
asn1crypto                         0.24.0
attrs                              18.2.0
Automat                            0.7.0
backports-abc                      0.5
backports.shutil-get-terminal-size 1.0.0
bleach                             2.1.4
certifi                            2018.8.24
cffi                               1.11.5
configparser                       3.5.0
constantly                         15.1.0
cryptography                       2.3.1
Cython                             0.28.5
deap                               1.2.2
decorator                          4.3.0
entrypoints                        0.2.3
enum34                             1.1.6
functools32                        3.2.3.post2
futures                            3.2.0
html5lib                           1.0.1
hyperlink                          18.0.0
idna                               2.7
incremental                        17.5.0
ipaddress                          1.0.22
ipykernel                          4.10.0
ipython                            5.8.0
ipython-genutils                   0.2.0
ipywidgets                         7.4.2
Jinja2                             2.10
jsonschema                         2.6.0
jupyter                            1.0.0
jupyter-client                     5.2.3
jupyter-console                    5.2.0
jupyter-core                       4.4.0
MarkupSafe                         1.0
mistune                            0.8.3
mkl-fft                            1.0.6
mkl-random                         1.0.1
nbconvert                          5.3.1
nbformat                           4.4.0
notebook                           5.6.0
numpy                              1.15.2
pandas                             0.23.4
pandocfilters                      1.4.2
pathlib2                           2.3.2
pexpect                            4.6.0
pickleshare                        0.7.4
pip                                10.0.1
prometheus-client                  0.3.1
prompt-toolkit                     1.0.15
ptyprocess                         0.6.0
pyasn1                             0.4.4
pyasn1-modules                     0.2.2
pycparser                          2.19
Pygments                           2.2.0
pyOpenSSL                          18.0.0
python-dateutil                    2.7.3
pytz                               2018.5
pyzmq                              17.1.2
qtconsole                          4.4.1
scandir                            1.9.0
scikit-learn                       0.20.0
scipy                              1.1.0
Send2Trash                         1.5.0
service-identity                   17.0.0
setuptools                         40.2.0
simplegeneric                      0.8.1
singledispatch                     3.4.0.3
six                                1.11.0
sklearn-deap                       0.2.2
terminado                          0.8.1
testpath                           0.3.1
tornado                            5.1.1
traitlets                          4.3.2
Twisted                            17.5.0
wcwidth                            0.1.7
webencodings                       0.5.1
wheel                              0.31.1
widgetsnbextension                 3.4.2
zope.interface                     4.5.0

Minimizing a loss function

Regarding line 305 in cv.py: is there any way we can change the weights tuple to minimize a loss function rather than always maximize?
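
Not an answer from the maintainers, but two hedged options worth noting. In DEAP the optimization direction lives in the weights tuple (negative weights minimize), and on the scikit-learn side a neg_* scorer sidesteps the question entirely, because maximizing the score already minimizes the loss:

    from deap import base, creator

    creator.create("FitnessMax", base.Fitness, weights=(1.0,))   # positive weight: maximize
    creator.create("FitnessMin", base.Fitness, weights=(-1.0,))  # negative weight: minimize

    # Alternative that needs no library change: pick a negated scorer, e.g.
    # scoring="neg_mean_squared_error", so the built-in maximization minimizes MSE.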

ValueError when calling cv.fit() for optimising a neural network

Hi,

I am trying to optimise a neural network (Keras, TensorFlow), but I'm getting an error:
ValueError: Input contains NaN, infinity or a value too large for dtype('float32').

I have checked my input data for NaNs, infinities, and excessively large or small values. There aren't any.
I have forced the input data to be np.float32 before passing it to .fit().

I've used this algorithm before without any problems or special data prep, so I'm not sure where the error is creeping in.

The relevant bit of the code is in the attached file:
codetxt.txt

I should also say that when I manually call .fit() on my model, it works fine. The issue is something to do with how the cross-validation is working.

The full traceback is:

Traceback (most recent call last):
File "/home/users/hf832176/.conda/envs/tb_env6/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/home/users/hf832176/.conda/envs/tb_env6/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/home/users/hf832176/.conda/envs/tb_env6/lib/python3.6/site-packages/evolutionary_search/cv.py", line 104, in _evalFunction
error_score=error_score)[0]
File "/home/users/hf832176/.conda/envs/tb_env6/lib/python3.6/site-packages/sklearn/model_selection/_validation.py", line 568, in _fit_and_score
test_scores = _score(estimator, X_test, y_test, scorer, is_multimetric)
File "/home/users/hf832176/.conda/envs/tb_env6/lib/python3.6/site-packages/sklearn/model_selection/_validation.py", line 610, in _score
score = scorer(estimator, X_test, y_test)
File "/home/users/hf832176/.conda/envs/tb_env6/lib/python3.6/site-packages/sklearn/metrics/scorer.py", line 98, in call
**self._kwargs)
File "/home/users/hf832176/.conda/envs/tb_env6/lib/python3.6/site-packages/sklearn/metrics/regression.py", line 239, in mean_squared_error
y_true, y_pred, multioutput)
File "/home/users/hf832176/.conda/envs/tb_env6/lib/python3.6/site-packages/sklearn/metrics/regression.py", line 77, in _check_reg_targets
y_pred = check_array(y_pred, ensure_2d=False)
File "/home/users/hf832176/.conda/envs/tb_env6/lib/python3.6/site-packages/sklearn/utils/validation.py", line 573, in check_array
allow_nan=force_all_finite == 'allow-nan')
File "/home/users/hf832176/.conda/envs/tb_env6/lib/python3.6/site-packages/sklearn/utils/validation.py", line 56, in _assert_all_finite
raise ValueError(msg_err.format(type_err, X.dtype))
ValueError: Input contains NaN, infinity or a value too large for dtype('float32').
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "NN_GSCV-DL2.py", line 308, in
grid_result = cv.fit(X_train, y_train)
File "/home/users/hf832176/.conda/envs/tb_env6/lib/python3.6/site-packages/evolutionary_search/cv.py", line 363, in fit
self._fit(X, y, possible_params)
File "/home/users/hf832176/.conda/envs/tb_env6/lib/python3.6/site-packages/evolutionary_search/cv.py", line 453, in _fit
halloffame=hof, verbose=self.verbose)
File "/home/users/hf832176/.conda/envs/tb_env6/lib/python3.6/site-packages/deap/algorithms.py", line 150, in eaSimple
fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
File "/home/users/hf832176/.conda/envs/tb_env6/lib/python3.6/multiprocessing/pool.py", line 266, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/home/users/hf832176/.conda/envs/tb_env6/lib/python3.6/multiprocessing/pool.py", line 644, in get
raise self._value
ValueError: Input contains NaN, infinity or a value too large for dtype('float32').

Not selecting the best fitness

I noticed this when I was running the genetic algorithm (GA) for an estimator constructed with Pipeline.

After 5 generations, the max of the max column is 0.80597, but the GA has selected 0.7810945273631841 as the best fitness.

Types [1, 1, 1] and maxint [0, 0, 3] detected
--- Evolve in 4 possible combinations ---
gen	nevals	avg     	min     	max     
0  	50    	0.690866	0.604975	0.776119
1  	36    	0.713134	0.656716	0.776119
2  	29    	0.720537	0.646766	0.781095
3  	30    	0.733602	0.655721	0.781095
4  	32    	0.742299	0.641791	0.800995
5  	27    	0.748507	0.635323	0.80597 
Best individual is: {'model': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
            max_depth=None, max_features='auto', max_leaf_nodes=None,
            min_impurity_split=1e-07, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=10, n_jobs=1, oob_score=False, random_state=None,
            verbose=0, warm_start=False), 'dimTransform': SelectFromModel(estimator=LinearSVC(C=0.01, class_weight=None, dual=False, fit_intercept=True,
     intercept_scaling=1, loss='squared_hinge', max_iter=1000,
     multi_class='ovr', penalty='l1', random_state=None, tol=0.0001,
     verbose=0),
        prefit=False, threshold=None), 'model__n_estimators': 10}
with fitness: 0.7810945273631841
Scoring evaluations: 0, Cache hits: 0, Total: 0

My code is below for reference:

def getModels(stepName):
    prefix = stepName + '__'
    return [
        {
            stepName: [DummyClassifier()],
            prefix + 'strategy': ['stratified', 'most_frequent'],
        },
        {
            stepName: [SVC()],
            prefix + 'C': [10 ** i for i in range(0, 4)],
            prefix + 'gamma': [10 ** -i for i in range(0, 7)],
            prefix + 'kernel': ['rbf'],
        },
        {
            stepName: [RandomForestClassifier()],
            prefix + 'n_estimators': [5, 10, 20, 40],
        },
        {
            stepName: [KNeighborsClassifier()],
            prefix + 'n_neighbors': [2, 4, 6, 8, 10, 15, 20],
            prefix + 'weights': ['uniform', 'distance']
        },
        {
            stepName: [MLPClassifier(max_iter=2000)],
            prefix + 'hidden_layer_sizes': [
                (100,),
                (100, 50, 25),
                (50, 25, 10),
                (5, 5, 5),
                (50,)
            ],
            prefix + 'activation': [
                'logistic', 'tanh', 'relu'
            ],
            prefix + 'alpha': [
                10 ** -4, 10 ** -3, 10 ** -2
            ]
        },
    ]

def getDimTransformers(stepName, numFeatures):
    prefix = stepName + '__'

    maxExp2 = math.ceil(math.log(numFeatures, 2)) + 1
    numFeats = [
        2 ** i if i < maxExp2 - 1 else
        numFeatures
        for i in range(1, maxExp2)
    ]

    maxVal = min(numFeatures, 32)
    maxExp2 = math.ceil(math.log(maxVal, 2)) + 1
    numComponents = [
        2 ** i if i < maxExp2 - 1 else
        numFeatures
        for i in range(1, maxExp2)
    ]

    lsvc = LinearSVC(C=0.01, penalty="l1", dual=False)
    return [
        # {
        #     stepName: [None],
        # },
        {
            stepName: [SelectFromModel(lsvc)]
        },
        {
            stepName: [SelectKBest(score_func=f_classif)],
            prefix + 'k': numFeats,
        },
        {
            stepName: [PCA()],
            prefix + 'n_components': numComponents,
        },
    ]

models = getModels('model')
dimTransformers = getDimTransformers(
    'dimTransform',
    numFeatures=XTrain.shape[1]
)

pipe = Pipeline(steps=[
    # Need to initialize to s/t, just use the first one
    ('dimTransform', dimTransformers[0]['dimTransform'][0]),
    ('model', models[0]['model'][0]),
])

param_grid = [
    {**m, **d} for m, d in itertools.product(models, dimTransformers)
]

grid = EvolutionaryAlgorithmSearchCV(
    estimator=pipe,
    params=param_grid,
    scoring=scoring,
    cv=list(TimeSeriesSplit(n_splits=10).split(XTrain)),
    verbose=1,
    population_size=50,
    gene_mutation_prob=0.10,
    gene_crossover_prob=0.5,
    tournament_size=3,
    generations_number=5,
    n_jobs=4
)

stuck after gen 1...


I have some datasets where the search gets stuck forever, on gen 1 for instance. Does it happen to you too? How can I figure out what the problem is?
Python is still running and using a lot of CPU, but after hours nothing happens. Any idea what the issue could be?

Warning messages: I got these warnings and don't know how to resolve them

C:\Users\rabia\Anaconda3\lib\site-packages\sklearn\cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
"This module will be removed in 0.20.", DeprecationWarning)
C:\Users\rabia\Anaconda3\lib\site-packages\sklearn\grid_search.py:43: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. This module will be removed in 0.20.
DeprecationWarning)

Enabling early-stopping in cv with proper eval_set

Hi,

is there a way to enable early-stopping as part of EvolutionaryAlgorithmSearchCV when cv=KFold()?

As I understand it, this is not part of the scikit-learn API because the parameters are not passed on to the estimator's fit() function.

It would be beneficial for LightGBM and others that provide early_stopping_rounds functionality based on an eval_metric.

Any suggestion for a temporary fix? All it needs, for example, is to pass

fit_params = {'early_stopping_rounds': 1000,
              'eval_metric': 'auc',
              'eval_set': [(train_x, train_y), (valid_x, valid_y)]}

to fit(), with the same fit(train_x, train_y) data as in eval_set.

Would a change work in this section?

...
for train, test in cv.split(X, y):
    assert len(train) > 0 and len(test) > 0, "Training and/or testing not long enough for evaluation."
    _score = _fit_and_score(estimator=individual.est, X=X, y=y, scorer=scorer,
                            train=train, test=test, verbose=verbose,
                            parameters=parameters, fit_params=fit_params, error_score=error_score)[0]
...

Something like:

fit_params.update({'eval_set': [(X[train], y[train]), (X[test], y[test])]})
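
A standalone sketch of that idea (hedged; the estimator is a placeholder assumed to accept LightGBM-style early_stopping_rounds/eval_set keywords): rebuild eval_set per fold so early stopping validates on the held-out split.

    import numpy as np
    from sklearn.model_selection import KFold

    X = np.random.rand(200, 5)
    y = np.random.randint(0, 2, 200)
    base_fit_params = {"early_stopping_rounds": 1000, "eval_metric": "auc"}

    for train, test in KFold(n_splits=3).split(X, y):
        fit_params = dict(base_fit_params)
        # Per-fold eval_set, as suggested above.
        fit_params["eval_set"] = [(X[train], y[train]), (X[test], y[test])]
        # estimator.fit(X[train], y[train], **fit_params)  # placeholder estimator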

Thanks

Sklearn deprecation

cross_validation has been replaced with model_selection and will soon be deprecated; we're already getting a warning. I tried to simply change this, but they have moved a few other things around and also changed how some functions fundamentally work.

What does it take to parallelize the search?

Great tool! It allows me to drastically expand the search space compared to GridSearchCV. Really promising for deep learning, as well as for standard scikit-learn-interfaced ML models.

Because I'm searching over a large space, this obviously involves training a bunch of models, and doing a lot of computations. scikit-learn's model training parallelizes this to ease the pain somewhat.

I tried using the toolbox.register('map', pool.map) approach described by DEAP, but didn't see any parallelization.

Is there a different approach I should take instead? Or is that a feature that hasn't been built yet? If so, what are the steps needed to get parallelization working?
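
For what it's worth, the n_jobs constructor argument (used in the README example above) appears to be the built-in route to parallel evaluation; a minimal, self-contained sketch on the digits dataset:

    import sklearn.datasets
    from sklearn.svm import SVC
    from sklearn.model_selection import StratifiedKFold
    from evolutionary_search import EvolutionaryAlgorithmSearchCV

    data = sklearn.datasets.load_digits()

    search = EvolutionaryAlgorithmSearchCV(estimator=SVC(),
                                           params={"kernel": ["rbf"],
                                                   "C": [0.1, 1, 10, 100],
                                                   "gamma": [1e-4, 1e-3, 1e-2]},
                                           scoring="accuracy",
                                           cv=StratifiedKFold(n_splits=4),
                                           population_size=20,
                                           generations_number=3,
                                           n_jobs=4)  # parallel workers; the tracebacks elsewhere suggest this maps to a multiprocessing pool
    search.fit(data["data"], data["target"])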

Cannot persist an EvolutionarySearchCV object

I am trying to persist several EvolutionaryAlgorithmSearchCV objects with joblib.
I have used this code:

ev_search = []
it = 0
import warnings
import time
from sklearn.externals import joblib

t0 = time.time()

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    for n, mod, par in zip(names, mods, params):
        print('Estimating ', n, '...', sep='')
        ev = EvolutionaryAlgorithmSearchCV(estimator=mod,
                                           params=par,
                                           scoring="roc_auc",
                                           cv=kf,
                                           verbose=0,
                                           population_size=100,
                                           gene_mutation_prob=0.10,
                                           gene_crossover_prob=0.5,
                                           tournament_size=3,
                                           generations_number=10,
                                           n_jobs=4)
        ev.fit(X_train, y_train)
        ev_search.append(ev)
        #with open('./models_final/evv_' + n + '.pkl', 'wb') as f:
        #    pickle.dump(ev, f)
        joblib.dump(ev, './models_final/evv_' + n + '.pkl')
        #print('Time elapsed:', str(t0 - time.time()))
        print('Saved ', n, ' to disk', sep='')
        print('----------------')
        print('')

When I try to load it

x = joblib.load('./models_final/evv_XGBoost.pkl')

I get this error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 x = joblib.load('./models_final/evv_XGBoost.pkl')

~/anaconda3/lib/python3.6/site-packages/sklearn/externals/joblib/numpy_pickle.py in load(filename, mmap_mode)
    576             return load_compatibility(fobj)
    577
--> 578         obj = _unpickle(fobj, filename, mmap_mode)
    579
    580     return obj

~/anaconda3/lib/python3.6/site-packages/sklearn/externals/joblib/numpy_pickle.py in _unpickle(fobj, filename, mmap_mode)
    506     obj = None
    507     try:
--> 508         obj = unpickler.load()
    509         if unpickler.compat_mode:
    510             warnings.warn("The file '%s' has been generated with a "

~/anaconda3/lib/python3.6/pickle.py in load(self)
   1048                     raise EOFError
   1049                 assert isinstance(key, bytes_types)
-> 1050                 dispatch[key[0]](self)
   1051         except _Stop as stopinst:
   1052             return stopinst.value

~/anaconda3/lib/python3.6/pickle.py in load_global(self)
   1336         module = self.readline()[:-1].decode("utf-8")
   1337         name = self.readline()[:-1].decode("utf-8")
-> 1338         klass = self.find_class(module, name)
   1339         self.append(klass)
   1340     dispatch[GLOBAL[0]] = load_global

~/anaconda3/lib/python3.6/pickle.py in find_class(self, module, name)
   1390             return _getattribute(sys.modules[module], name)[0]
   1391         else:
-> 1392             return getattr(sys.modules[module], name)
   1393
   1394     def load_reduce(self):

AttributeError: module 'deap.creator' has no attribute 'Individual'
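
A hedged workaround sketch (not a confirmed fix): deap.creator classes are defined at runtime, so they have to exist in the loading process before unpickling. The class names and base types below ('FitnessMax' on base.Fitness, 'Individual' as a list) are assumptions based on the tracebacks quoted elsewhere in these issues:

    from deap import base, creator
    from sklearn.externals import joblib  # plain `import joblib` on newer stacks

    # Recreate the runtime classes the pickle refers to before loading.
    creator.create("FitnessMax", base.Fitness, weights=(1.0,))
    creator.create("Individual", list, fitness=creator.FitnessMax)

    x = joblib.load('./models_final/evv_XGBoost.pkl')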

Fitness function

Could you please shed some light on the fitness function being used here?
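
Based on the _evalFunction traceback quoted in the ZeroDivisionError issue above, the fitness appears to be the mean cross-validated score of the candidate parameter set under the chosen scorer. A hedged, simplified sketch of that idea (not the library's actual code):

    from sklearn.base import clone
    from sklearn.metrics import get_scorer
    from sklearn.model_selection import StratifiedKFold

    def fitness(estimator, params, X, y, scoring="accuracy", cv=None):
        # Average the per-fold scores, which is what the score/n_test arithmetic
        # in _evalFunction amounts to.
        scorer = get_scorer(scoring)
        cv = cv if cv is not None else StratifiedKFold(n_splits=3)
        scores = []
        for train, test in cv.split(X, y):
            est = clone(estimator).set_params(**params)
            est.fit(X[train], y[train])
            scores.append(scorer(est, X[test], y[test]))
        return sum(scores) / len(scores)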

Project maintained

Hello,

Just wondering if this project is still maintained?

Thanks, regards

Do we not create a new instance of the estimator for each fit of cv?

I'm running into a weird issue, and I'm hoping someone who knows this library better might understand better.

Using this as a drop-in replacement for GridSearchCV in an already-built sklearn pipeline generally works well. It works across most of the different predictors we use in auto_ml. It even works with Keras scikit-learn-wrapped deep learning, which is my primary use case.

However, I recently started using ModelCheckpoint. So now I train the model, it stops training, and at the end I select the epoch of the model that had the best score. That best model is then the result of calling .fit on a single set of params.

This works just fine with GridSearchCV, but, for some reason, does not work with sklearn-deap.

It seems to be attempting to re-fit this same already-trained model. I'm running cv=2, and the error appears to happen on the second fit for a given set of model params. Is sklearn-deap trying to fit the same model twice for the same set of params, rather than creating a new instance of that model for the second fit on the other cv dataset?

Or, is there anything else that might help explain why I'm getting this odd behavior only on the second fit of a cv=2 search, and only from sklearn-deap, not GSCV?

Error using evolutionary search for MLPRegressor Parameter optimization

Following is my code:

from evolutionary_search import EvolutionaryAlgorithmSearchCV
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPRegressor

paramgrid = {"activation": ["identity", "logistic", "tanh", "relu"],
             "solver": ["lbfgs", "sgd", "adam"],
             "hidden_layer_sizes": [1, 2, 3, 4],
             "max_iter": [100, 150]}

cv = EvolutionaryAlgorithmSearchCV(estimator=MLPRegressor(),
                                   params=paramgrid,
                                   scoring="accuracy",
                                   cv=StratifiedKFold(n_splits=2),
                                   verbose=1,
                                   population_size=50,
                                   gene_mutation_prob=0.10,
                                   gene_crossover_prob=0.5,
                                   tournament_size=3,
                                   generations_number=5,
                                   n_jobs=4)
cv.fit(x, y)

The error I get:

ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
     20                                    generations_number=5,
     21                                    n_jobs=4)
---> 22 cv.fit(x, y)

/usr/share/anaconda2/lib/python2.7/site-packages/evolutionary_search/cv.pyc in fit(self, X, y)
    350         for possible_params in self.possible_params:
    351             _check_param_grid(possible_params)
--> 352             self._fit(X, y, possible_params)
    353         if self.refit:
    354             self.best_estimator_ = clone(self.estimator)

/usr/share/anaconda2/lib/python2.7/site-packages/evolutionary_search/cv.pyc in _fit(self, X, y, parameter_dict)
    419         pop, logbook = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2,
    420                                            ngen=self.generations_number, stats=stats,
--> 421                                            halloffame=hof, verbose=self.verbose)
    422
    423         # Save History

/usr/share/anaconda2/lib/python2.7/site-packages/deap/algorithms.pyc in eaSimple(population, toolbox, cxpb, mutpb, ngen, stats, halloffame, verbose)
    145     # Evaluate the individuals with an invalid fitness
    146     invalid_ind = [ind for ind in population if not ind.fitness.valid]
--> 147     fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
    148     for ind, fit in zip(invalid_ind, fitnesses):
    149         ind.fitness.values = fit

/usr/share/anaconda2/lib/python2.7/multiprocessing/pool.pyc in map(self, func, iterable, chunksize)
    249         '''
    250         assert self._state == RUN
--> 251         return self.map_async(func, iterable, chunksize).get()
    252
    253     def imap(self, func, iterable, chunksize=1):

/usr/share/anaconda2/lib/python2.7/multiprocessing/pool.pyc in get(self, timeout)
    565             return self._value
    566         else:
--> 567             raise self._value
    568
    569     def _set(self, i, obj):

ValueError: continuous is not supported
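
A hedged suggestion rather than a confirmed fix: "continuous is not supported" usually means classification tooling was applied to a continuous target, and here both scoring="accuracy" and StratifiedKFold expect class labels. Assuming the library accepts regression estimators at all (see the next issue), a regression scorer plus a plain KFold would look like this (paramgrid, x and y as defined above):

    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import KFold
    from evolutionary_search import EvolutionaryAlgorithmSearchCV

    cv = EvolutionaryAlgorithmSearchCV(estimator=MLPRegressor(),
                                       params=paramgrid,
                                       scoring="neg_mean_squared_error",  # regression scorer
                                       cv=KFold(n_splits=2),              # no stratification on a continuous y
                                       verbose=1,
                                       population_size=50,
                                       gene_mutation_prob=0.10,
                                       gene_crossover_prob=0.5,
                                       tournament_size=3,
                                       generations_number=5,
                                       n_jobs=4)
    cv.fit(x, y)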

Can this package tune parameters for regression models?

I tried to use it to tune a regression model and got an error saying that continuous values cannot be used. In the source code I also saw a call to is_classifier(), which checks whether the model is a classifier. So I would like to ask: can this package tune parameters for regression models as well?

scores for individuals in score_results are wrong

Here is the code I use:

import numpy as np
import evolutionary_search as ev_se

bound_pos = [0,1]
bound_neg = [-1,0]
data = dict()
for i in xrange(2):
        if i <=0:
                data["param_"+str(i)] = np.arange(bound_pos[0],bound_pos[1], 0.1)
        else:
                data["param_"+str(i)] = np.arange(bound_neg[0],bound_neg[1], 0.1)

def fit(param_0,param_1):
        sum1 = param_0+param_1
        return sum1

best_params, best_score, score_results, hist,log = ev_se.maximize(fit,data,args={},verbose=True,generations_number=100)
============================================
output: Best individual is: {'param_1': -0.1000000000000002, 'param_0': 0.90000000000000002}
with fitness: 0.8
============================================
However, when I look for all individuals with fitness 0.8 (there should be none except the best individual) in score_results:
===========================
print score_results
(({'param_0': 0.90000000000000002, 'param_1': -0.90000000000000002},
  0.79999999999999982),
 ({'param_0': 0.0, 'param_1': -0.20000000000000018}, 0.79999999999999982),
 ({'param_0': 0.90000000000000002, 'param_1': -0.80000000000000004},
  0.79999999999999982),
..........................................
=======================================

This is clearly incorrect, because for the first individual, {'param_0': 0.90000000000000002, 'param_1': -0.90000000000000002}, the score should be 0.0.

AttributeError: Can't get attribute 'Individual' on

#24
I still have the problem:
Traceback (most recent call last):
File "C:\Users\uesr\Anaconda3\lib\multiprocessing\process.py", line 258, in _bootstrap
self.run()
File "C:\Users\uesr\Anaconda3\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\uesr\Anaconda3\lib\multiprocessing\pool.py", line 108, in worker
task = get()
File "C:\Users\uesr\Anaconda3\lib\multiprocessing\queues.py", line 337, in get
return _ForkingPickler.loads(res)
AttributeError: Can't get attribute 'Individual' on <module 'deap.creator' from 'C:\Users\uesr\Anaconda3\lib\site-packages\deap\creator.py'>

Package versions:
deap 1.2.2
sklearn-deap 0.2.2

`.cv_results_` does not include info from first generation

I think there's a fenceposting/off-by-one error somewhere.

When I pass in generations_number = 1, it's actually 0-indexed, and gives me 2 generations. Similarly, if I pass in 2 generations, I actually get 3.

Then, when I examine the cv_results_ property, I notice that I only get the results from the generations after the first (0-indexed) generation.

This is most apparent if you set generations_number = 1.

I looked through the code quickly, but didn't see any obvious source of it. Hopefully someone who knows the library can find it more easily!

Doubts about encoding correctness

I have some doubts about the correctness of the current parameter encoding (to chromosome).

Let's assume that we have 2 categorical parameters f1 and f2:

Enc f1  f2
0000 a 1
0001 a 2
0010 a 3
0011 a 4
0100 a 5
0101 b 1
0110 b 2
0111 b 3
1000 b 4
1001 b 5
1010 c 1
1011 c 2
1100 c 3
1101 c 4
1110 c 5

If we use any crossover operator, for example a two-point crossover between two individuals:

(a, 4) 0011    0111 (b, 3)
            x
(c, 3) 1100    1000 (b, 4)

After crossover we get b, but neither parent has b as its first parameter.
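
For what it's worth, the "maxint [0, 24, 24]" line in the README output suggests each parameter is encoded as its own integer gene (an index into that parameter's value list) rather than as bits of one binary string. A hedged illustration of why crossover then cannot invent a value absent from both parents:

    param_values = {"f1": ["a", "b", "c"], "f2": [1, 2, 3, 4, 5]}
    names = list(param_values)

    def decode(individual):
        # One integer gene per parameter, indexing into that parameter's own list.
        return {name: param_values[name][gene] for name, gene in zip(names, individual)}

    parent1 = [0, 3]                    # decodes to (a, 4)
    parent2 = [2, 2]                    # decodes to (c, 3)
    child = parent1[:1] + parent2[1:]   # one-point crossover at the gene boundary
    print(decode(child))                # {'f1': 'a', 'f2': 3}, only values the parents carried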

Cannot import EvolutionaryAlgorithmSearchCV

Using the exact code sample, when I run the code I get an import error.

Traceback (most recent call last):
  File "test.py", line 18, in <module>
    from evolutionary_search import EvolutionaryAlgorithmSearchCV
  File "D:\Anaconda2\lib\site-packages\evolutionary_search\__init__.py", line 1, in <module>
    from .cv import *
  File "D:\Anaconda2\lib\site-packages\evolutionary_search\cv.py", line 6, in <module>
    from deap import base, creator, tools, algorithms
  File "D:\Python\zillow\deap.py", line 25, in <module>
    from evolutionary_search import EvolutionaryAlgorithmSearchCV
ImportError: cannot import name EvolutionaryAlgorithmSearchCV

import sklearn.datasets
import numpy as np
import random

data = sklearn.datasets.load_digits()
X = data["data"]
y = data["target"]

from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold

paramgrid = {"kernel": ["rbf"],
             "C"     : np.logspace(-9, 9, num=25, base=10),
             "gamma" : np.logspace(-9, 9, num=25, base=10)}

random.seed(1)

from evolutionary_search import EvolutionaryAlgorithmSearchCV
cv = EvolutionaryAlgorithmSearchCV(estimator=SVC(),
                                   params=paramgrid,
                                   scoring="accuracy",
                                   cv=StratifiedKFold(n_splits=4),
                                   verbose=1,
                                   population_size=50,
                                   gene_mutation_prob=0.10,
                                   gene_crossover_prob=0.5,
                                   tournament_size=3,
                                   generations_number=5,
                                   n_jobs=4)
cv.fit(X, y)

deap

Hi,
I'm having some problems regarding "import deap".
Wondering if you have had a similar problem or know how to fix it? :)

The problem starts with a line in deap/base.py:

Traceback (most recent call last):
  File "assignment1.py", line 17, in <module>
    from sklearn_deap.evolutionary_search import EvolutionaryAlgorithmSearchCV
  File "/home/user/anaconda3/lib/python3.5/site-packages/sklearn_deap/evolutionary_search/__init__.py", line 1, in <module>
    from .cv import *
  File "/home/user/anaconda3/lib/python3.5/site-packages/sklearn_deap/evolutionary_search/cv.py", line 8, in <module>
    from deap import base, creator, tools, algorithms
  File "/home/user/anaconda3/lib/python3.5/site-packages/deap/base.py", line 188
    raise TypeError, ("Both weights and assigned values must be a "
                    ^
SyntaxError: invalid syntax

I don't really understand whether this syntax is wrong or not.
I tried removing it; it got past that point and then raised an error about a print statement (Python 2 syntax without parentheses).
So I ran 2to3 on the code in the deap library, and now I get this error:

/home/user/anaconda3/lib/python3.5/site-packages/deap/tools/_hypervolume/pyhv.py:33: ImportWarning: Falling back to the python version of hypervolume module. Expect this to be very slow.
  "module. Expect this to be very slow.", ImportWarning)
Traceback (most recent call last):
  File "assignment1.py", line 250, in <module>
    cv = EvolutionaryAlgorithmSearchCV(estimator=svm.SVC(), params=paramgrid, scoring="accuracy", cv=kf, verbose=1, population_size=50, gene_mutation_prob=0.10, gene_crossover_prob=0.5, tournament_size=3, generations_number=5, n_jobs=4)
  File "/home/user/anaconda3/lib/python3.5/site-packages/sklearn_deap/evolutionary_search/cv.py", line 305, in __init__
    creator.create("FitnessMax", base.Fitness, weights=(1.0,))
  File "/home/user/anaconda3/lib/python3.5/site-packages/deap/creator.py", line 145, in create
    for obj_name, obj in kargs.iteritems():
AttributeError: 'dict' object has no attribute 'iteritems'

Any idea what might be going on? :)

Python3 compatibility is broken

There are two old-style print statements in __init__.py that break compatibility with Python 3.

I added brackets to turn them into function calls and that seemed to fix it, but I have not done extensive testing to see if there are any other compatibility issues.

Error Message While Calling fit() Method

AttributeError: can't set attribute

It points out that the error comes from the fit() method:

def fit(self, X, y=None):
    self.best_estimator_ = None
--> self.best_score_ = -1
    self.best_params_ = None
    for possible_params in self.possible_params:
        self._fit(X, y, possible_params)
    if self.refit:
        self.best_estimator_ = clone(self.estimator)
        self.best_estimator_.set_params(**self.best_params_)
        self.best_estimator_.fit(X, y)

Doesn't work with TimeSeriesSplit

sklearn-deap doesn't seem to like it when I use TimeSeriesSplit even though TimeSeriesSplit should work like all other cross validation functions in sklearn. I've been using TimeSeriesSplit with Pipeline and GridSearchCV fine; I just replaced the GridSearchCV with EvolutionaryAlgorithmSearchCV.

    from sklearn.model_selection import TimeSeriesSplit
    grid = EvolutionaryAlgorithmSearchCV(
        estimator=pipe,
        params=param_grid,
        scoring=scoring,
        cv = TimeSeriesSplit(n_splits=10),
        verbose=1,
        population_size=50,
        gene_mutation_prob=0.10,
        gene_crossover_prob=0.5,
        tournament_size=3,
        generations_number=5,
        n_jobs=4
    )
    grid.fit(XTrain, yTrain)
  File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/evolutionary_search/__init__.py", line 284, in fit
    self._fit(X, y, possible_params)
  File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/evolutionary_search/__init__.py", line 344, in _fit
    halloffame=hof, verbose=self.verbose)
  File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/deap/algorithms.py", line 147, in eaSimple
    fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
  File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/multiprocessing/pool.py", line 260, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/multiprocessing/pool.py", line 608, in get
    raise self._value
TypeError: 'TimeSeriesSplit' object is not iterable
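
A possible workaround, hedged, and the same trick used in the Pipeline issue earlier in this list: materialize the splits into a list of (train, test) index pairs before handing them to the search (pipe, param_grid, scoring, XTrain and yTrain are the names from the snippet above):

    from sklearn.model_selection import TimeSeriesSplit
    from evolutionary_search import EvolutionaryAlgorithmSearchCV

    cv_splits = list(TimeSeriesSplit(n_splits=10).split(XTrain))  # concrete (train, test) pairs

    grid = EvolutionaryAlgorithmSearchCV(
        estimator=pipe,
        params=param_grid,
        scoring=scoring,
        cv=cv_splits,
        verbose=1,
        population_size=50,
        gene_mutation_prob=0.10,
        gene_crossover_prob=0.5,
        tournament_size=3,
        generations_number=5,
        n_jobs=4
    )
    grid.fit(XTrain, yTrain)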

njobs > 1 throws error AttributeError: Can't get attribute 'Individual' on <module 'deap.creator' from '\\site-packages\\deap\\creator.py'>

Whenever I set n_jobs > 1 I get the errors copied below. With n_jobs = 1 it works fine, but without n_jobs > 1 it's not much better than GridSearchCV.

Process SpawnPoolWorker-19:
Traceback (most recent call last):
File "\lib\multiprocessing\process.py", line 258, in _bootstrap
self.run()
File "\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "\lib\multiprocessing\pool.py", line 108, in worker
task = get()
File "\lib\multiprocessing\queues.py", line 337, in get
return _ForkingPickler.loads(res)
AttributeError: Can't get attribute 'Individual' on <module 'deap.creator' from lib\site-packages\deap\creator.py'>
C:\Users\mdriscoll6\Anaconda3\envs\CSE6250-BD4H-3.6\lib\site-packages\sklearn\utils\deprecation.py:143: FutureWarning: The sklearn.metrics.scorer module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.metrics. Anything that cannot be imported from sklearn.metrics is now part of the private API.
warnings.warn(message, FutureWarning)
\lib\site-packages\sklearn\utils\deprecation.py:143: FutureWarning: The sklearn.metrics.scorer module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.metrics. Anything that cannot be imported from sklearn.metrics is now part of the private API.
warnings.warn(message, FutureWarning)
\lib\site-packages\sklearn\utils\deprecation.py:143: FutureWarning: The sklearn.metrics.scorer module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.metrics. Anything that cannot be imported from sklearn.metrics is now part of the private API.
warnings.warn(message, FutureWarning)
lib\site-packages\sklearn\utils\deprecation.py:143: FutureWarning: The sklearn.metrics.scorer module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.metrics. Anything that cannot be imported from sklearn.metrics is now part of the private API.
warnings.warn(message, FutureWarning)
lib\site-packages\sklearn\utils\deprecation.py:143: FutureWarning: The sklearn.metrics.scorer module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.metrics. Anything that cannot be imported from sklearn.metrics is now part of the private API.
warnings.warn(message, FutureWarning)

Process finished with exit code -1
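
A hedged observation rather than a confirmed fix: the SpawnPoolWorker names in this traceback mean the workers are started with the spawn method, which re-imports the main script in each worker, so the search should at minimum run behind a __main__ guard. A minimal sketch:

    from sklearn.datasets import load_digits
    from sklearn.model_selection import StratifiedKFold
    from sklearn.svm import SVC
    from evolutionary_search import EvolutionaryAlgorithmSearchCV

    def main():
        X, y = load_digits(return_X_y=True)
        search = EvolutionaryAlgorithmSearchCV(estimator=SVC(),
                                               params={"kernel": ["rbf"], "C": [0.1, 1, 10]},
                                               scoring="accuracy",
                                               cv=StratifiedKFold(n_splits=3),
                                               population_size=10,
                                               generations_number=3,
                                               n_jobs=4)
        search.fit(X, y)

    if __name__ == "__main__":  # keeps spawned workers from re-running the search at import time
        main()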

Can't get attribute 'Individual'

Trying to test the example code on the Indian Pima Diabetes dataset in Jupyter notebook 5.0.0, Python 3.6, I'm getting an error. The kernel is busy but no processes are running. Turning on debug mode shows:
...
File "c:\users\szymon\anaconda3\envs\tensorflow\lib\multiprocessing\queues.py", line 345, in get return ForkingPickler.loads(res) AttributeError: Can't get attribute 'Individual' on <module 'deap.creator' from 'c:\\users\\szymon\\anaconda3\\envs\\tensorflow\\lib\\site-packages\\deap\\creator.py'> File "c:\users\szymon\anaconda3\envs\tensorflow\lib\multiprocessing\pool.py", line 108, in worker task = get()

Does not work with pipelines

For tuning a single estimator this tool is awesome. But the standard gridsearch can actually accept a pipeline as an estimator, which allows you to evaluate different classifiers as parameters.

For some reason, this breaks with EvolutionaryAlgorithmSearchCV.

For example, set up a pipeline like this:

pipe = Pipeline([
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler()),
    ('classify', LogisticRegression())
])

Then define a parameter grid to include different classifiers:
param_grid_rf_big = [
    {'classify': [RandomForestClassifier(), ExtraTreesClassifier()],
     'classify__n_estimators': [500],
     'classify__max_features': ['log2', 'sqrt', None],
     'classify__min_samples_split': [2, 3],
     'classify__min_samples_leaf': [1, 2, 3],
     'classify__criterion': ['gini']}
]

When you pass this to EvolutionaryAlgorithmSearchCV, you should be able to set the estimator to 'pipe' and the params to 'param_grid_rf_big' and let it evaluate. This works with GridSearchCV, but not with EvolutionaryAlgorithmSearchCV.
