
fable-3dxrd / ImageD11


ImageD11 is a python code for identifying individual grains in spotty area detector X-ray diffraction images.

Home Page: https://imaged11.readthedocs.io/

License: GNU General Public License v2.0

Python 48.41% Batchfile 0.03% Shell 0.08% C 9.69% Fortran 1.94% C++ 0.07% JavaScript 0.65% Jupyter Notebook 39.06% Cython 0.07%
Topics: python, research, xray-crystallography, xray-diffraction-analysis, synchrotron, xray-diffraction

imaged11's People

Contributors

axelhenningsson, haixing0a, jadball, jbjacob94, jonwright, kif, smerkel, t20100, younes-elhachi


imaged11's Issues

Installing on Mac OSX

I'm trying to install on Mac OSX and am having some issues:

  1. The compiler flags in setup.py are for the gcc compiler, but the default C compiler on OSX is clang. I think I've solved this by installing gcc (from Homebrew) and then building the code like this:
CC="gcc-9" python setup.py build bdist_wheel
  2. After installing the wheel I just built with
pip install dist/ImageD11-1.9.7-cp38-cp38-macosx_10_15_x86_64.whl

and trying to launch the gui I get the following error:

(base) hektor@Johans-jobb ImageD11 % conda activate fable
(fable) hektor@Johans-jobb ImageD11 % ImageD11_gui.py 
Traceback (most recent call last):
  File "/Users/hektor/opt/anaconda3/envs/fable/bin/ImageD11_gui.py", line 52, in <module>
    from ImageD11 import __version__, guicommand
  File "/Users/hektor/opt/anaconda3/envs/fable/lib/python3.8/site-packages/ImageD11/guicommand.py", line 33, in <module>
    from ImageD11 import peakmerge, indexing, transformer, eps_sig_solver
  File "/Users/hektor/opt/anaconda3/envs/fable/lib/python3.8/site-packages/ImageD11/indexing.py", line 24, in <module>
    from . import cImageD11, unitcell
  File "/Users/hektor/opt/anaconda3/envs/fable/lib/python3.8/site-packages/ImageD11/cImageD11.py", line 6, in <module>
    from ImageD11._cImageD11 import *
ImportError: dlopen(/Users/hektor/opt/anaconda3/envs/fable/lib/python3.8/site-packages/ImageD11/_cImageD11.cpython-38-darwin.so, 2): Symbol not found: _GOMP_loop_nonmonotonic_dynamic_next
  Referenced from: /Users/hektor/opt/anaconda3/envs/fable/lib/python3.8/site-packages/ImageD11/_cImageD11.cpython-38-darwin.so
  Expected in: flat namespace
 in /Users/hektor/opt/anaconda3/envs/fable/lib/python3.8/site-packages/ImageD11/_cImageD11.cpython-38-darwin.so

I suspect this is related to the build problem somehow but I don't know how to solve it. Any ideas?

Error when calling grid_index_parallel

I'm receiving an error message when I try to use the grid_index_parallel module inside a custom script I am writing.

Instead of specifying the input flt file, par file etc from the command line, I am setting them separately:

from ImageD11.grid_index_parallel import grid_index_parallel
clean_peaks = '/path/to/cleanpeaks.flt'
optimized_pars = '/path/to/optimized_pars.par'
gridpars = {
    "DSTOL": 0.035,
    "OMEGAFLOAT": 0.5,
    "COSTOL": 0.002,
    "NPKS": 40,
    "TOLSEQ": [0.02, 0.015, 0.01],
    "SYMMETRY": "cubic",
    "RING1": [1, 0],
    "RING2": [1, 0],
    "NUL": True,
    "FITPOS": True,
    "tolangle": 0.5,
    "toldist": 172 * 2,
}
translations = [
    (t_x, t_y, t_z)
    for t_x in range(-600, 601, 400)
    for t_y in range(-600, 601, 400)
    for t_z in range(-600, 601, 400)
]

<code snipped>

if indexed:
    print("Already indexed!")
else:
    random.seed(int(time.time()))
    random.shuffle(translations)
    grid_index_parallel(clean_peaks, optimized_pars, 'out', gridpars, translations)

My custom script has its own input and output arguments, which are formatted differently from usual.
The rest of my code runs fine until we drop into multiprocessing:

WARNING:root:titles are sc  fc  omega
WARNING:root:titles are sc  fc  omega
Done init
Using a pool of 15 processes
Hello from ForkPoolWorker-1 7277 5 to doout.flt 95664
Hello from ForkPoolWorker-2 7278 5 to doout.flt 95664
Hello from ForkPoolWorker-4 7280 5 to doout.flt 95664
Hello from ForkPoolWorker-3 7279 5 to doout.flt 95664
Hello from ForkPoolWorker-5 7281 4 to doout.flt 95664
Hello from ForkPoolWorker-6 7282 4 to doout.flt 95664
Hello from ForkPoolWorker-7 7283 4 to doout.flt 95664
Hello from ForkPoolWorker-9 7285 4 to doout.flt 95664
Hello from ForkPoolWorker-8 7284 4 to doout.flt 95664
Hello from ForkPoolWorker-10 7286 4 to doout.flt 95664
Hello from ForkPoolWorker-11 7287 4 to doout.flt 95664
Hello from ForkPoolWorker-12 7288 4 to doout.flt 95664
Hello from ForkPoolWorker-13 7289 4 to doout.flt 95664
Hello from ForkPoolWorker-14 7290 4 to doout.flt 95664
Hello from ForkPoolWorker-15 7291 4 to doout.flt 95664
WARNING:root:titles are sc  fc  omega
WARNING:root:titles are sc  fc  omega
Caught exception in worker thread
Traceback (most recent call last):
  File "lib/python3.7/site-packages/ImageD11/grid_index_parallel.py", line 245, in wrap_test_many_points
    test_many_points( x )
  File "lib/python3.7/site-packages/ImageD11/grid_index_parallel.py", line 146, in test_many_points
    mytransformer.loadfileparameters(  parameters )
  File "lib/python3.7/site-packages/ImageD11/transformer.py", line 242, in loadfileparameters
    self.parameterobj.loadparameters(filename)
  File "lib/python3.7/site-packages/xfab/parameters.py", line 183, in loadparameters
    lines = open(filename,"r").readlines()
FileNotFoundError: [Errno 2] No such file or directory: '95664'

This number '95664' is actually sys.argv[2] in my script, so it feels like somehow that's getting passed to the grid_index_parallel module.
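If something in the call chain really is reading sys.argv (an assumed diagnosis, not confirmed), one minimal workaround is to hide the wrapper script's own arguments while the indexing runs:

```python
import sys
from contextlib import contextmanager

@contextmanager
def clean_argv():
    # Temporarily strip the wrapper's own command-line arguments, in case
    # code called underneath reads sys.argv directly. Restores sys.argv on
    # exit even if the indexing raises.
    saved = sys.argv
    sys.argv = sys.argv[:1]
    try:
        yield
    finally:
        sys.argv = saved
```

Then the call becomes `with clean_argv(): grid_index_parallel(clean_peaks, optimized_pars, 'out', gridpars, translations)`.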

Any ideas?

Thanks for your continued help.

omega_slop in indexing

Sort out the problem with omega errors being different to tth/eta errors for indexing. e.g. replace the hkl_tol with something more sensible.

Peaksearching...

  • Output peaks into an hdf file directly and make ascii files optional
  • Include basic filtering during peaksearch (npixels > x)
  • Separate the thresholding, labelling and properties steps from each other, so you can do:
    smoothing filter -> labels
    original data + labels -> centre of mass
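The proposed separation can be sketched with scipy.ndimage (illustrative only, not the ImageD11 implementation):

```python
import numpy as np
from scipy import ndimage

# Two synthetic single-pixel peaks on an empty frame.
frame = np.zeros((32, 32))
frame[10, 10] = 100.0
frame[20, 25] = 80.0

smoothed = ndimage.gaussian_filter(frame, sigma=1.5)   # smoothing filter
labels, npeaks = ndimage.label(smoothed > 1.0)         # -> labels
# original data + labels -> centre of mass
coms = ndimage.center_of_mass(frame, labels, range(1, npeaks + 1))
# 2 peaks, at (10, 10) and (20, 25)
```

Because the moments are computed from the original data, the smoothing only affects which pixels belong to each blob, not the peak positions.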

Cannot open files produced by powderimagetopeaks.py

Hello,

I'm having problems opening edf files generated with powderimagetopeaks.py.
When I open the files with fabian.py or try to peaksearch them, I receive the following error message:

Traceback (most recent call last):
  File ".local/bin/fabian.py", line 33, in <module>
    start()
  File ".local/bin/fabian.py", line 28, in start
    mainwin = appWin.appWin(root,filename=f,zoomfactor=0.5,mainwin='yes')
  File ".local/lib/python2.7/site-packages/Fabian/appWin.py", line 1097, in __init__
    self.openimage()
  File ".local/lib/python2.7/site-packages/Fabian/appWin.py", line 1923, in openimage
    img = openimage.openimage(filename)
  File "anaconda/4.6.14/64/envs/python2.7/lib/python2.7/site-packages/fabio/openimage.py", line 159, in openimage
    obj = obj.read(obj.filename, frame)
  File "anaconda/4.6.14/64/envs/python2.7/lib/python2.7/site-packages/fabio/edfimage.py", line 788, in read
    raise e
IOError: Invalid first header

I do not have the same problem if I write another file format with powderimagetopeaks.py, such as .cbf

Strain conventions

I compared strains from DeformationGradientTensor to strains from xfab/tools/ubi_to_u_and_eps.
The comparison fails even in the reference co-ordinate system. Below is my first test.

import numpy as np
from ImageD11.grain import grain
from ImageD11.finite_strain import e6_to_symm, DeformationGradientTensor as Ftensor
from xfab.tools import ubi_to_cell, ubi_to_u_and_eps

def strain_xfab(g1, g0):
    uc = ubi_to_cell(g0)
    U, E6 = ubi_to_u_and_eps(g1, uc)
    E = e6_to_symm(E6)
    e = np.dot(U, np.dot(E, U.T))
    return E, e

def test_strain(csys, n):
    ubis0 = [grain(np.random.random((3,3))).ubi for i in range(n)]
    ubis1 = [grain(np.random.random((3,3))).ubi for i in range(n)]
    E_Ftensor = [Ftensor(ubis1[i],ubis0[i]).finite_strain_ref() for i in range(n)]
    e_Ftensor = [Ftensor(ubis1[i],ubis0[i]).finite_strain_lab() for i in range(n)]
    E_xfab = [strain_xfab(ubis1[i],ubis0[i])[0] for i in range(n)]
    e_xfab = [strain_xfab(ubis1[i],ubis0[i])[1] for i in range(n)]
    if csys == 'reference':
        for i in range(n):
            assert np.allclose(E_Ftensor[i], E_xfab[i])
        return 'seems the strains in %s co-ordinate system are OK'%csys
    elif csys == 'lab':
        for i in range(n):
            assert np.allclose(e_Ftensor[i], e_xfab[i])
        return 'seems the strains in %s co-ordinate system are OK'%csys
    else:
        return 'csys is reference or lab'

print(test_strain('reference', 1000))
print(test_strain('lab', 1000))
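For comparison, the Green-Lagrange strain in the reference system can be written directly from the two UBI matrices. This generic sketch assumes the rows of ubi are the real-space lattice vectors; it is not necessarily the exact ImageD11 or xfab convention being compared above:

```python
import numpy as np

def green_lagrange_ref(ubi1, ubi0):
    # Rows of ubi are taken as the real-space lattice vectors, so the
    # deformation gradient F satisfies ubi1 = ubi0 @ F.T, giving
    # F = (inv(ubi0) @ ubi1).T and E = (F^T F - I) / 2.
    F = (np.linalg.inv(ubi0) @ ubi1).T
    return 0.5 * (F.T @ F - np.eye(3))

# A 1% stretch along x of a cubic a = 4 cell:
ubi0 = 4.0 * np.eye(3)
ubi1 = np.diag([4.04, 4.0, 4.0])
E = green_lagrange_ref(ubi1, ubi0)   # E[0,0] = 0.5*(1.01**2 - 1)
```

Checking both implementations against a known analytic case like this is a cheap way to localise which convention differs.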

Fly scanning support?

Hello,

I was wondering whether ImageD11, in principle, supports a variable omega step between scans?
Our idea to speed up data collection is to perform a fly scan with our tomography stage and constantly poll the detector with a 1 second exposure such that each second of exposure equates to an omega step of around 0.2 deg.
Is a variable omega step supported by the peaksearcher when reading from header files?

Gui for peak filtering

Let users plot any column from a columnfile against any other (in 2D at least).

  • Filter peaks like in fable gui (draw polygon with point inside test)
  • record the filtering that was done to make scripts later

openmp with multiprocessing and benchmarking

When running peaksearch, grid_index_parallel or refine_em, there are some multiprocessing options that do not play well with threading. Multiprocessing launches Ncores jobs and each job launches Ncores threads, so you end up with Ncores x Ncores tasks fighting for resources. That is tolerable when 2 cores give 4 tasks, but a real overhead with 64 * 64 = 4096.

Options/issues:

  • Offer and test a cImageD11.set_num_threads() -> call omp_set_num_threads
  • (better but py3 only?) https://github.com/joblib/threadpoolctl , also does numpy
  • Use : dask? joblib? etc replacing multiprocessing to also spread work on a cluster
  • OAR/SLURM: What is omp_get_max_threads and multiprocessing.cpu_count() versus cores assigned?

Related: add some benchmarking / testing code to verify whether cImageD11 functions are working properly and how they are performing versus thread count.
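A minimal sketch of capping threads per worker via OMP_NUM_THREADS (threadpoolctl would be the more robust route; the variable must be set in each worker before its OpenMP runtime initialises):

```python
import os
import multiprocessing

def threads_per_worker(nworkers, ncores=None):
    # Ncores workers x Ncores threads oversubscribes the machine; cap each
    # worker at ncores // nworkers OpenMP threads instead (minimum 1).
    if ncores is None:
        ncores = multiprocessing.cpu_count()
    return max(1, ncores // nworkers)

def init_worker(nthreads):
    # Pool initializer: runs in the child before any OpenMP code.
    os.environ["OMP_NUM_THREADS"] = str(nthreads)

# e.g.  Pool(nworkers, init_worker, (threads_per_worker(nworkers),))
```

With nworkers == ncores this gives one OpenMP thread per worker, which is the sane default for grid_index-style workloads.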

Peaksearch.py returns wrong omega values

Hi,

when using peaksearch.py on a series of diffraction patterns with an omega step size of 4 / 3 °, the resulting peaks.flt contains omega values which are not reasonable. E.g. peaks occur with omega values of 14.666666 °, but there is also one single peak with an omega value of 14.2796 °.

Going through peaksearcher.py, I traced this alteration of omega to labelimage.py, in the call to cImageD11.blob_moments. For completeness, cImageD11.blob_moments is called from mergelast.

Is this alteration of omega intentional behaviour? I would expect not.
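One assumed (unconfirmed) explanation: blob_moments reports an intensity-weighted centre-of-mass omega, so a blob that spans several frames lands between the 4/3 deg grid values:

```python
import numpy as np

# Three consecutive frames at a 4/3 deg step, with most of the blob's
# intensity in the middle frame but tails on the neighbours.
frame_omegas = np.array([4 / 3 * k for k in (10, 11, 12)])  # 13.33, 14.67, 16.0
intensities = np.array([10.0, 100.0, 5.0])

# Intensity-weighted centre of mass: off the omega grid.
omega_com = float(np.average(frame_omegas, weights=intensities))
```

If that is the mechanism, single-frame peaks keep grid omega values while multi-frame blobs do not, which would match seeing one odd value among many regular ones.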

Best regards,
Silvio

assignlabels refresh

Hi Jon,

I think that sometimes the label assignment (refinegrains.py/assignlabels) and the other new columns of the ".new" file are not correctly updated for all the peaks, especially when a user does several successive makemap runs. I noticed this with my data, using the version currently installed on rnice.
Here is a small script to test the assignment (I assumed that gx, gy, gz are the correct values written to the .new file by refinegrains.py/assignlabels/compute_gv with translation + score_and_assign).
test_assign.txt

What does it give with your data?

Happy Christmas and New Year

Best regards
Younes

Error when parsing makemap options with argparse

I am trying to parse the arguments for makemap.py so that I can call it in my wrapper function.

When I parse the arguments with argparse:

from argparse import ArgumentParser
from makemap import makemap as go_make
from makemap import get_options as get_map_options

parser = ArgumentParser()   # base parser for the wrapper script
map_parser = get_map_options(parser)
map_options, args = map_parser.parse_known_args()

I receive the following error message:

Traceback (most recent call last):
  File "auto_single_letterbox.py", line 166, in <module>
    map_parser = get_map_options(parser)
  File "/.../ImageD11/scripts/makemap.py", line 63, in get_options
    parser = ImageD11.refinegrains.get_options(parser)
  File ".../site-packages/ImageD11/refinegrains.py", line 87, in get_options
    help="Name of input parameter file")
  File ".../argparse.py", line 1373, in add_argument
    return self._add_action(action)
  File ".../argparse.py", line 1736, in _add_action
    self._optionals._add_action(action)
  File ".../argparse.py", line 1577, in _add_action
    action = super(_ArgumentGroup, self)._add_action(action)
  File ".../argparse.py", line 1387, in _add_action
    self._check_conflict(action)
  File ".../argparse.py", line 1526, in _check_conflict
    conflict_handler(action, confl_optionals)
  File ".../argparse.py", line 1535, in _handle_conflict_error
    raise ArgumentError(action, message % conflict_string)
argparse.ArgumentError: argument -p/--parfile: conflicting option string: -p
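One possible workaround is to build the base parser with conflict_handler="resolve", so a re-registered option replaces the earlier one instead of raising ArgumentError; whether makemap's get_options accepts a pre-built parser like this is untested here:

```python
from argparse import ArgumentParser

# With conflict_handler="resolve", adding -p a second time replaces the
# earlier definition rather than raising "conflicting option string: -p".
parser = ArgumentParser(conflict_handler="resolve")
parser.add_argument("-p", "--parfile", help="Name of input parameter file")
parser.add_argument("-p", "--parfile", help="re-registered, no error")

ns, extra = parser.parse_known_args(["-p", "pars.par"])
```

This keeps the last-registered behaviour for any option both get_options functions define, which may or may not be what the wrapper wants.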

Improve gui with 'cell_sg'

ImageD11 has a hidden cell_sg parameter which provides a space group number, so that the list of predicted unit cell peaks is generated from the space group rather than from the lattice only.

The tkgui is old, but my students use it (mostly for the transformer section) and keep forgetting about this cell_sg parameter. I could improve the gui with a menu choice between lattice-only and space-group peak generation.

Could later be improved to start from a cif file as well.

Refinement during indexing: strained or larger samples

The unit cell may be approximately known (but not exactly), e.g. at high pressure or for chemically different samples.
The grain position may be relatively far from the centre of rotation.

This is still the far-field case, where peaks can be assigned hkl indices from their two-theta.

Add a (constrained?) cell parameter refinement or position refinement (or both) during indexing.

Long axis indexing...

When you have a long axis then drlv is not a very good value for deciding if a peak is indexed. Better would be the error in pixels / omega.

Cythonize part of the code

[Possible enhancement]

Hi Jon,

I know you wrapped some C code into ImageD11 to enhance performance (simplex etc); what about cythonizing more modules?
A cython version of find(), scorethem(), refineposition() etc. would, in theory, make the nested loops in grid_index a lot faster.
Most of the variables in these routines are numpy arrays, so I'm not sure this enhancement would actually speed things up (numpy is supposed to be relatively fast and is in part built with C).

Anyway, the idea is: if you can make it, say, one order of magnitude faster, it would allow on-the-fly analysis, to some extent, during an experiment / beamtime.

Average sample strain tensor?

See if we could fit an average sample strain in the transformation gui. Apply this to g-vectors to make indexing easier. Will need a bit of testing to see if you can limit it to two theta shifts?

Validating grains - completeness etc

Pointed out by Marta: in diffraction tomography we see the same hkl spots several times so that the number of indexed peaks is not a good figure of merit to accept a grain in indexing. Other candidate methods could include:

  • completeness criteria
  • number of different hkl values, something like : len(set( hkls ))
  • merge equivalent peaks (tth, eta, omega) prior to indexing them
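The second bullet is cheap to compute and already discounts the repeated observations:

```python
# In diffraction tomography the same hkl spot is seen several times, so the
# raw peak count flatters a candidate grain; the distinct-hkl count does not.
hkls = [(1, 1, 1), (1, 1, 1), (2, 0, 0), (1, 1, 1), (2, 2, 0), (2, 0, 0)]

n_peaks = len(hkls)         # 6 -- inflated by repeats
n_unique = len(set(hkls))   # 3 -- closer to real hkl coverage
```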

Peaksearch incorrectly placing peaks

Hello,

Not sure whether this is a bug or something I am doing wrong.
Below is a single frame powder diffraction image from our detector:
[image: single-frame powder diffraction pattern]

Now, I use the following commands to generate peaks from it:

powderimagetopeaks.py input_image.cbf c0000.edf c0001.edf 730 680
peaksearch.py -n c -f 0 -l 1 -t 300 -t 600 -o ceo2.spt

Here's a quick look at c0000.edf:

[image: c0000.edf]

It looks great!
However, when I plot the peaks in ImageD11:

[image: peak positions plotted in ImageD11]

Am I doing something wrong?
To me, it looks like peaksearch isn't counting the module dead zones as actual regions of the image?

Any inputs welcome.

Thanks!

ImageD11 needs a new release on PyPI : current does not install with pip

Hello,

When trying to install ImageD11 from PyPI with the following command:
pip install ImageD11
I receive the following error message:

    ERROR: Command errored out with exit status 1:
     command: /home/kuc84153/.virtualenvs/test/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-8c9jsL/imaged11/setup.py'"'"'; __file__='"'"'/tmp/pip-install-8c9jsL/imaged11/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-8c9jsL/imaged11/pip-egg-info
         cwd: /tmp/pip-install-8c9jsL/imaged11/
    Complete output (5 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-install-8c9jsL/imaged11/setup.py", line 37, in <module>
        import src.bldlib
    ImportError: No module named src.bldlib
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

This is on a fresh Python 2.7 virtualenv after installing the following modules with pip:

pip
numpy
scipy
pyopengl
PIL (from http://effbot.org/downloads/Imaging-1.1.7.tar.gz)

Am I missing a dependency?

Thanks.

Forward projection

  • Update the forward_project.py script: with the current version, peak coordinates fall outside the detector area (cf. image).
  • Future possibilities: build a UI on top of it (notebook, dashboard/plotly/bokeh, django web app?) to allow visualisation and interactive information about the peaks at each omega, comparison to experimental/simulated images, etc.

clean up the openmp stuff ...

Spec is here for old msvc compatible pragmas : https://www.openmp.org/wp-content/uploads/cspec20.pdf

Today, makemap.py uses a single core, while refine_em and grid_index run many processes. Indexing does not seem to be threaded either.

  • add/use if clauses to skip threading overhead on small datasets
  • set the "right" number of threads depending what is being done (multiprocessing, especially grid_index or threading)

Clean up the indexing code

Seems we get several grains coming out that should have been removed by the uniqueness cutoff, but somehow get through.

Issues resetting indexer when calling from wrapper script

Hello,

I have written a "wrapper" python script to ImageD11, so that it can index multiple 3DXRD scans, one after the other.
For each scan, I am running the preparatory work (peaksearching, cleaning, merging etc.), then calling a function which indexes the current scan.

My indexing function is as follows:

def init_index(gvecs, index_pars, outubis):
    """
    Performs the initial indexing of g-vectors (same as "Auto-find" in the ImageD11 gui).
    """
    start = timer()
    # myindexer is a module-level ImageD11.indexing.indexer instance
    myindexer.loadpars(index_pars)
    myindexer.readgvfile(gvecs)
    myindexer.score_all_pairs()
    myindexer.saveubis(outubis)
    myindexer.reset()
    end = timer()
    print("Initial index took ", (end - start)/60, " minutes.")

If I do not include a myindexer.reset() line, the number of grains found does not reset between scans.
For example, if I found 40 grains in my first scan, when I went to index the second scan, the number of grains found would start at 40, then increase from there. I'd like each scan to be considered independent of the last.

I am having issues when trying to reset the indexer:

...
Tested 9 pairs and found 90 grains so far
Traceback (most recent call last):
  File "auto_single_letterbox.py", line 291, in <module>
    init_index(clean_gves, initial_index_pars, clean_ubis)
  File "auto_single_letterbox.py", line 118, in init_index
    myindexer.reset()
  File "/.../site-packages/ImageD11/indexing.py", line 319, in reset
    self.__dict__ = copy.deepcopy( self.__pristine_dict )
AttributeError: 'indexer' object has no attribute '_indexer__pristine_dict'

Any ideas?
Thanks in advance.
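Since __pristine_dict apparently is not set at the point reset() is called, one workaround is to build a fresh indexer per scan and skip reset() altogether. Sketched generically here (not the ImageD11 API):

```python
# Generic illustration: constructing a fresh object per scan means no state
# can leak between scans, so no reset() call is needed at all.
class TinyIndexer:
    def __init__(self):
        self.grains = []

    def index(self, gvecs):
        self.grains.extend(gvecs)
        return len(self.grains)

# Each scan starts from zero grains.
counts = [TinyIndexer().index(scan) for scan in ([1, 2, 3], [4, 5])]
```

Applied to ImageD11, that would mean constructing the indexer inside init_index rather than reusing a module-level instance.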

Grain map stitch script?

Hello,

Have you (or anyone else you know) written a script that stitches together grain maps?
We've just finished our data collection here at Diamond, and we did multiple vertically-stacked letterbox scans with an overlap, in an attempt to stitch the grain maps together afterwards.

If not, I have a script that currently does the following:

  1. Given two map files, and an estimation of the affine transformation matrix between them, use your match.py script to correspond grains in map 1 with those in the transformed map 2
  2. Use an iterative closest point method to estimate the actual affine transformation between the two map files, given your initial guess.
  3. Apply the new matrix to map 2, then get the number of corresponding grains between that new map 2 and map 1.

It seems to work reasonably well (we're getting decent numbers here), but I'd love your input on a strategy for stitching (we'd need to choose which grain to keep in the overlapping region).
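Step 3 can be sketched with a KD-tree (illustrative; the (A, t) affine parameters and the toldist name are assumptions, not match.py's API):

```python
import numpy as np
from scipy.spatial import cKDTree

def count_matches(c1, c2, A, t, toldist):
    # Apply the affine transform (A, t) to map-2 grain centroids, then count
    # how many land within toldist of their nearest map-1 centroid.
    c2t = c2 @ A.T + t
    dists, _ = cKDTree(c1).query(c2t)
    return int(np.sum(dists < toldist))

# Toy example: one grain pair matches within 5 um, one does not.
c1 = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0]])
c2 = np.array([[1.0, 0.0, 0.0], [500.0, 0.0, 0.0]])
n = count_matches(c1, c2, np.eye(3), np.zeros(3), toldist=5.0)
```

Maximising this count over (A, t) is essentially the iterative-closest-point objective described in step 2.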

Thanks,
James

plot3d.py

Got error when indexing -> plot x/y/z:

Got wavelength from gv file of 0.153684
Got wedge from gv file of 0.0
Read your gv file containing (2, 3)
DEBUG : python return array
Exception in Tkinter callback
Traceback (most recent call last):
  File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1544, in __call__
    return self.func(*args)
  File "/home/junyue/.local/lib/python2.7/site-packages/ImageD11/guiindexer.py", line 154, in plotxyz
    self.plot3d = plot3d.plot3d(self.parent, gv)
  File "/home/junyue/.local/lib/python2.7/site-packages/ImageD11/plot3d.py", line 84, in __init__
    if data != None:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
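The failing test should compare identity, not equality: on a numpy array, != broadcasts elementwise. A minimal demonstration of the fix:

```python
import numpy as np

data = np.array([1.0, 2.0])

# `data != None` broadcasts, giving an array whose truth value is ambiguous
# inside an `if` -- hence the ValueError above.
elementwise = data != None

# `data is not None` is a plain scalar test and is the correct guard.
ok = data is not None
```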

Error when running code from match.py

Hello,

Has some behaviour changed in xfab for the ubi_to_u_b function in tools?

When I call xfab.tools.ubi_to_u_b in the same way that you do in match.py:

for g in g1l + g2l:
    g.u = xfab.tools.ubi_to_u_b(g.ubi)[0]
    assert (abs(np.dot(g.u, g.u.T) - np.eye(3)).ravel()).sum() < 1e-6

I receive the following error message:

Traceback (most recent call last):
  File "translation_finder.py", line 247, in <module>
    g1_matched, g2_matched, ndt = find_matches(g1l, g2l, h, tolangle, toldist, init_pose)
  File "translation_finder.py", line 141, in find_matches
    g.u = xfab.tools.ubi_to_u_b(g.ubi)[0]
AttributeError: can't set attribute

Unless I've finally lost it, this behaviour seems to have changed over Christmas? My script was working before.

Any ideas?

Thanks

Sort out the Lorentz and Polarisation factors

Function lf() in refinegrains.py has some problems.

  • you cannot compute it without the position of the grain
    ... so it must come after indexing and per grain/voxel

  • do you use it for reciprocal space binning (rsv_mapper.py) ?
    ... option in crysalis to say "yes" or "no" and apparently "no" is "right"

  • it cannot be correct to divide by zero (except for topo-tomo?)

  • it depends on the wedge angle (e.g. topo-tomo case wedge=theta)

  • integrated intensity depends on integration of peak tails anyway

  • you also need polarisation (synchrotron with axis direction versus lab source with mono angle)

  • for broad peaks (texture) it varies across the peak

Suggestion : create some functions in xfab for computing lorentz and polarisation in terms of the fable geometry and call on them. Will need to add some parameters for polarisation.
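For reference, a commonly used rotation-geometry form (an assumed form, not necessarily what lf() implements or what xfab would adopt) makes the divide-by-zero explicit:

```python
import numpy as np

def lorentz(tth_deg, eta_deg):
    # Assumed rotation-geometry Lorentz factor for 3DXRD:
    #   L = sin(tth) * |sin(eta)|      (tth = two-theta)
    # L -> 0 at eta = 0 or 180 deg, so dividing intensities by L blows up
    # there -- the divide-by-zero problem noted above.
    return np.sin(np.radians(tth_deg)) * np.abs(np.sin(np.radians(eta_deg)))
```

Any xfab implementation would need to handle the eta = 0 limit (and the wedge-dependent topo-tomo case) explicitly rather than dividing blindly.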

Create a single launcher

  • Remove most of the clutter from people's paths (e.g. fewer scripts)
  • Use single launcher like "silx view" for example:
    ImageD11 peaksearch
    ImageD11 makemap
    ImageD11 plotgrainhist
    etc
  • Gui option for running jobs (argparse/gooey/argparseui)

indexing validation plots

Plot dstar versus intensity or number of peaks showing tick marks + assigned or not for rings.

Get hkls from indexer.unitcell.ringhkls and gve from the indexer; plot gve on a pole figure together with gcalc = UB.hkl (with UB = inv(UBI)) for each grain.

Decide whether this is for the Tk gui or some new thing ...
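Computing gcalc for one grain is then a one-liner (toy cubic UBI assumed for illustration):

```python
import numpy as np

# Toy grain: cubic a = 4 Angstrom, unrotated, so UBI = 4 * I.
ubi = 4.0 * np.eye(3)
ub = np.linalg.inv(ubi)            # UB = inv(UBI)

hkl = np.array([1.0, 1.0, 0.0])
gcalc = ub @ hkl                   # calculated g-vector for this hkl
dstar = np.linalg.norm(gcalc)      # |g| = 1/d, for the dstar-vs-intensity plot
```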
