
graphmae's Introduction



GraphMAE: Self-Supervised Masked Graph Autoencoders

Implementation for KDD'22 paper: GraphMAE: Self-Supervised Masked Graph Autoencoders.

We also have a Chinese blog about GraphMAE on Zhihu (知乎) and an English blog on Medium.

GraphMAE is a generative self-supervised graph learning method that achieves competitive or better performance than existing contrastive methods on tasks including node classification, graph classification, and molecular property prediction.
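In brief, GraphMAE masks the input features of a random subset of nodes, replaces them with a shared learnable [MASK] token, and trains a GNN encoder-decoder to reconstruct the original features of the masked nodes. The following is a minimal sketch of that idea; the module names (`encoder`, `decoder`, `enc_mask_token`, `encoder_to_decoder`, `criterion`) follow the paper's description but are illustrative, not the repository's exact interface:

```python
import torch

def masked_reconstruction_loss(model, graph, feat, mask_rate=0.5):
    # Randomly pick the nodes whose input features will be hidden.
    num_nodes = feat.shape[0]
    mask_nodes = torch.randperm(num_nodes)[: int(mask_rate * num_nodes)]

    # Replace masked features with a shared learnable [MASK] token.
    x = feat.clone()
    x[mask_nodes] = model.enc_mask_token      # broadcasts a (1, in_dim) parameter

    # Encode the corrupted graph, project, then decode.
    rep = model.encoder(graph, x)
    rep = model.encoder_to_decoder(rep)
    rep[mask_nodes] = 0.0                     # re-mask before decoding

    # The loss is computed only on the masked nodes.
    recon = model.decoder(graph, rep)
    return model.criterion(recon[mask_nodes], feat[mask_nodes])
```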


❗ Update

[2023-04-12] GraphMAE2 is published and the code can be found here.

[2022-12-14] The PYG implementation of GraphMAE for node / graph classification is available at this branch.

Dependencies

  • Python >= 3.7
  • PyTorch >= 1.9.0
  • dgl >= 0.7.2
  • pyyaml == 5.4.1

Quick Start

For a quick start, you can run the provided scripts:

Node classification

sh scripts/run_transductive.sh <dataset_name> <gpu_id> # for transductive node classification
# example: sh scripts/run_transductive.sh cora/citeseer/pubmed/ogbn-arxiv 0
sh scripts/run_inductive.sh <dataset_name> <gpu_id> # for inductive node classification
# example: sh scripts/run_inductive.sh reddit/ppi 0

# Or you could run the code manually:
# for transductive node classification
python main_transductive.py --dataset cora --encoder gat --decoder gat --seed 0 --device 0
# for inductive node classification
python main_inductive.py --dataset ppi --encoder gat --decoder gat --seed 0 --device 0

Supported datasets:

  • transductive node classification: cora, citeseer, pubmed, ogbn-arxiv
  • inductive node classification: ppi, reddit

Run the provided scripts, or add --use_cfg to the command, to reproduce the reported results.

Graph classification

sh scripts/run_graph.sh <dataset_name> <gpu_id>
# example: sh scripts/run_graph.sh mutag/imdb-b/imdb-m/proteins/... 0 

# Or you could run the code manually:
python main_graph.py --dataset IMDB-BINARY --encoder gin --decoder gin --seed 0 --device 0

Supported datasets:

  • IMDB-BINARY, IMDB-MULTI, PROTEINS, MUTAG, NCI1, REDDIT-BINARY, COLLAB

Run the provided scripts, or add --use_cfg to the command, to reproduce the reported results.

Molecular Property Prediction

Please refer to the code in ./chem for molecular property prediction.

Datasets

Datasets used in node classification and graph classification will be downloaded automatically from https://www.dgl.ai/ when running the code.
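For a quick sanity check, the node-classification datasets are DGL built-ins and can be loaded directly; this is plain DGL usage, independent of this repository's loaders:

```python
from dgl.data import CoraGraphDataset

dataset = CoraGraphDataset()          # downloads to ~/.dgl on first use
graph = dataset[0]
print(graph.num_nodes(), graph.ndata["feat"].shape, graph.ndata["label"].shape)
```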

Experimental Results

Node classification (Micro-F1, %):

| Method | Cora | Citeseer | PubMed | Ogbn-arxiv | PPI | Reddit |
| --- | --- | --- | --- | --- | --- | --- |
| DGI | 82.3±0.6 | 71.8±0.7 | 76.8±0.6 | 70.34±0.16 | 63.80±0.20 | 94.0±0.10 |
| MVGRL | 83.5±0.4 | 73.3±0.5 | 80.1±0.7 | - | - | - |
| BGRL | 82.7±0.6 | 71.1±0.8 | 79.6±0.5 | 71.64±0.12 | 73.63±0.16 | 94.22±0.03 |
| CCA-SSG | 84.0±0.4 | 73.1±0.3 | 81.0±0.4 | 71.24±0.20 | 73.34±0.17 | 95.07±0.02 |
| GraphMAE (ours) | 84.2±0.4 | 73.4±0.4 | 81.1±0.4 | 71.75±0.17 | 74.50±0.29 | 96.01±0.08 |

Graph classification (Accuracy, %)

| Method | IMDB-B | IMDB-M | PROTEINS | COLLAB | MUTAG | REDDIT-B | NCI1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| InfoGraph | 73.03±0.87 | 49.69±0.53 | 74.44±0.31 | 70.65±1.13 | 89.01±1.13 | 82.50±1.42 | 76.20±1.06 |
| GraphCL | 71.14±0.44 | 48.58±0.67 | 74.39±0.45 | 71.36±1.15 | 86.80±1.34 | 89.53±0.84 | 77.87±0.41 |
| MVGRL | 74.20±0.70 | 51.20±0.50 | - | - | 89.70±1.10 | 84.50±0.60 | - |
| GraphMAE (ours) | 75.52±0.66 | 51.63±0.52 | 75.30±0.39 | 80.32±0.46 | 88.19±1.26 | 88.01±0.19 | 80.40±0.30 |

Transfer learning on molecular property prediction (ROC-AUC, %):

| Method | BBBP | Tox21 | ToxCast | SIDER | ClinTox | MUV | HIV | BACE | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AttrMasking | 64.3±2.8 | 76.7±0.4 | 64.2±0.5 | 61.0±0.7 | 71.8±4.1 | 74.7±1.4 | 77.2±1.1 | 79.3±1.6 | 71.1 |
| GraphCL | 69.7±0.7 | 73.9±0.7 | 62.4±0.6 | 60.5±0.9 | 76.0±2.7 | 69.8±2.7 | 78.5±1.2 | 75.4±1.4 | 70.8 |
| GraphLoG | 72.5±0.8 | 75.7±0.5 | 63.5±0.7 | 61.2±1.1 | 76.7±3.3 | 76.0±1.1 | 77.8±0.8 | 83.5±1.2 | 73.4 |
| GraphMAE (ours) | 72.0±0.6 | 75.5±0.6 | 64.1±0.3 | 60.3±1.1 | 82.3±1.2 | 76.3±2.4 | 77.2±1.0 | 83.1±0.9 | 73.8 |

Citing

If you find this work helpful to your research, please consider citing our paper:

@inproceedings{hou2022graphmae,
  title={GraphMAE: Self-Supervised Masked Graph Autoencoders},
  author={Hou, Zhenyu and Liu, Xiao and Cen, Yukuo and Dong, Yuxiao and Yang, Hongxia and Wang, Chunjie and Tang, Jie},
  booktitle={Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
  pages={594--604},
  year={2022}
}

graphmae's People

Contributors

think2try, xiao9905

graphmae's Issues

How large a graph is this typically applicable to?

I have a graph with millions of nodes. Looking at the feature handling, if every node feature is 768-dimensional, the features alone take about 40 GB. Is the model still applicable in this case? I only have an 11 GB GPU.
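(A note on the question above: one common workaround for graphs of this scale, not something this repository ships, is mini-batch training with neighbor sampling so that only the sampled subgraph's features move to the GPU. A sketch with DGL's 0.7/0.8-era sampling API; `graph` and `train_nids` are assumed given:)

```python
import dgl
import torch

# `graph` is a DGLGraph with ndata["feat"]; `train_nids` are the seed node IDs.
sampler = dgl.dataloading.MultiLayerNeighborSampler([10, 10])   # fanouts for 2 GNN layers
dataloader = dgl.dataloading.NodeDataLoader(
    graph, train_nids, sampler, batch_size=1024, shuffle=True, drop_last=False)

for input_nodes, output_nodes, blocks in dataloader:
    blocks = [b.to("cuda") for b in blocks]
    batch_feat = graph.ndata["feat"][input_nodes].to("cuda")    # only the sampled rows
    # run the model forward/backward on `blocks` with `batch_feat` here
```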

Cannot reproduce the paper's results with the following command

Hello, I ran the command below and could not reproduce the paper's results; could you help me understand why? Many thanks. My environment:
ogb=1.3.6
pytorch=1.12.0
dgl-cuda11.3=0.9.1
python main_transductive.py --dataset cora --encoder gat --decoder gat --seed 0 --device 0

--- TestAcc: 0.5650, early-stopping-TestAcc: 0.5650, Best ValAcc: 0.5840 in epoch 29 ---

final_acc: 0.5650±0.0000
early-stopping_acc: 0.5650±0.0000

Graph Dataloader

import torch
from torch.utils.data.sampler import SubsetRandomSampler
from dgl.dataloading import GraphDataLoader

train_idx = torch.arange(len(graphs))
train_sampler = SubsetRandomSampler(train_idx)  # draws indices in random order

train_loader = GraphDataLoader(graphs, sampler=train_sampler, collate_fn=collate_fn, batch_size=batch_size, pin_memory=True)
eval_loader = GraphDataLoader(graphs, collate_fn=collate_fn, batch_size=batch_size, shuffle=False)

I would like to ask the authors why neither data loader sets shuffle=True. Also, when LayerNorm and BatchNorm are used in the model, will there be any message leakage?
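(A note on the question above: train_loader is in fact shuffled, because SubsetRandomSampler draws its indices in a fresh random order on every pass. A quick self-contained check:)

```python
from torch.utils.data.sampler import SubsetRandomSampler

sampler = SubsetRandomSampler(list(range(5)))
print(list(sampler))   # e.g. [3, 0, 4, 1, 2] -- a new permutation each iteration
```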

About using VGAE for node classification

The paper seems to report only link prediction numbers for VGAE. I pretrained a VGAE on link prediction myself and then used its encoder for downstream node classification, but the results are far off: for example, my Cora accuracy is only 45%, while your reported number is 71%. Could you explain how the VGAE node classification numbers were obtained?

Question about transfer learning time cost.

Hi Zhenyu,
I am curious about the time cost of pretraining in the transfer learning setting. I found that one epoch of pretraining with GraphMAE's strategy on 100k samples runs very slowly (it is expected to take hours on an Nvidia 3090), and the running time differs across batches. This does not look like normal training behaviour, because a shallow GNN decoder for reconstruction should not add much computational burden.

Kind regards,
reindeer

TypeError: 'NoneType' object is not callable

File "/data2/haleyan/code/GraphMAE-main/graphmae/models/gin.py", line 186, in __init__
    self.norms.append(create_norm(norm)(hidden_dim))
TypeError: 'NoneType' object is not callable

Why do I get such an error when running the graph classification dataset?

Dataset split is not the standard one used in GCN

Dear authors,
I have noticed that you use DGL to load the Cora/Citeseer/Pubmed datasets. But the corresponding data split is not the split used in the GCN paper (as pointed out in old issues of MVGRL). Therefore, the comparison is not fair to the baselines (like GCN and GAT).
Moreover, even under the same DGL split setting (as pointed out in your baseline paper [11]), MVGRL [11] also beats yours, but you did not compare against it?
For example, MVGRL gets 86.8% node classification accuracy on Cora, while you only get 84.2%?
Are there any mistakes?

inductive node classification

Hello, when I ran the node classification part of the code with the settings from your paper (transductive learning on Cora/Citeseer/Pubmed, inductive learning on PPI and Reddit), the results basically matched those in the paper. I now want to run inductive learning on Cora/Citeseer/Pubmed, so I ran sh scripts/run_inductive.sh cora/citeseer/pubmed 0, but the resulting node classification accuracy is quite low (only 30%-40%). What is going wrong here? Do I need to preprocess the datasets first?

Downloading the IMDB-BINARY dataset always fails; even after downloading it locally and putting it in the package directory, it still tries to download and fails

I have this problem when I run: python main_graph.py --dataset IMDB-BINARY --encoder gin --decoder gin --seed 0 --device 0

The network connection is fine, but the download still fails, even after I put the data into the .dgl directory. The error looks like this:

Downloading /home/ubuntu/.dgl/IMDB-BINARY.zip from https://www.chrsmrrs.com/graphkerneldatasets/IMDB-BINARY.zip...
download failed, retrying, 4 attempts left

Thanks for your reply.

Position Encoding

out_x[token_nodes] += self.enc_mask_token

# ---- attribute reconstruction ----
rep = self.encoder_to_decoder(enc_rep)

I would like to ask the authors why this position encoding is added: the node position can be derived directly from the edge relationships, and the edge relationships are not modified.
Also, what is the reason for passing through a projective transformation before reconstruction?
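(A note on the second question: in the snippet above, encoder_to_decoder is a linear map, and one plausible reading is that it bridges the encoder's output width and the decoder's input width while decoupling the two latent spaces. An illustrative sketch with made-up sizes:)

```python
import torch
import torch.nn as nn

enc_out_dim, dec_in_dim = 512, 256                     # illustrative sizes
encoder_to_decoder = nn.Linear(enc_out_dim, dec_in_dim, bias=False)

enc_rep = torch.randn(2708, enc_out_dim)               # e.g. one row per Cora node
rep = encoder_to_decoder(enc_rep)                      # (2708, 256), decoder input
```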

About using GCN as the encoder

Hello, I tried using GCN as the encoder on the Cora dataset, but the node classification accuracy is only about 71%. Could you kindly provide the hyperparameters you used for GCN?

in edcoder.py line:254

if self._decoder_type == "mlp":
    recon = self.decoder(rep)

The call to self.decoder on line 254 is missing 1 required positional argument.

link prediction

Hello, in the paper GraphMAE is mainly applied to classification tasks. Why was it not applied to link prediction? Since the masked node features are reconstructed, couldn't edges also be constructed from the embeddings via Z @ Z.T?
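(For context, the inner-product decoder the question alludes to would score candidate edges from node embeddings Z roughly like this; a generic sketch, not code from this repository:)

```python
import torch

def score_links(Z: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
    """Score candidate edges, given as a (2, E) index tensor, by inner product."""
    src, dst = edges
    return torch.sigmoid((Z[src] * Z[dst]).sum(dim=-1))   # rows of Z @ Z.T
```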

The hyper-parameters of transfer learning

Thanks for an outstanding piece of work; transfer learning is a very useful setting in this paper. But the performance in the table seems to require appropriate hyperparameters. Could you release the hyperparameters for the transfer learning setting? Thanks again for sharing.

Transfer Learning

Hi and thanks for your work!
What is the difference between GraphMAE and the baseline model AttrMask in transfer learning?

Code about Reddit Dataset

Hi, thanks for releasing the code of your excellent work in Graph Neural Networks!

I have a question regarding the Reddit dataset. The results on Reddit presented in your paper show promising performance and fabulous generalizability of GraphMAE, so I hoped to reproduce them myself. However, if I understand correctly, a data loader for Reddit is not included. I assume some special accommodation is needed to train GraphMAE on Reddit, because Reddit has 23k nodes and 11m edges. Could you please kindly provide your code for processing the Reddit dataset? Thank you so much for your consideration.

The setting of loss_fn

Thanks for an outstanding piece of work; loss_fn is a very useful setting in node classification. However, the default value of loss_fn (default="byol") seems to have no corresponding implementation; only the "mse" and "sce" operations exist. Could you share the loss implementation for "byol"? Thanks again for sharing.
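(For reference, the scaled cosine error from the paper, which the "sce" option implements, can be sketched as follows; gamma is the scaling exponent:)

```python
import torch
import torch.nn.functional as F

def sce_loss(x: torch.Tensor, y: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """Scaled cosine error: mean of (1 - cos(x_i, y_i)) ** gamma over rows."""
    x = F.normalize(x, p=2, dim=-1)
    y = F.normalize(y, p=2, dim=-1)
    return (1 - (x * y).sum(dim=-1)).pow(gamma).mean()
```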

Cannot Reproduce Reported Results with Code in PyG Branch

Hi, thanks for releasing the code of your excellent work in self-supervised GNNs!

I tried to reproduce the graph classification results myself. However, after cloning the source code from the PyG branch and running run_graph.sh with use-config set to True, the results were 1-2 percentage points lower than the reported ones across datasets. For example, the accuracy I reproduced on MUTAG was 86.57±1.21, whereas the main branch reports 88.19±1.26.

The environment settings of my experiments are:
torch-1.8.1+cu101,
python-3.7,
torch_geometric-2.20;

Could you give some instructions on why this may happen and how I can reproduce the reported performance?
Thanks a lot!

Encoding Mask Token

In your code, the enc_mask_token is added when a node is masked. That token is then implicitly removed before the decoder by re-masking the same indices with zeros.

  1. During training, does the token retain its values across steps, or is it randomly re-initialised?

  2. Can you explain the behaviour of this token?

Thank you.
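(A note on question 1: since enc_mask_token is an nn.Parameter, it is initialised once, to zeros, and then updated by the optimizer like any other weight; it keeps its learned values across steps rather than being re-initialised. A minimal illustration:)

```python
import torch
import torch.nn as nn

token = nn.Parameter(torch.zeros(1, 4))
opt = torch.optim.SGD([token], lr=0.1)

loss = (token - 1.0).pow(2).sum()   # any loss that touches the token
loss.backward()
opt.step()
print(token)                        # no longer zeros; the values persist
```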

A question about node classification

Good job! But I have a question:
some nodes are never visible during pretraining because they are masked out, so the pretrained weights do not take these masked nodes into account, even when those nodes are later classified by the linear classifier.
Why, then, is the model still good at node classification?
Can you explain this?

About the results on NCI1

Hello, using your code and hyperparameters exactly as released, my accuracy on NCI1 is only about 68%, a huge gap from the 80% in the paper, while essentially all other results match. Could there be a problem with the published NCI1 config?

Some questions about the experimental results

Hello, I ran your released sh script directly on the Cora dataset, but the accuracy of the node classification task is only about 80%. Are the parameters in the sh script different from those used in your experiments?
Looking forward to your reply, thanks!

Have you tested GraphMAE on link prediction tasks?

Hi! According to the paper, you argue that structure information may be over-emphasized, so you choose to reconstruct node attributes rather than links. But I'm quite interested in how GraphMAE performs on link prediction tasks. Intuitively, I'd expect the node-attribute masking method to perform worse than a link masking method. So I would like to ask whether you have done any experiments on link prediction tasks.

How to use for graph classification

So far, every method I've found for graph clustering is actually node clustering, or is not a fully unsupervised learning method: it eventually needs labels to train on the downstream task.

For example, a typical pipeline is:

  • pretraining on datasets without labels, using contrastive learning or another method (no output channels or number of clusters required);
  • then finetuning on the dataset with its labels, using supervised learning.

I'm looking for a fully unsupervised learning method for graph-level clustering/classification. That is, I'd like to know how to employ your model in a fully unsupervised way to classify large amounts of graph data that cannot be labeled by human effort.

Question: could you give me a little direction on using the embeddings from your model for graph-level classification/clustering without labels, e.g. with KMeans, at the downstream task?
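(One possible direction, sketched here rather than an official recipe: pool the pretrained model's node embeddings into one vector per graph and cluster those vectors with KMeans. The model.embed call and the "attr" feature name are assumptions about the interface:)

```python
import dgl
import torch
from sklearn.cluster import KMeans

model.eval()
graph_embs = []
with torch.no_grad():
    for batch_g, _ in eval_loader:                              # labels are ignored
        node_rep = model.embed(batch_g, batch_g.ndata["attr"])  # hypothetical API
        batch_g.ndata["h"] = node_rep
        graph_embs.append(dgl.mean_nodes(batch_g, "h"))         # mean-pool per graph

cluster_ids = KMeans(n_clusters=10).fit_predict(torch.cat(graph_embs).numpy())
```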

Please provide the detailed hyperparameters

Dear authors,

Awesome job you have done here!
I'm interested in your work, so I'd like to request the detailed specifications of GraphMAE, including encoder types and the linear-probing hyperparameters for each dataset (e.g. lr_f, weight_decay_f and max_epoch_f of node_classification_evaluation).
BTW, I think it's somewhat odd to use PReLU for every dataset and downstream task, as the paper claims, so I guess there are some unclarified configurations.
I'd be grateful if you could provide them ❤️

Question about dataset of transfer learning

The Chem datasets (BACE, BBBP, etc.) were produced with a 1.0.x version of PyG, but you use torch_geometric 2.0.3, so the datasets downloaded from the original link can't be used directly. How did you solve this problem?

About random seed settings

Thank you for your wonderful work. I have a small question about the seed settings in the experiments. From the provided running scripts, it seems that seeds {0, 1, ..., 19} and {0, 1, ..., 4} are adopted in the transductive and inductive node classification experiments, respectively. Could you provide some clarity on the random seed settings used in the experiments?
Thanks

Confusion about transfer learning

I'm working through your transfer learning part, and there are a few places where I'd like to ask the authors for advice, as follows:

node_attr_label = batch.node_attr_label
masked_node_indices = batch.masked_atom_indices
pred_node = dec_pred_atoms(node_rep, batch.edge_index, batch.edge_attr, masked_node_indices)
if loss_fn == "sce":
    loss = criterion(node_attr_label, pred_node[masked_node_indices])

Why is the reconstructed target the atom type rather than the node feature? The raw features of all atoms are 2-dimensional: the first dimension is the atom type and the second is the chirality. Isn't it meaningless to reconstruct such node features?
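(For context: since the raw atom feature is a pair of indices (atom type, chirality), the reconstruction target is typically the one-hot atom type rather than the 2-dimensional pair itself, which turns reconstruction into a classification-style objective. A sketch of forming such a target, with an illustrative class count:)

```python
import torch
import torch.nn.functional as F

NUM_ATOM_TYPES = 119                        # illustrative; see the chem loader
atom_type_idx = torch.tensor([5, 7, 5])     # first column of the raw atom features
node_attr_label = F.one_hot(atom_type_idx, num_classes=NUM_ATOM_TYPES).float()
```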

about the CCA-SSG implementation

Hello, congratulations on publishing an excellent piece of work; it has been very inspiring to me.

I have a question to consult you about: I ran into some problems reproducing CCA-SSG, mainly that I could not find suitable hyperparameters on the arxiv dataset.

The paper does not describe the hyperparameters of the corresponding baselines. Could you provide the hyperparameters you used for your reproduction, for reference?

Many thanks 🙏

about the bgrl implementation

Thank you very much for your outstanding work; it has been very inspiring to me.

However, I ran into some problems reproducing BGRL, mainly that on the four datasets Cora, CiteSeer, PubMed and Ogbn-arxiv I could not find suitable augmentation parameters, i.e. the drop probabilities for edges and input features.

The paper does not describe the hyperparameters of the corresponding baselines. Could you provide the hyperparameters you used for your reproduction, for reference?

Many thanks 🙏

self.enc_mask_token = nn.Parameter(torch.zeros(1, in_dim))

Hello, while reading the source code I saw:

self.enc_mask_token = nn.Parameter(torch.zeros(1, in_dim))
....
out_x[token_nodes] += self.enc_mask_token

I'd like to ask why all token_nodes share the same token during reconstruction. Shouldn't self.enc_mask_token be nn.Parameter(torch.zeros(num_token_nodes, in_dim))?

Academic Consultation

Dear Authors,

Congratulations on your newly published KDD 2022 paper: GraphMAE: Self-Supervised Masked Graph Autoencoders. We found its idea interesting and inspiring, and would like to do further research over it.

Recently we ran this public source code on the NCI1 dataset for graph classification, and the reproduced result is merely 76.9%, which is a large gap from the reported result (80.4%) in the main paper. Would you please share the complete parameter settings for NCI1 with us for academic research?

Best regards

The inconsistent performance on COLLAB dataset

Thanks for your fabulous code, but I met some problems while reproducing the results. After 5 runs (sh scripts/run_graph.sh COLLAB 0, using seeds 0,1,2,3,4), the performance on the COLLAB dataset is 78.78±0.46, which leaves a large margin to the reported value (80.32±0.46).
I checked the model configuration in the Appendix and found that the hidden size of GIN is 512 (but 256 in config.yaml). After changing the hidden size to 512, the performance (78.70±0.32) is still inferior to the paper.
Is there anything I am missing? Hoping for your reply.

Hello, have you encountered this problem?

Exception occurred: RuntimeError
mat1 and mat2 shapes cannot be multiplied (2708x7 and 256x256)
File "/home/youshen/work/GraphMAE/graphmae/models/edcoder.py", line 246, in mask_attr_prediction
rep = self.encoder_to_decoder(enc_rep)
File "/home/youshen/work/GraphMAE/graphmae/models/edcoder.py", line 229, in forward
loss = self.mask_attr_prediction(g, x)
File "/home/youshen/work/GraphMAE/main_transductive.py", line 32, in pretrain
loss, loss_dict = model(graph, x)
File "/home/youshen/work/GraphMAE/main_transductive.py", line 117, in main
model = pretrain(model, graph, x, optimizer, max_epoch, device, scheduler, num_classes, lr_f, weight_decay_f, max_epoch_f, linear_prob, logger)
File "/home/youshen/work/GraphMAE/main_transductive.py", line 146, in
main(args)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (2708x7 and 256x256)

in data_util.py line 139

In data_util.py, line 139, there is an if condition checking whether the string "attr" is in graph.ndata.
But in tu.py, line 362 (self.attr_dict), the node feature name is 'node_attr', not "attr".
Is this my mistake?

keyword type error in GraphMAE torch_geometric

Hi, I just got a keyword error when running the code based on torch_geometric. The keyword "residul" in gat.py, lines 56 to 70, appears to be a typo. Please double-check it.

RuntimeError: Class values must be smaller than num_classes.

Hi dear author,

May I get your help fixing an issue when running Chem? Chem uses torch_geometric instead of DGL. I am getting the error below for all datasets.

/chem/util.py", line 248, in call
atom_type = F.one_hot(data.mask_node_label[:, 0], num_classes=self.num_atom_type).float()
RuntimeError: Class values must be smaller than num_classes.

Thanks

how to set hyperparameters(e.g. num_hidden, num_heads)

Hello! How should the per-dataset hyperparameters, such as num_hidden and num_heads, be set? config.yaml provides hyperparameter settings per dataset, but I noticed that the node feature dimension of the pubmed dataset is 500 while its num_hidden is 1024. How should I understand this 500 → 1024 setting? In what situations does it make sense to map features into a higher-dimensional space?

Implementation issue found in the PyG version

Hi, I have encountered an error while running the PyG version of GraphMAE using the command python main_transductive.py --dataset cora --encoder gat --decoder gat --seed 0 --device 0. The error message states:

File "../GraphMAE/graphmae/evaluation.py", line 52, in linear_probing_for_transductive_node_classiifcation     
out = model(graph, x) 
TypeError: dropout(): argument 'input' (position 1) must be Tensor, not Data 

Meanwhile, it works fine via sh scripts/run_transductive.sh cora 0.
I think it is because the linear_prob argument setting causes the model to be mis-assigned in linear_probing_for_transductive_node_classiifcation.

reproduction issue

Hi,

Thanks for the great work but I am having trouble reproducing the results.

I ran (as you provided) python main_transductive.py --dataset cora --encoder gat --decoder gat --seed 0 --device 0 and got the following:

# Epoch 199: train_loss: 0.6322:  98%|█████████████████████████████████▉ | 197/200 [00:07<00:00, 38.22it/s]
# IGNORE: --- TestAcc: 0.4270, early-stopping-TestAcc: 0.4400, Best ValAcc: 0.4540 in epoch 9 ---
# Epoch 199: train_loss: 0.6322: 100%|██████████████████████████████████ | 200/200 [00:07<00:00, 25.39it/s]
num parameters for finetuning: 435721
# Epoch: 29, train_loss: 1.5159, val_loss: 1.5887, val_acc: 0.592, test_loss: 1.5832, test_acc: 0.5910: 100%|█| 30/30 [00:00<00:00, 50.10it/s]
--- TestAcc: 0.5910, early-stopping-TestAcc: 0.6150, Best ValAcc: 0.6020 in epoch 25 ---
# final_acc: 0.5910±0.0000
# early-stopping_acc: 0.6150±0.0000

My env:

torch=2.0.1+cu117
dgl=1.1.0+cu118
ogb=1.3.6

Thanks!
