
docunet's People

Contributors

njcx-ai, timelordri, zxlzr


docunet's Issues

How to download the dataset?

Hello, may I ask where I should download the dataset, and what preprocessing it requires?
When I run the command bash scripts/run_cdr.sh, I get:
FileNotFoundError: [Errno 2] No such file or directory: './dataset/cdr/train_filter.data'

TypeError: ElementWiseMatrixAttention.forward: `matrix_1` is not present.

I tried to run the DocuNet model in Colab. While running the model, I got the following error.

I ran the following commands with the DocRED dataset:
!bash scripts/run_docred.sh --transformer-type roberta
!bash scripts/run_docred.sh --transformer-type bert

Both commands throw the same error:
Traceback (most recent call last):
  File "./train_balanceloss.py", line 12, in <module>
    from model_balanceloss import DocREModel
  File "/content/DocuNet/model_balanceloss.py", line 8, in <module>
    from element_wise import ElementWiseMatrixAttention
  File "/content/DocuNet/element_wise.py", line 8, in <module>
    class ElementWiseMatrixAttention(MatrixAttention):
  File "/content/DocuNet/element_wise.py", line 22, in ElementWiseMatrixAttention
    def forward(self, tensor_1: torch.Tensor, tensor_2: torch.Tensor) -> torch.Tensor:
  File "/usr/local/lib/python3.7/site-packages/overrides/overrides.py", line 88, in overrides
    return _overrides(method, check_signature, check_at_runtime)
  File "/usr/local/lib/python3.7/site-packages/overrides/overrides.py", line 114, in _overrides
    _validate_method(method, super_class, check_signature)
  File "/usr/local/lib/python3.7/site-packages/overrides/overrides.py", line 135, in _validate_method
    ensure_signature_is_compatible(super_method, method, is_static)
  File "/usr/local/lib/python3.7/site-packages/overrides/signature.py", line 95, in ensure_signature_is_compatible
    super_sig, sub_sig, super_type_hints, sub_type_hints, is_static, method_name
  File "/usr/local/lib/python3.7/site-packages/overrides/signature.py", line 136, in ensure_all_kwargs_defined_in_sub
    raise TypeError(f"{method_name}: {name} is not present.")
TypeError: ElementWiseMatrixAttention.forward: matrix_1 is not present.

I couldn't figure out what causes this. How can I solve this error? Can you help me out?
Thank you!
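For context, a likely cause judging from the traceback alone: recent versions of the overrides package check that an overriding method keeps the parent's parameter names, and the error message implies MatrixAttention.forward declares matrix_1/matrix_2 while the subclass declares tensor_1/tensor_2. A minimal sketch of a fix under that assumption (the stub base class stands in for the real one, and the element-wise body is only illustrative):

    import torch
    from overrides import overrides

    # Stub standing in for the real MatrixAttention base class; the error
    # message implies its forward() is declared with matrix_1 / matrix_2.
    class MatrixAttention(torch.nn.Module):
        def forward(self, matrix_1: torch.Tensor, matrix_2: torch.Tensor) -> torch.Tensor:
            raise NotImplementedError

    class ElementWiseMatrixAttention(MatrixAttention):
        @overrides
        def forward(self, matrix_1: torch.Tensor, matrix_2: torch.Tensor) -> torch.Tensor:
            # renaming tensor_1/tensor_2 to the parent's parameter names
            # satisfies the signature check; the product body is illustrative
            return matrix_1 * matrix_2

Alternatively, pinning an older overrides release that does not run the strict check (e.g. pip install "overrides==3.1.0"; the exact version is an assumption) is a commonly used workaround.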

Question about Roberta-large results

Hello, we are very interested in your work, but even without changing the code we cannot reproduce your results with roberta-large, and we hope to get your help.
The best F1 we have reproduced so far on the development set is 56%.
Thanks.

I am unable to obtain the results you presented.

Running this code on the DocRED dataset, I got the following result:
| epoch 0 | step 100 | min/b 0.11 | lr [5.45950864422202e-07, 1.8198362147406733e-06, 5.459508644222019e-06] | train loss 5290.696
| epoch 0 | step 200 | min/b 0.05 | lr [1.091901728844404e-06, 3.6396724294813467e-06, 1.0919017288444038e-05] | train loss 5288.723
| epoch 0 | step 300 | min/b 0.05 | lr [1.637852593266606e-06, 5.4595086442220205e-06, 1.6378525932666058e-05] | train loss 5289.938
| epoch 0 | step 400 | min/b 0.05 | lr [2.183803457688808e-06, 7.279344858962693e-06, 2.1838034576888075e-05] | train loss 5287.533
| epoch 0 | step 500 | min/b 0.05 | lr [2.72975432211101e-06, 9.099181073703366e-06, 2.7297543221110096e-05] | train loss 5288.655
| epoch 0 | step 600 | min/b 0.05 | lr [3.275705186533212e-06, 1.0919017288444041e-05, 3.2757051865332116e-05] | train loss 5290.393
| epoch 0 | step 700 | min/b 0.05 | lr [3.821656050955414e-06, 1.2738853503184714e-05, 3.8216560509554137e-05] | train loss 5288.530
| epoch 0 | step 800 | min/b 0.05 | lr [4.367606915377616e-06, 1.4558689717925387e-05, 4.367606915377615e-05] | train loss 5288.781
| epoch 0 | step 900 | min/b 0.05 | lr [4.913557779799819e-06, 1.637852593266606e-05, 4.913557779799818e-05] | train loss 5288.198
| epoch 0 | step 1000 | min/b 0.05 | lr [5.45950864422202e-06, 1.8198362147406733e-05, 5.459508644222019e-05] | train loss 5289.798
| epoch 0 | step 1100 | min/b 0.05 | lr [6.005459508644222e-06, 2.0018198362147407e-05, 6.005459508644222e-05] | train loss 5289.863
| epoch 0 | step 1200 | min/b 0.05 | lr [6.551410373066424e-06, 2.1838034576888082e-05, 6.551410373066423e-05] | train loss 5288.075
| epoch 0 | step 1300 | min/b 0.06 | lr [7.097361237488627e-06, 2.3657870791628757e-05, 7.097361237488626e-05] | train loss 5288.787
| epoch 0 | step 1400 | min/b 0.06 | lr [7.643312101910828e-06, 2.5477707006369428e-05, 7.643312101910827e-05] | train loss 5289.309
| epoch 0 | step 1500 | min/b 0.06 | lr [8.189262966333029e-06, 2.72975432211101e-05, 8.189262966333029e-05] | train loss 5289.554
| epoch 0 | step 1600 | min/b 0.06 | lr [8.735213830755232e-06, 2.9117379435850774e-05, 8.73521383075523e-05] | train loss 5289.977
| epoch 0 | step 1700 | min/b 0.06 | lr [9.281164695177434e-06, 3.093721565059145e-05, 9.281164695177434e-05] | train loss 5288.903
| epoch 0 | step 1800 | min/b 0.06 | lr [9.827115559599637e-06, 3.275705186533212e-05, 9.827115559599635e-05] | train loss 5288.367
| epoch 0 | step 1900 | min/b 0.07 | lr [1.0373066424021838e-05, 3.4576888080072794e-05, 0.00010373066424021837] | train loss 5287.382
| epoch 0 | step 2000 | min/b 0.07 | lr [1.091901728844404e-05, 3.6396724294813465e-05, 0.00010919017288444038] | train loss 5290.210
| epoch 0 | step 2100 | min/b 0.06 | lr [1.1464968152866244e-05, 3.821656050955414e-05, 0.00011464968152866242] | train loss 5287.919
| epoch 0 | step 2200 | min/b 0.06 | lr [1.2010919017288445e-05, 4.0036396724294815e-05, 0.00012010919017288444] | train loss 5289.426
| epoch 0 | step 2300 | min/b 0.06 | lr [1.2556869881710646e-05, 4.1856232939035486e-05, 0.00012556869881710645] | train loss 5289.294
| epoch 0 | step 2400 | min/b 0.06 | lr [1.3102820746132848e-05, 4.3676069153776164e-05, 0.00013102820746132846] | train loss 5291.173
| epoch 0 | step 2500 | min/b 0.06 | lr [1.364877161055505e-05, 4.5495905368516835e-05, 0.00013648771610555048] | train loss 5289.310
| epoch 0 | step 2600 | min/b 0.06 | lr [1.4194722474977254e-05, 4.731574158325751e-05, 0.00014194722474977252] | train loss 5289.745
| epoch 0 | step 2700 | min/b 0.06 | lr [1.4740673339399455e-05, 4.9135577797998184e-05, 0.00014740673339399453] | train loss 5289.868
| epoch 0 | step 2800 | min/b 0.06 | lr [1.5286624203821656e-05, 5.0955414012738855e-05, 0.00015286624203821655] | train loss 5289.504
| epoch 0 | step 2900 | min/b 0.06 | lr [1.583257506824386e-05, 5.2775250227479533e-05, 0.0001583257506824386] | train loss 5290.610
| epoch 0 | step 3000 | min/b 0.06 | lr [1.6378525932666058e-05, 5.45950864422202e-05, 0.00016378525932666057] | train loss 5289.182

| epoch 0 | time: 70.27s | dev_result:{'dev_F1': 0.07251287103460864, 'dev_F1_ign': 0.060782931255661345, 'dev_re_p': 0.03663268949627104, 'dev_re_r': 3.5299845816765396, 'dev_average_loss': 5.324710902690888}

| epoch 0 | best_f1:0.0007251287103460864
............
| epoch 29 | step 44300 | min/b 0.06 | lr [1.0317423432634662e-06, 3.4391411442115537e-06, 1.031742343263466e-05] | train loss 5287.676
| epoch 29 | step 44400 | min/b 0.13 | lr [9.620300227726913e-07, 3.206766742575638e-06, 9.620300227726912e-06] | train loss 5289.456
| epoch 29 | step 44500 | min/b 0.14 | lr [8.923177022819166e-07, 2.974392340939722e-06, 8.923177022819166e-06] | train loss 5288.328
| epoch 29 | step 44600 | min/b 0.15 | lr [8.226053817911419e-07, 2.7420179393038063e-06, 8.226053817911418e-06] | train loss 5288.538
| epoch 29 | step 44700 | min/b 0.15 | lr [7.528930613003671e-07, 2.5096435376678905e-06, 7.52893061300367e-06] | train loss 5288.066
| epoch 29 | step 44800 | min/b 0.15 | lr [6.831807408095924e-07, 2.2772691360319747e-06, 6.831807408095923e-06] | train loss 5287.657
| epoch 29 | step 44900 | min/b 0.15 | lr [6.134684203188176e-07, 2.044894734396059e-06, 6.1346842031881754e-06] | train loss 5288.435
| epoch 29 | step 45000 | min/b 0.15 | lr [5.43756099828043e-07, 1.8125203327601433e-06, 5.437560998280429e-06] | train loss 5288.132
| epoch 29 | step 45100 | min/b 0.16 | lr [4.7404377933726816e-07, 1.5801459311242273e-06, 4.7404377933726815e-06] | train loss 5288.229
| epoch 29 | step 45200 | min/b 0.15 | lr [4.0433145884649346e-07, 1.3477715294883117e-06, 4.0433145884649346e-06] | train loss 5287.576
| epoch 29 | step 45300 | min/b 0.15 | lr [3.3461913835571875e-07, 1.1153971278523957e-06, 3.346191383557187e-06] | train loss 5288.117
| epoch 29 | step 45400 | min/b 0.16 | lr [2.64906817864944e-07, 8.8302272621648e-07, 2.64906817864944e-06] | train loss 5288.126
| epoch 29 | step 45500 | min/b 0.15 | lr [1.9519449737416926e-07, 6.506483245805642e-07, 1.9519449737416924e-06] | train loss 5288.336
| epoch 29 | step 45600 | min/b 0.16 | lr [1.2548217688339452e-07, 4.182739229446484e-07, 1.254821768833945e-06] | train loss 5288.346
| epoch 29 | step 45700 | min/b 0.15 | lr [5.5769856392619785e-08, 1.8589952130873263e-07, 5.576985639261979e-07] | train loss 5286.913

| epoch 29 | time: 51.71s | dev_result:{'dev_F1': 0.06700784829423147, 'dev_F1_ign': 0.05671531404503352, 'dev_re_p': 0.033853348984865014, 'dev_re_r': 3.245962833725554, 'dev_average_loss': 5.296513883590698}

How can I solve this problem?

The BC5CDR dataset result

Hello!
When I run your code on the CDR dataset, I get the result shown below:
[screenshot of results omitted]
Do you know why it is so high compared to the result in your paper and to previous work?

Looking forward to your reply!
Thanks~

Did anyone try running on multiple GPUs? I got an error in the multi-GPU setting.

Here is the log:

Let's use 2 GPUs!
Total steps: 763
Warmup steps: 45
Traceback (most recent call last):
  File "./train_balanceloss.py", line 325, in <module>
    main()
  File "./train_balanceloss.py", line 314, in main
    train(args, model, train_features, dev_features, test_features)
  File "./train_balanceloss.py", line 137, in train
    finetune(train_features, optimizer, args.num_train_epochs, num_steps, model)
  File "./train_balanceloss.py", line 62, in finetune
    outputs = model(**inputs)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 155, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 165, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
    output.reraise()
  File "/opt/conda/lib/python3.8/site-packages/torch/_utils.py", line 395, in reraise
    raise self.exc_type(msg)
IndexError: Caught IndexError in replica 0 on device 0.

Original Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
    output = module(*input, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ssd3/chunxu/docunet_predict/model_balanceloss.py", line 163, in forward
    hs, ts, entity_embs, entity_as = self.get_hrt(sequence_output, attention, entity_pos, hts)
  File "/home/ssd3/chunxu/docunet_predict/model_balanceloss.py", line 67, in get_hrt
    e_emb.append(sequence_output[i, start + offset])
IndexError: index 2 is out of bounds for dimension 0 with size 2
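A plausible diagnosis, offered as a sketch rather than a confirmed fix: nn.DataParallel chunks tensor inputs along the batch dimension, but non-tensor inputs such as the entity_pos list are handed to every replica in full. Each GPU then iterates over the full-batch entity list while holding only its half of sequence_output, which matches "index 2 is out of bounds for dimension 0 with size 2". A minimal two-GPU illustration (the Probe module and argument names are hypothetical):

    import torch
    from torch import nn

    class Probe(nn.Module):
        def forward(self, sequence_output, entity_pos=None):
            # show what each replica actually receives
            print("tensor slice:", sequence_output.size(0),
                  "| list length:", len(entity_pos))
            return sequence_output.sum().view(1)

    # requires two visible GPUs, as in the reported setting
    model = nn.DataParallel(Probe().cuda(), device_ids=[0, 1])
    x = torch.randn(4, 16).cuda()                    # batch of 4 documents
    pos = [[(0, 1)], [(2, 3)], [(4, 5)], [(6, 7)]]   # per-document entity spans
    model(x, entity_pos=pos)
    # each replica prints: tensor slice: 2 | list length: 4
    # indexing the 2-document tensor with indices taken from the
    # 4-element list reproduces the reported IndexError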

Trained DocRED Weights?

Hello!

Thank you for the awesome work; I enjoyed reading about your approach.

Is it possible to share the trained weights for DocRED?

To be honest, it will save me a ton of time haha. I'm writing a paper and would like to use the trained DocRED model on a few examples. Replicating this work is proving difficult when I don't have access to enough GPU memory.

the result of roberta-large

Why was bert able to converge to 61 while roberta could only converge to about 47, using the original code and running on a 3090?

Some questions about the loss function

[screenshot of the loss code omitted]
I see that the model sets self.loss_fnt = ATLoss(), but what is actually imported above under that name is balanced_loss. May I ask whether, after trying both, you found that this cross-entropy variant works better than ATLoss in some respects?
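For readers unfamiliar with it, here is a minimal sketch of the adaptive-threshold loss in the style of ATLOP (an assumed reconstruction, not code from this repository); class 0 acts as a learned threshold, positive relations are pushed above it and negatives below it. The balanced_loss used here is presumably a variant of this idea:

    import torch
    import torch.nn.functional as F

    def at_loss(logits, labels):
        # logits: (n_pairs, n_classes); labels: float multi-hot relation
        # labels, with column 0 reserved for the threshold ("no relation")
        th = torch.zeros_like(labels)
        th[:, 0] = 1.0
        labels = labels.clone()
        labels[:, 0] = 0.0
        p_mask = labels + th          # positive classes plus the threshold
        n_mask = 1.0 - labels         # negative classes plus the threshold
        # each positive class must out-score the threshold class
        loss1 = -(F.log_softmax(logits - (1 - p_mask) * 1e30, dim=-1) * labels).sum(-1)
        # the threshold class must out-score every negative class
        loss2 = -(F.log_softmax(logits - (1 - n_mask) * 1e30, dim=-1) * th).sum(-1)
        return (loss1 + loss2).mean()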

Parameter problem

Hello, thank you for your outstanding work. Can you tell me the value of the parameter D′? It is not given in the paper.

A question about the context-based strategy

I would like to ask how $A_i^S$ (the attention weights of an entity over the text) is computed when the document is too long (more than 512 tokens). Is it one of the following?

  • Set the attention weights for the part beyond the length limit to 0.
  • Compute attention weights only within 512-token segments (split the long document into overlapping 512-token pieces) and compute each entity's attention only over its own truncated segment.

Thanks for your answer! :)
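For reference, a commonly used scheme in document-level RE implementations such as ATLOP (which this repository resembles) is close to the second option: encode two overlapping windows and average the token representations (and attention maps) in the overlap rather than zeroing anything out. A minimal sketch under that assumption, using a HuggingFace-style encoder call:

    import torch

    def encode_long(model, input_ids, max_len=512):
        # Sketch: split a document longer than the encoder limit into two
        # overlapping windows, encode each, and average the token vectors
        # (attention maps can be merged the same way) in the overlap.
        n = input_ids.size(0)
        if n <= max_len:
            return model(input_ids.unsqueeze(0)).last_hidden_state.squeeze(0)
        h1 = model(input_ids[:max_len].unsqueeze(0)).last_hidden_state.squeeze(0)
        h2 = model(input_ids[n - max_len:].unsqueeze(0)).last_hidden_state.squeeze(0)
        out = torch.zeros(n, h1.size(-1), dtype=h1.dtype, device=h1.device)
        cnt = torch.zeros(n, 1, dtype=h1.dtype, device=h1.device)
        out[:max_len] += h1
        cnt[:max_len] += 1
        out[n - max_len:] += h2
        cnt[n - max_len:] += 1
        return out / cnt   # overlapping tokens are averaged, not zeroed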

More recent trained DocRED Weights?

Hello!

Thank you for the awesome repository.

Is it possible to share an updated version of the trained weights for DocRED or the result.json file? The trained weights shared on issue #9 don't predict anything on the official evaluation.

I am doing a research thesis and not having to train the model would save me a lot of time. I'm performing this task with a model I developed, but my model can only predict positive labels (non-NA), and the result.json file generated by your model would help me filter the NA examples. My idea would be for my model to predict the pairs of labels your model predicted. As I don't have access to enough GPU memory, I am not able to train your model from scratch.

Thank you very much.

Question about the size of the feature map

Hi,
I notice that your paper says "... N is the largest number of entities, counted from all the dataset samples". However, it seems that in your DocRED experiment the size N is fixed to args.max_height = 42. So I wonder what N stands for?

A question about the prediction matrix

[right half of Figure 5 from the paper omitted]
As shown above (the right half of Figure 5 in the paper), can the DocuNet approach ultimately predict only a single relation between an entity pair?

ModuleNotFoundError: No module named 'overrides'

When I try to run the script "run_docred.sh", I get the error below and I am unable to solve it: there is no file named overrides. I checked the other Python files and found no function or class named overrides. The error:

  File "./train_balanceloss.py", line 12, in <module>
    from model_balanceloss import DocREModel
  File "/content/DocuNet/model_balanceloss.py", line 8, in <module>
    from element_wise import ElementWiseMatrixAttention
  File "/content/DocuNet/element_wise.py", line 2, in <module>
    from overrides import overrides
ModuleNotFoundError: No module named 'overrides'


How can I solve this issue?
Thank you!
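If it helps: overrides is a third-party package on PyPI, not a module in this repository, so pip install overrides should resolve the import. Be aware that recent releases of the package enforce the strict signature check that triggers the TypeError reported in an earlier issue above.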

Was the provided shell script edited on Windows? (Resolved)

The error:

scripts/run_docred.sh: line 3: $'\r': command not found
scripts/run_docred.sh: line 14: syntax error near unexpected token `$'do\r''
'cripts/run_docred.sh: line 14: `    do

Was this run_docred.sh script edited on Windows, so that it no longer runs under Linux?
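For the record, the $'\r' messages indicate Windows-style CRLF line endings; converting the script, for example with dos2unix scripts/run_docred.sh or sed -i 's/\r$//' scripts/run_docred.sh, makes it runnable on Linux.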

Hyper-parameters to use?

Hi! I'm quite interested in your paper but I have trouble reproducing your results. When I run the run_docred.sh file, the bert-base model works fine (reaching 61.54/59.56 F1/Ign F1, about 1.5 sd below the reported mean), but my roberta model can only get 63.30/61.40 F1/Ign F1.

I notice that some hyper-parameters listed in the script and in the supplementary material are inconsistent. For example, the docred script uses bs=accum=2, but the supplementary says you used bs=5, accum=1. The supplementary also says the weight decay is set to 5e-4, but I didn't see it in the code. Is this why I couldn't reproduce the results?

Also, could you upload your utils_sample file? It seems to be missing. I plugged in the one from ATLOP and that works, but I'm not sure whether you wrote it the same way.

Thanks!

Question about get_hrt

I don't understand why the same e_att is appended so many times to entity_atts:

  for _ in range(self.min_height - entity_num - 1):
      entity_atts.append(e_att)

Thanks for your help!
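A hedged reading of that loop: each document's entity list is padded up to the fixed matrix height (min_height, e.g. 42 on DocRED) so that every sample yields the same N × N entity-pair map and the batch can be stacked for the U-shaped network; repeating the last e_att is just the padding choice, and the padded rows are never read out as real head/tail pairs. A small illustrative sketch (names are hypothetical, not the repo's exact code):

    import torch

    def pad_to_height(entity_atts, min_height):
        # entity_atts: non-empty list of per-entity attention vectors
        # for one document, each of shape (seq_len,)
        e_att = entity_atts[-1]
        # pad with copies so every document contributes exactly
        # min_height rows to the fixed-size entity-pair matrix
        for _ in range(min_height - len(entity_atts)):
            entity_atts.append(e_att)
        return torch.stack(entity_atts)   # shape: (min_height, seq_len)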

multiple GPUs

Could you please provide the hyper-parameters for training on multiple GPUs? I still cannot reproduce your reported results using roberta, similar to @Veronicium in #5 (comment).

No train_bio.py

For both CDR and GDA, the bash script references train_bio.py, which does not exist in the repository:

#! /bin/bash
export CUDA_VISIBLE_DEVICES=0

if true; then
type=context-based
bs=4
bl=3e-5
uls=(4e-4)
accum=1
for ul in ${uls[@]}
do
python -u  ./train_bio.py --data_dir ./dataset/cdr \
  --max_height 35 \
  --channel_type $type \
  --bert_lr $bl \
  --transformer_type bert \
  --model_name_or_path allenai/scibert_scivocab_cased \
  --train_file train.data \
  --dev_file dev.data \
  --test_file test.data \
  --train_batch_size $bs \
  --test_batch_size $bs \
  --gradient_accumulation_steps $accum \
  --num_labels 1 \
  --learning_rate $ul \
  --max_grad_norm 1.0 \
  --warmup_ratio 0.06 \
  --num_train_epochs 30 \
  --seed 111 \
  --num_class 2 \
  --save_path ./checkpoint/cdr/train_scibert-lr${bl}_accum${accum}_unet-lr${ul}_bs${bs}.pt \
  --log_dir ./logs/cdr/train_scibert-lr${bl}_accum${accum}_unet-lr${ul}_bs${bs}.log
done
fi

Downloading the code

Hello, may I ask where the code can be downloaded?
I also have a few questions:
1. For document-level relation extraction, sentences contain pronouns such as they, it, he, this, and so on. Do these pronouns need to be replaced, or should the coreference relations also be fed into the document-level relation recognition process?
2. For document-level relation extraction, you define an N × N matrix Y. Are the rows of this matrix all the entities in the training set, or is a separate matrix Y built for each document?
3. The experimental setup mentions "We set the matrix size N = 42". Is this the matrix Y from question 2? What does 42 refer to, and how was it obtained?

Dataset

Hello, how can I obtain the DocRED dataset?

Inconsistent results

Hello, why is the F1 I get when running the CDR dataset around 85, while the paper reports 76.3?

A small question about the segmentation area

[Figure 2 from the paper omitted]
As it shows in the Figure 2, the segmentation area in the entitylevel relation matrix refers to the co-occurrence of relations between entity pairs.

First, what is the red box in Figure 2 meant to indicate? The segmentation area is contiguous, and the paper says it refers to the co-occurrence of relations in the entity-level relation matrix. Does that mean it refers to a single cell (pixel) in Figure 2?
