
docred's Introduction

DocRED

Dataset and code for baselines for DocRED: A Large-Scale Document-Level Relation Extraction Dataset

Multiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs. In order to accelerate the research on document-level RE, we introduce DocRED, a new dataset constructed from Wikipedia and Wikidata with three features:

  • DocRED annotates both named entities and relations, and is the largest human-annotated dataset for document-level RE from plain text.
  • DocRED requires reading multiple sentences in a document to extract entities and infer their relations by synthesizing all information of the document.
  • Along with the human-annotated data, we also offer large-scale distantly supervised data, which enables DocRED to be adopted for both supervised and weakly supervised scenarios.

Codalab

If you are interested in our dataset, you are welcome to join the CodaLab competition at DocRED

Cite

If you use the dataset or the code, please cite this paper:

@inproceedings{yao2019DocRED,
  title={{DocRED}: A Large-Scale Document-Level Relation Extraction Dataset},
  author={Yao, Yuan and Ye, Deming and Li, Peng and Han, Xu and Lin, Yankai and Liu, Zhenghao and Liu, Zhiyuan and Huang, Lixin and Zhou, Jie and Sun, Maosong},
  booktitle={Proceedings of ACL 2019},
  year={2019}
}

docred's People

Contributors

thucsthanxu13, yaoyuanthu, yedeming


docred's Issues

Evaluation script

Hi, I have a question about the test function in Config.py, line 631. When appending to the test result array, you enumerate all possible combinations of (h_idx, t_idx, r), and then in line 665 you use "correct" to calculate the F1 score. But as I understand it, the "correct" variable is then always the same value: the total number of non-NA relations in the test file. Could you explain that a little?
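For context, the scoring convention such evaluation code usually follows is a precision-recall curve over predictions sorted by confidence: the recall denominator is fixed at the total number of gold non-NA facts, while precision varies with the cutoff, so the curve (and the best F1 on it) still discriminates between models. A minimal sketch of that convention, not the repo's exact code:

```python
# Sketch: PR curve with a fixed recall denominator (total gold facts).
def pr_curve(predictions, total_gold):
    """predictions: list of (is_correct, score) pairs;
    total_gold: number of gold non-NA relation facts (fixed denominator)."""
    predictions = sorted(predictions, key=lambda p: p[1], reverse=True)
    correct = 0
    curve = []
    for i, (is_correct, _score) in enumerate(predictions):
        correct += int(is_correct)
        precision = correct / (i + 1)   # varies with the cutoff
        recall = correct / total_gold   # denominator never changes
        curve.append((precision, recall))
    return curve

def best_f1(curve):
    return max(2 * p * r / (p + r) for p, r in curve if p + r > 0)

preds = [(True, 0.9), (True, 0.8), (False, 0.7), (True, 0.4)]
print(best_f1(pr_curve(preds, total_gold=4)))  # → 0.75
```

Under this reading, a constant denominator is expected rather than a bug: it is the recall denominator, not a per-threshold count of correct predictions.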

dev_dev_index.json file is missing

Where should I get the dev_dev_index.json file to successfully run the evidence extraction script? It seems like the file is not generated by the gen_data.py script. Thanks!

Entity ID

Can you provide the Wikidata entity-linking ID for every entity? Entity IDs are necessary for inter-document information aggregation. Thank you!

Dimensions in vec.npy

Hi, I was running the BiLSTM baselines as described in the README file, and I got an error:

Traceback (most recent call last):
File "train.py", line 36, in
con.load_test_data()
File "/kevin.huang/DocRED/code/config/Config.py", line 158, in load_test_data
self.data_word_vec = np.load(os.path.join(self.data_path, 'vec.npy'))
File "/kevin.huang/anaconda3/lib/python3.6/site-packages/numpy/lib/npyio.py", line 440, in load
pickle_kwargs=pickle_kwargs)
File "/kevin.huang/anaconda3/lib/python3.6/site-packages/numpy/lib/format.py", line 734, in read_array
array.shape = shape
ValueError: cannot reshape array of size 3014533 into shape (194784,100)

Is the vec.npy file provided correct?

Run error

When I run the following command, the error below appears. How can I resolve it?
CUDA_VISIBLE_DEVICES=0 python3 train.py --model_name BiLSTM --save_name checkpoint_BiLSTM --train_prefix dev_train --test_prefix dev_dev

Reading training data...
train dev_train
Finish reading
Reading testing data...
dev_dev
Finish reading
Traceback (most recent call last):
File "train.py", line 38, in
con.train(model[args.model_name], args.save_name)
File "/home/deep/daipeng/DocRED/code/config/Config.py", line 468, in train
for data in self.get_train_batch():
File "/home/deep/daipeng/DocRED/code/config/Config.py", line 268, in get_train_batch
relation_multi_label[i, j, r] = 1
TypeError: new(): invalid data type 'str'

How to align Wikipedia text with Wikidata

Could you please offer the code for aligning Wikipedia text with Wikidata to obtain the relations? I'm struggling to use the Wikidata query service to find the relation between two known entities. Thank you!

A small question in gen_data.py

In gen_data.py, I wonder whether the for loop between lines 165-168 should be indented into the if block starting at line 159 (if j < max_length:). If j >= max_length, the assignment in line 168, sen_char[i,j,c_idx] = char2id.get(k, char2id['UNK']), will be out of range.
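To illustrate the concern, here is a hedged miniature of that indexing (array sizes and vocabularies are invented, not the repo's defaults). With the character loop inside the if j < max_length: guard, words beyond max_length are skipped instead of writing out of bounds:

```python
import numpy as np

max_length = 4      # illustrative values, not the repo's settings
char_limit = 3
char2id = {'UNK': 0, 'a': 1, 'b': 2}
words = ['ab', 'ba', 'aa', 'bb', 'ab']   # one more word than max_length

sen_char = np.zeros((1, max_length, char_limit), dtype=np.int64)
i = 0
for j, word in enumerate(words):
    if j < max_length:
        # the character loop must stay inside this guard; otherwise
        # sen_char[i, j, ...] indexes out of range when j >= max_length
        for c_idx, k in enumerate(list(word)[:char_limit]):
            sen_char[i, j, c_idx] = char2id.get(k, char2id['UNK'])

print(sen_char[0, 0].tolist())  # → [1, 2, 0]
```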

Generating data

Do you have the scripts for generating data directly from Wikipedia and Wikidata?

It would be great to see exactly how you processed the data, and to extend your approach.

Some coreference annotations are incorrect?

It seems that some coreference annotations are incorrect. I found that some mentions in the same vertex don't refer to the same entity:

3024-th Doc (title: Louis Hock) in train.json:

"vertexSet": [
	[
		{"sent_id": 4, "type": "PER", "pos": [123, 125], "name": "Elizabeth Sisco"}, 
		{"type": "PER", "sent_id": 3, "name": "Louis Hock", "pos": [96, 98]}, 
		{"type": "PER", "sent_id": 0, "name": "Louis Hock", "pos": [0, 2]}, 
		{"sent_id": 4, "type": "PER", "pos": [126, 128], "name": "David Avalos"}
	], 
	[
		{"sent_id": 0, "type": "TIME", "pos": [4, 5], "name": "1948"}
	], 
	[
		{"type": "LOC", "sent_id": 1, "name": "Whitney Museum of American Art", "pos": [41, 46]}, 
		{"type": "LOC", "sent_id": 0, "name": "American", "pos": [8, 9]}, 
		{"type": "LOC", "sent_id": 1, "name": "Museum of Modern Art", "pos": [48, 52]}, 
		{"type": "LOC", "sent_id": 1, "name": "New York", "pos": [53, 55]}, 
		{"type": "LOC", "sent_id": 1, "name": "Museum of Contemporary Art", "pos": [57, 61]}, 
		{"type": "LOC", "sent_id": 1, "name": "Los Angeles", "pos": [77, 79]}, 
		{"type": "LOC", "sent_id": 1, "name": "Los Angeles", "pos": [62, 64]}, 
		{"type": "LOC", "sent_id": 1, "name": "San Francisco Museum of Art", "pos": [66, 71]}, 
		{"type": "LOC", "sent_id": 1, "name": "Getty Museum", "pos": [74, 76]}
	], 
	[
		{"type": "ORG", "sent_id": 2, "name": "Video Data Bank", "pos": [92, 95]}
	], 
	[
		{"type": "ORG", "sent_id": 3, "name": "University of California - San Diego", "pos": [106, 112]}
	]
]}

"Elizabeth Sisco, Louis Hock, David Avalos" are different PER entities.
"Whitney Museum of American Art, American, ..." are different LOC entities.

About the relation types

Regarding the statement in the paper that "the relation types are organized in a well-defined hierarchy and taxonomy": is there a file describing this relation taxonomy?

Submission error

WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
Traceback (most recent call last):
File "/tmp/codalab/tmplppntt/run/program/evaluate.py", line 78, in
tmp = json.load(open(submission_answer_file))
IOError: [Errno 2] No such file or directory: '/tmp/codalab/tmplppntt/run/input/res/result.json

A question about h_mapping * context_output

Thank you very much for your excellent work.
I would like to ask: what is the meaning of the product h_mapping * context_output?
I also don't quite understand how h_mapping and t_mapping are computed:
h_mapping[i, j, h['pos'][0]:h['pos'][1]] = 1.0 / len(hlist) / (h['pos'][1] - h['pos'][0])
What is the purpose of computing it this way? I could not follow your earlier answer in the issues. Thanks!
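For anyone else puzzled by these weights: multiplying h_mapping by the token representations computes, for each entity, the average over its mentions of the average over each mention's tokens. A small self-contained illustration (all numbers invented):

```python
import numpy as np

# Token representations for a 4-token "document" (invented numbers).
context_output = np.array([[1.0, 1.0],   # token 0
                           [3.0, 3.0],   # token 1
                           [5.0, 7.0],   # token 2
                           [9.0, 9.0]])  # token 3

# One entity with two mentions: tokens [0, 2) and tokens [2, 4).
hlist = [(0, 2), (2, 4)]
h_mapping = np.zeros((1, 4))
for start, end in hlist:
    # weight = 1 / num_mentions / mention_length, as in the question
    h_mapping[0, start:end] = 1.0 / len(hlist) / (end - start)

entity_rep = h_mapping @ context_output
print(entity_rep.tolist())  # → [[4.5, 5.0]]: the mean of the mention means
```

Mention 1 averages to [2, 2] and mention 2 to [7, 8]; the product averages those to [4.5, 5.0], which is exactly what the 1/len(hlist)/(span length) weights achieve in one matrix multiply.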

How to visualize DocRED

Hello!

In what tool did you make the visualizations of the dataset? I tried the spaCy NER visualizer and it works fine, but it can't render entities and dependencies at the same time.

Some relation instances don't have supporting evidence labels?

I found that some relation instances in train_annotated.json don't have supporting evidence labels.

Here shows an example:

{
	"vertexSet": [[{"name": "John Samuel Bourque", "pos": [0, 3], ...
	"labels": [
		{"r": "P569", "h": 0, "t": 1, "evidence": [0]}, 
		{"r": "P570", "h": 0, "t": 2, "evidence": [0]}, 
		{"r": "P607", "h": 0, "t": 10, "evidence": [2]}, 
		{"r": "P19", "h": 0, "t": 6, "evidence": [2]}, 
		{"r": "P69", "h": 0, "t": 8, "evidence": [2]}, 
		{"r": "P1001", "h": 5, "t": 3, "evidence": []}, 
		{"r": "P131", "h": 5, "t": 3, "evidence": []}, 
		{"r": "P131", "h": 12, "t": 6, "evidence": []}, 
		{"r": "P159", "h": 12, "t": 6, "evidence": []}, 
		{"r": "P159", "h": 20, "t": 3, "evidence": []}], 
	"title": "John Samuel Bourque", 
	"sents": [["John", "Samuel", "Bourque", ... ]]
}

Relations

Hi,

Can you please add the relation details (their Wikidata ID, name, and description) here? Sometimes these details change with the Wikidata version. A simple JSON file would be very helpful.

Regards
Tapas

Memory leak

The metric/plot computation in the test method appears to have a memory leak: memory usage surges as the number of test runs increases.

None Relation

Does your dataset include any examples of the 'None' relation?

About total_recall_ignore

In line 713 of Config.py, when calculating ignore_F1, why do you use total_recall rather than total_recall_ignore to calculate recall? The variable total_recall_ignore seems to be unused in the program.
Would this be correct: pr_x.append(float(correct - correct_in_train) / total_recall_ignore)?

Runtime error

The following command raises an error:
CUDA_VISIBLE_DEVICES=0 python3 train.py --model_name BiLSTM --save_name checkpoint_BiLSTM --train_prefix dev_train --test_prefix dev_dev

Traceback (most recent call last):
File "train.py", line 38, in
con.train(model[args.model_name], args.save_name)
File "/home/tanghengzhu/experiment/DocRED/code/config/Config.py", line 529, in train
f1, auc, pr_x, pr_y = self.test(model, model_name)
File "/home/tanghengzhu/experiment/DocRED/code/config/Config.py", line 629, in test
test_result_ignore.append(((h_idx, t_idx, r) in label, float(predict_re[i,j,r]), titles[i], self.id2rel[r], index, h_idx, t_idx, r))
KeyError: 1

I also have another question: when loading the data, the program does not convert the relation from string type to int type.
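If that diagnosis is right, one hedged fix is to map the Wikidata property strings to integer ids before building the label tensors; the rel2id contents below are an illustrative subset, not the repo's actual mapping:

```python
# Sketch: convert relation strings like "P131" to integer ids.
# The mapping here is invented for illustration; DocRED's own
# rel2id mapping should be used in practice.
rel2id = {"NA": 0, "P17": 1, "P131": 2}

labels = [{"h": 0, "t": 1, "r": "P131"},
          {"h": 2, "t": 3, "r": "P17"}]
for label in labels:
    if isinstance(label["r"], str):
        label["r"] = rel2id[label["r"]]

print([l["r"] for l in labels])  # → [2, 1]
```

With integer ids in place, an assignment such as relation_multi_label[i, j, r] = 1 indexes with an int instead of raising TypeError on a string.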

Google Drive mirror?

I'm having trouble downloading from Tsinghua Cloud: downloads randomly stop. Any chance you could add another mirror, maybe on Google Drive?

codalab link

Can you please provide the CodaLab link for submitting test results?

Test set labels

Hi,

I am not sure if I have missed anything here, but it seems that the test set, i.e. "test.json", does not have any labels? I am a bit confused now. Am I supposed to split a test set out myself?

Thanks

F1 error

Hello!
When we train the model with the command from the README, the F1 score stays at 0, which in turn means the best model is never saved. We verified this several times and the problem always occurs; could you advise what causes it?
CUDA_VISIBLE_DEVICES=0 python3 train.py --model_name BiLSTM --save_name checkpoint_BiLSTM --train_prefix dev_train --test_prefix dev_dev

-----------------------------------------------------------------------------------------
| epoch 195 | step 15050 | ms/b 11739.73 | train loss 0.000 | NA acc: 1.00 | not NA acc: 0.93 | tot acc: 1.00
| epoch 196 | step 15100 | ms/b 5017.40 | train loss 0.000 | NA acc: 1.00 | not NA acc: 0.93 | tot acc: 1.00
| epoch 196 | step 15150 | ms/b 5109.55 | train loss 0.000 | NA acc: 1.00 | not NA acc: 0.93 | tot acc: 1.00
| epoch 197 | step 15200 | ms/b 4961.75 | train loss 0.000 | NA acc: 1.00 | not NA acc: 0.92 | tot acc: 1.00
| epoch 198 | step 15250 | ms/b 5002.38 | train loss 0.000 | NA acc: 1.00 | not NA acc: 0.93 | tot acc: 1.00
| epoch 198 | step 15300 | ms/b 4956.10 | train loss 0.000 | NA acc: 1.00 | not NA acc: 0.93 | tot acc: 1.00
| epoch 199 | step 15350 | ms/b 5005.01 | train loss 0.000 | NA acc: 1.00 | not NA acc: 0.92 | tot acc: 1.00
| epoch 199 | step 15400 | ms/b 5073.45 | train loss 0.000 | NA acc: 1.00 | not NA acc: 0.92 | tot acc: 1.00

total_recall 12323
ALL : Theta 1.0000 | F1 0.0000 | AUC 0.0000
Ignore ma_f1 0.0000 | input_theta 1.0000 test_result F1 0.0000 | AUC 0.0000
| epoch 199 | time: 347.25s

Finish training
Best epoch = 0 | auc = 0.000000
Storing best result...

data idx of head entity in vertexSet

Hi, thunlp

Could you explain what 'idx of head entity in vertexSet' in the label field refers to?
The vertexSet in the real data seems to have nested layers.
Thanks.

Data Format:
{
  'title',
  'sents': [
    [word in sent 0],
    [word in sent 1]
  ],
  'vertexSet': [
    [
      { 'name': mention_name,
        'sent_id': id of the sentence containing the mention,
        'pos': position of the mention in the sentence,
        'type': NER_type },
      { another mention }
    ],
    [ another entity ]
  ],
  'labels': [
    {
      'h': idx of head entity in vertexSet,
      't': idx of tail entity in vertexSet,
      'r': relation,
      'evidence': evidence sentences' ids
    }
  ]
}
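For concreteness, a small sketch of walking this format in Python; the miniature document below is invented for illustration:

```python
# Each element of vertexSet is one entity, represented as a list of its
# mentions; labels index into that outer list with 'h' and 't'.
doc = {
    "title": "Essingen Islands",
    "sents": [["The", "Essingen", "Islands", "are", "in", "Stockholm", "."]],
    "vertexSet": [
        [{"name": "Essingen Islands", "sent_id": 0, "pos": [1, 3], "type": "LOC"}],
        [{"name": "Stockholm", "sent_id": 0, "pos": [5, 6], "type": "LOC"}],
    ],
    "labels": [{"h": 0, "t": 1, "r": "P131", "evidence": [0]}],
}

triples = []
for label in doc["labels"]:
    head = doc["vertexSet"][label["h"]][0]["name"]  # first mention's name
    tail = doc["vertexSet"][label["t"]][0]["name"]
    triples.append((head, label["r"], tail))

print(triples)  # → [('Essingen Islands', 'P131', 'Stockholm')]
```

So 'h' and 't' index entities (the outer list), and any mention inside that entity's inner list refers to the same real-world object.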

{
"labels": [
{
"evidence": [
0
],
"h": 6,
"r": "P131",
"t": 7
},
{
"evidence": [
0,
2
],
"h": 6,
"r": "P131",
"t": 10
},
{
"evidence": [
0
],
"h": 6,
"r": "P17",
"t": 4
},
{
"evidence": [],
"h": 7,
"r": "P131",
"t": 10
},
{
"evidence": [
0
],
"h": 7,
"r": "P17",
"t": 4
},
{
"evidence": [
0
],
"h": 7,
"r": "P206",
"t": 5
},
{
"evidence": [
0,
2
],
"h": 10,
"r": "P17",
"t": 4
},
{
"evidence": [
2,
3
],
"h": 10,
"r": "P150",
"t": 11
},
{
"evidence": [
0,
3
],
"h": 13,
"r": "P17",
"t": 4
},
{
"evidence": [
0,
1,
2
],
"h": 2,
"r": "P131",
"t": 10
},
{
"evidence": [
0
],
"h": 2,
"r": "P17",
"t": 4
},
{
"evidence": [
0
],
"h": 2,
"r": "P206",
"t": 5
},
{
"evidence": [
0,
1,
2
],
"h": 3,
"r": "P131",
"t": 10
},
{
"evidence": [
0
],
"h": 3,
"r": "P17",
"t": 4
},
{
"evidence": [
0
],
"h": 3,
"r": "P206",
"t": 5
},
{
"evidence": [
0,
2
],
"h": 0,
"r": "P131",
"t": 10
},
{
"evidence": [
0
],
"h": 0,
"r": "P527",
"t": 2
},
{
"evidence": [
0
],
"h": 0,
"r": "P527",
"t": 3
},
{
"evidence": [
0
],
"h": 0,
"r": "P17",
"t": 4
},
{
"evidence": [
0
],
"h": 5,
"r": "P17",
"t": 4
},
{
"evidence": [
2,
3
],
"h": 11,
"r": "P131",
"t": 10
},
{
"evidence": [
0,
3
],
"h": 11,
"r": "P17",
"t": 4
},
{
"evidence": [
0,
2
],
"h": 8,
"r": "P17",
"t": 4
}
],
"sents": [
[
"The",
"Essingen",
"Islands",
"are",
"a",
"group",
"of",
"two",
"islands",
"—",
"Stora",
"Essingen",
"and",
"Lilla",
"Essingen",
"—",
"in",
"the",
"Swedish",
"lake",
"of",
"Mälaren",
",",
"located",
"southwest",
"of",
"Kungsholmen",
"in",
"Stockholm",
"."
],
[
"On",
"older",
"maps",
",",
"the",
"islands",
"are",
"called",
"Stora",
"Hessingen",
"and",
"Lilla",
"Hessingen",
"."
],
[
"The",
"islands",
"were",
"a",
"part",
"of",
"the",
"administrative",
"Bromma",
"Parish",
"until",
"1916",
",",
"when",
"they",
"were",
"incorporated",
"with",
"the",
"parish",
"into",
"Stockholm",
"Municipality",
"."
],
[
"They",
"remained",
"a",
"part",
"of",
"Bromma",
"ecclesiastical",
"parish",
"until",
"1955",
",",
"when",
"they",
"received",
"their",
"own",
"parish",
"within",
"the",
"Church",
"of",
"Sweden",
"."
],
[
"A",
"bridge",
"was",
"built",
"between",
"the",
"islands",
"and",
"Kungsholmen",
"in",
"1907",
",",
"and",
"between",
"the",
"islands",
"themselves",
"in",
"1917",
"."
],
[
"In",
"1966",
",",
"the",
"Essingeleden",
"motorway",
"opened",
"across",
"the",
"islands",
"."
],
[
"The",
"Alviksbron",
"bridge",
"(",
"for",
"pedestrians",
",",
"bicycles",
",",
"and",
"trams",
")",
"opened",
"in",
"2000",
"."
]
],
"title": "Essingen Islands",
"vertexSet": [
[
{
"name": "Essingen Islands",
"pos": [
1,
3
],
"sent_id": 0,
"type": "LOC"
}
],
[
{
"name": "two",
"pos": [
7,
8
],
"sent_id": 0,
"type": "NUM"
}
],
[
{
"name": "Stora Essingen",
"pos": [
10,
12
],
"sent_id": 0,
"type": "LOC"
},
{
"name": "Stora Hessingen",
"pos": [
8,
10
],
"sent_id": 1,
"type": "LOC"
}
],
[
{
"name": "Lilla Essingen",
"pos": [
13,
15
],
"sent_id": 0,
"type": "LOC"
},
{
"name": "Lilla Hessingen",
"pos": [
11,
13
],
"sent_id": 1,
"type": "LOC"
}
],
[
{
"name": "Swedish",
"pos": [
18,
19
],
"sent_id": 0,
"type": "LOC"
}
],
[
{
"name": "Mälaren",
"pos": [
21,
22
],
"sent_id": 0,
"type": "LOC"
}
],
[
{
"name": "Kungsholmen",
"pos": [
26,
27
],
"sent_id": 0,
"type": "LOC"
},
{
"name": "Kungsholmen",
"pos": [
8,
9
],
"sent_id": 4,
"type": "LOC"
}
],

[
{
"name": "Stockholm",
"pos": [
28,
29
],
"sent_id": 0,
"type": "LOC"
}
],
[
{
"name": "Bromma Parish",
"pos": [
8,
10
],
"sent_id": 2,
"type": "LOC"
}
],
[
{
"name": "1916",
"pos": [
11,
12
],
"sent_id": 2,
"type": "TIME"
}
],
[
{
"name": "Stockholm Municipality",
"pos": [
21,
23
],
"sent_id": 2,
"type": "LOC"
}
],
[
{
"name": "Bromma",
"pos": [
5,
6
],
"sent_id": 3,
"type": "LOC"
}
],
[
{
"name": "1955",
"pos": [
9,
10
],
"sent_id": 3,
"type": "TIME"
}
],
[
{
"name": "Church of Sweden",
"pos": [
19,
22
],
"sent_id": 3,
"type": "LOC"
}
],
[
{
"name": "1907",
"pos": [
10,
11
],
"sent_id": 4,
"type": "TIME"
}
],
[
{
"name": "1917",
"pos": [
18,
19
],
"sent_id": 4,
"type": "TIME"
}
],
[
{
"name": "1966",
"pos": [
1,
2
],
"sent_id": 5,
"type": "TIME"
}
],
[
{
"name": "Essingeleden motorway",
"pos": [
4,
6
],
"sent_id": 5,
"type": "LOC"
}
],
[
{
"name": "Alviksbron bridge",
"pos": [
1,
3
],
"sent_id": 6,
"type": "LOC"
}
],
[
{
"name": "2000",
"pos": [
14,
15
],
"sent_id": 6,
"type": "TIME"
}
]
]
}

codalab issue

Getting the following error while trying to submit results on CodaLab:

Failed, competition phase is being migrated, please try again in a few minutes

Regards
Tapas

A question about the vertexSet data structure

Hello, I'd like to ask about the data in the vertexSet array: why is there another nested layer inside the array? The inner items are grouped neither by sentence id nor by entity type, so what does this grouping represent? In labels, h and t are indices into the outer array; does the relation formed by an (h, t) pair hold between all the entities inside the nested arrays of those outer elements?
Many thanks!

Memory usage

Hi, memory usage keeps growing during training, peaking at over 100 GB. What could be the cause?

Running time

Hi, roughly how long does it take to train the BiLSTM on the annotated dataset? During training I noticed that CPU usage is very high while GPU utilization is very low; what might cause this? Thanks for sharing.

Question about labels

I'd also like to ask: what are relation_multi_label, relation_label, and relation_mask used for, respectively? How do they differ?
Thank you!

missing words

In some documents, it seems like words are missing e.g. ['The', 'episode', 'is', 'noted', 'as', 'playing', 'homage', 'to', 'the', 'tradition', 'of', '"', 'trashy', 'science', 'fiction', 'horror', '"', 'vibe', 'in', 'the', 'Star', 'Trek', 'universe', 'as', 'portrayed', 'by', 'episodes', 'like', '"', '"', ',', '"', '"', ',', '"', '"', ',', '"', '"', ',', '"', '"', ',', '"', '"', ',', '"', '"', ',', '"', '"', ',', '"', '"', ',', '"', '"', ',', '"', '"', ',', '"', '"', ',', '"', '"', ',', '"', '"', ',', '"', 'Dark', 'Page', '"', ',', '"', '"', ',', '"', '"', ',', '"', '"', 'or', '"', '"', '.']

Any comment on this?
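One way to gauge how widespread this is, assuming the stripped titles show up as adjacent quote tokens as in the example above:

```python
# Sketch: count spots where two quote tokens are adjacent in a sentence,
# which suggests a quoted title was stripped during preprocessing.
def stripped_quote_spans(sent):
    return sum(1 for a, b in zip(sent, sent[1:]) if a == '"' and b == '"')

sent = ['episodes', 'like', '"', '"', ',', '"', '"', '.']
print(stripped_quote_spans(sent))  # → 2
```

Running such a check over all documents would show whether the missing-words pattern is isolated or systematic.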

Is there a problem with the dev and test evaluation scripts?

Hi, great work! I have a question: why are the dev ign F1 and test ign F1 that I obtain several points higher than those reported in the paper? Is the current evaluation script inconsistent with the one you originally used for the baselines, e.g. in the recall computation? I see someone raised this before in About total_recall_ignore #31.

System environment:
cuda: 10.1
pytorch: 1.1.0
GPU: one Titan XP
Experiment setup: no changes to the repo code

Below are my dev results:

(screenshot 1)

Below are the test results from the leaderboard:

(screenshot 2)

As you can see, F1 on dev and test is about the same as reported in the paper, but the dev and test ign F1 are 3.5-3.7 points higher than reported, and the dev Ign AUC is even 6.6 points higher. Why is this?

Supporting Evidence

Hi,

In your paper, is the Neural Predictor in Table 7 trained separately, or is it trained together with the models in Table 4?

Regards
Tapas

Questions about Theta

Hello, thank you for sharing. What does Theta stand for in training? When I was doing my experiments, Theta was always high, up to 0.9, and I didn't change the Theta part of the code. How do you keep Theta stable during training? Thank you.

How to use current evaluation script?

Hi, I found that you updated the evaluation script, but the directory structure is inconsistent with the current train/test code. Could you describe how to use it?

Learning rate

Hello there! Thank you for sharing the code. May I ask where the learning rate is set in the code? I'm sorry, I could not find it.
