malllabiisc / cesi
WWW 2018: CESI: Canonicalizing Open Knowledge Bases using Embeddings and Side Information
License: Apache License 2.0
I noticed that at this line the subject embeddings and relation embeddings are passed for clustering, and the cluster representative is then found using a (possibly) wrong ent2freq dictionary here. The subject-embedding dict contains 11,878 subjects, whereas the ent2freq dict contains 23,219 entities. The ent2freq dict maps an entity, not a subject, to its frequency, i.e. there is a mismatch between entity ids and subject ids. Could you please clarify this? I am happy to elaborate on my concern if needed.
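A quick way to check for the suspected mismatch is to compare the key sets of the two dictionaries directly. This is a minimal diagnostic sketch with toy data; the variable names mirror the dicts mentioned above, and the real dicts would hold 11,878 and 23,219 entries respectively:

```python
# Toy stand-ins for the real dictionaries (names assumed from the issue):
# sub2embed maps subject ids to embeddings, ent2freq maps entity ids to frequencies.
sub2embed = {0: [0.1, 0.2], 1: [0.3, 0.4]}
ent2freq  = {10: 5, 11: 3, 0: 7}

# Any subject id absent from ent2freq would silently get the wrong (or no) frequency
# when picking the cluster representative.
missing = set(sub2embed) - set(ent2freq)
print(f"{len(missing)} clustering ids have no ent2freq entry")
```

If `missing` is non-empty on the real data, the id spaces really are inconsistent.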
Hi, the Python version I used is 3.6, and the other packages were installed following the requirements file. However, it throws the error ValueError: numpy.ufunc has the wrong size, try recompiling. Expected 192, got 216, which suggests the installed numpy version is too old. But after updating numpy and running again, it throws FileNotFoundError: [Errno 2] No such file or directory: './output/reverb45_test_run/triples.txt', which is really strange.
Is there any solution for this?
Thanks!
After running all installation steps, I ran the script python src/cesi_main.py -name reverb45_test_run. Then I get the following error:
2020-10-02 11:45:51,494 - [INFO] - Running reverb45_test_run
2020-10-02 11:45:51,494 - [INFO] - Reading Triples
2020-10-02 11:45:51,494 - [INFO] - Loading cached triples
2020-10-02 11:45:52,521 - [INFO] - Side Information Acquisition
Error! Status code :500
Traceback (most recent call last):
File "src/cesi_main.py", line 234, in <module>
cesi.get_sideInfo() # Side Information Acquisition
File "src/cesi_main.py", line 89, in get_sideInfo
self.side_info = SideInfo(self.p, self.triples_list, self.amb_mentions, self.amb_ent, self.isAcronym)
File "/home/kgashteovski/cesi/cesi/src/sideInfo.py", line 24, in __init__
self.fixTypos(amb_ent, amb_mentions, isAcronym)
File "/home/kgashteovski/cesi/cesi/src/sideInfo.py", line 413, in fixTypos
ent2ppdb = getPPDBclustersRaw(self.p.ppdb_url, sub_list)
File "/home/kgashteovski/cesi/cesi/src/helper.py", line 118, in getPPDBclustersRaw
if rep_list[i] == None: continue # If no representative for phr then skip
TypeError: 'NoneType' object is not subscriptable
Any idea how to fix this quickly?
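Judging from the "Error! Status code :500" line in the log above, the PPDB side-information request failed, so rep_list came back as None and was then indexed, producing the TypeError. One way to make the helper robust is to guard against a failed request before indexing. This is only a sketch with a hypothetical function name (pick_representatives); the real helper in src/helper.py is structured differently:

```python
# Defensive sketch, assuming rep_list is the (possibly failed) response from the
# PPDB service and phrases is the list of phrases it was queried with.
def pick_representatives(rep_list, phrases):
    if rep_list is None:               # request failed, e.g. HTTP 500
        return {}
    reps = {}
    for i, phr in enumerate(phrases):
        if rep_list[i] is None:        # no representative for this phrase, skip
            continue
        reps[phr] = rep_list[i]
    return reps
```

With a guard like this, a server outage degrades the side information instead of crashing the whole pipeline.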
Firstly, for every triple in ReVerb, we extracted the source text from the ClueWeb09 corpus from which the triple was generated. In this process, we rejected triples for which we could not find any source text.
Could the source text document information please be shared?
Suppose we have a very large dataset with millions of OpenIE triples. In such a scenario, CESI runs out of memory. Which side-information acquisition steps are the most memory-intensive, i.e. which ones could be skipped to canonicalize a large dataset without hurting precision too much?
Suppose I have some dataset (other than ReVerb45k) that I have already formatted as described on the README page. How can I run CESI to learn canonicalizations for this other dataset?
I read your paper "CESI: Canonicalizing Open Knowledge Bases using Embeddings and Side Information", and I see the following remark for the Reverb45k dataset construction:
"Through these steps, we obtained 45K high-quality triples which we used for evaluation. We call this resulting dataset ReVerb45K."
I see the 45k triples are contained in the data/reverb45k/reverb45k_valid.txt and data/reverb45k/reverb45k_test.txt files in this GitHub repo.
Can you explain how these two .txt files alone are used to learn the embeddings via the objective function in Section 5, as well as to compute the F1 scores in evaluation? Is there no training dataset separate from these two .txt files in your GitHub repo?
Hi,
I would like to run CESI on my own custom dataset. I understand the dataset needs to be in the JSON format described in the README. However, I would like to know how you extracted "entity_linking" and "true_link" for a given triple in the KB?
def getPairs(id_list, id2clust, mode='m2o'):
	pairs = set()
	map_clust = dict()
	for ele in id_list:
		if ele in id2clust: map_clust[ele] = id2clust[ele]
	Z = len(map_clust.keys())
	clusters = invertDic(map_clust, mode)
	for _, v in clusters.items():
		pairs.union(itertools.combinations(v, 2))
	return list(pairs), Z
In this code, itertools.combinations(v, 2) returns an iterator, and pairs.union(...) returns a new set rather than updating pairs in place, so the result of that call is silently discarded. The line should be changed to pairs = pairs.union(itertools.combinations(v, 2)) (or, equivalently, pairs.update(itertools.combinations(v, 2))). I am using Python 3.
If I rerun with this modification on reverb45k dataset, then pairwise accuracy seems to drop, could you please help?
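The discard-vs-update behavior described above is standard Python set semantics, independent of CESI, and can be verified with a minimal snippet:

```python
import itertools

pairs = set()
v = ['a', 'b', 'c']

# set.union returns a NEW set and leaves the receiver unchanged,
# so the result of the call below is silently thrown away.
pairs.union(itertools.combinations(v, 2))
assert pairs == set()

# Either rebind the name, or mutate in place with set.update,
# which accepts any iterable, including the combinations iterator directly.
pairs.update(itertools.combinations(v, 2))
assert pairs == {('a', 'b'), ('a', 'c'), ('b', 'c')}
```

Note that no explicit cast to set is needed: both union and update accept arbitrary iterables, so the real fix is simply assigning (or using update) rather than discarding the return value.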