
cesi's People

Contributors

parthatalukdar, svjan5

cesi's Issues

Why is the cluster representative chosen using the ent2freq and NOT the sub2freq dict?

I noticed that at this line the subject embeddings and relation embeddings are passed for clustering, and then the cluster representative is found using the (possibly) wrong ent2freq dictionary here. The subject embeddings dict contains 11878 subjects, whereas the ent2freq dict contains 23219 entities. The ent2freq dict maps from an entity, not a subject, to its frequency, i.e. there is a mismatch between entity ids and subject ids. Could you please clarify this? I am happy to elaborate on my concern if needed.
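To make the concern concrete, here is a minimal sketch of the representative-selection step as I understand it; the dictionary names follow the repo, but everything else is a hypothetical illustration, not the actual implementation:

# Hypothetical sketch of the concern above (not the repo's actual code).
# clust2elements maps a cluster id to the ids of the embeddings that were
# clustered, i.e. SUBJECT ids, since the subject embeddings are what get
# passed to the clustering step.
def pick_representative(clust2elements, freq_dict):
	rep = {}
	for clust_id, elements in clust2elements.items():
		# If freq_dict is ent2freq (keyed by ENTITY id, 23219 keys) instead of
		# sub2freq (keyed by SUBJECT id, 11878 keys), the lookup below can hit
		# the frequency of an unrelated entity, since the two id spaces differ.
		rep[clust_id] = max(elements, key=lambda e: freq_dict.get(e, 0))
	return rep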

ValueError: numpy.ufunc has the wrong size, try recompiling. Expected 192, got 216

Hi, the version of Python I am using is 3.6, and the other packages were installed following the requirements. However, the run throws ValueError: numpy.ufunc has the wrong size, try recompiling. Expected 192, got 216, which suggests the installed numpy version is too low. But when I updated numpy and ran again, it threw FileNotFoundError: [Errno 2] No such file or directory: './output/reverb45_test_run/triples.txt', which is really strange.
Is there any way to solve this?
Thanks!

Error in getPPDBclustersRaw

After running all installation steps, I run the script python src/cesi_main.py -name reverb45_test_run and get the following error:

2020-10-02 11:45:51,494 - [INFO] - Running reverb45_test_run
2020-10-02 11:45:51,494 - [INFO] - Reading Triples
2020-10-02 11:45:51,494 - [INFO] -      Loading cached triples
2020-10-02 11:45:52,521 - [INFO] - Side Information Acquisition
Error! Status code :500
Traceback (most recent call last):
  File "src/cesi_main.py", line 234, in <module>
    cesi.get_sideInfo() # Side Information Acquisition
  File "src/cesi_main.py", line 89, in get_sideInfo
    self.side_info = SideInfo(self.p, self.triples_list, self.amb_mentions, self.amb_ent, self.isAcronym)
  File "/home/kgashteovski/cesi/cesi/src/sideInfo.py", line 24, in __init__
    self.fixTypos(amb_ent, amb_mentions, isAcronym)
  File "/home/kgashteovski/cesi/cesi/src/sideInfo.py", line 413, in fixTypos
    ent2ppdb = getPPDBclustersRaw(self.p.ppdb_url, sub_list)
  File "/home/kgashteovski/cesi/cesi/src/helper.py", line 118, in getPPDBclustersRaw
    if rep_list[i] == None: continue        # If no representative for phr then skip
TypeError: 'NoneType' object is not subscriptable

Any idea how to fix this quickly?
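From the log line Error! Status code :500 it looks like the PPDB service at ppdb_url answered with a server error, so the representative list was never filled and stayed None, which then crashes at rep_list[i]. A quick workaround is to guard against that case; the function below is only a sketch of the idea (the request format and response shape are assumptions, not the repo's actual helper.py):

# Hypothetical sketch: fall back to empty PPDB side information when the
# service at ppdb_url is unreachable or returns an error status.
import requests

def getPPDBclustersRaw_safe(ppdb_url, phr_list):
	ent2ppdb = {}
	try:
		resp = requests.post(ppdb_url, json={'data': phr_list})  # request format is an assumption
	except requests.RequestException:
		return ent2ppdb                                           # service unreachable: skip PPDB clusters
	if resp.status_code != 200:
		print('Error! Status code :{}'.format(resp.status_code))
		return ent2ppdb                                           # e.g. the 500 seen above
	rep_list = resp.json()                                        # assumed: one representative per phrase, or None
	if rep_list is None:
		return ent2ppdb
	for i, phr in enumerate(phr_list):
		if rep_list[i] is None: continue                          # no representative for phr, skip
		ent2ppdb[phr] = rep_list[i]
	return ent2ppdb

Returning an empty mapping here assumes the caller (fixTypos) can cope with missing PPDB information; if not, the guard would need to live there instead.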

Regarding the Source Text of Sentences in ReVerb45K

Firstly, for every triple in ReVerb, we extracted the source text from Clueweb09 corpus from which the triple was generated. In this process, we rejected triples for which we could not find any source text

Could the source text document information please be shared?

Using CESI with large dataset

Suppose we have a very large dataset with millions of OIE triples. In such a scenario, CESI runs out of memory. Which side information acquisition procedures are the most memory intensive and could be skipped in order to canonicalize a large dataset without hurting precision too much?

Process custom data

Suppose I have a dataset (other than Reverb45k) that I have already formatted as described on the README page. How can I run CESI to learn canonicalizations from this other dataset?

Training, Validation, and Test Splits?

I read your paper "CESI: Canonicalizing Open Knowledge Bases using Embeddings and Side Information", and I see the following remark about the construction of the ReVerb45K dataset:

"Through these steps, we obtained 45K high-quality triples which we used for evaluation. We call this resulting dataset ReVerb45K."

I see the 45K triples are contained in the data/reverb45k/reverb45k_valid.txt and data/reverb45k/reverb45k_test.txt files in this GitHub repo.

Can you explain how these two .txt files alone are used both to learn the embeddings via the objective function in Section 5 and to compute the F1 scores in evaluation? Is there no training dataset separate from these two .txt files in your GitHub repo?

Custom Dataset "entity_linking" and "true_link"

Hi,
I would like to run CESI on my own custom dataset. I understand the dataset needs to be in the JSON format described in the README. However, could you explain how you extracted "entity_linking" and "true_link" for a given triple in the KB?
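For what it is worth, a single input record in the README's JSON format looks roughly like the hypothetical example below; apart from entity_linking and true_link, the field names and values are my assumptions and may differ from the actual format:

# Hypothetical illustration of one input record (one JSON object per line).
# Only "entity_linking" and "true_link" are the fields asked about above; the
# remaining fields and all values are assumptions, not taken from the repo.
record = {
	"_id": 0,
	"triple": ["Barack Obama", "was born in", "Honolulu"],
	"triple_norm": ["barack obama", "be born in", "honolulu"],
	"entity_linking": {          # output of an automatic entity linker over the source sentence
		"subject": "Barack_Obama",
		"object": "Honolulu",
	},
	"true_link": {               # gold KB ids (placeholders here), used only for evaluation
		"subject": "/m/placeholder_subject_id",
		"object": "/m/placeholder_object_id",
	},
	"src_sentences": ["Barack Obama was born in Honolulu."],
}

Presumably entity_linking is the output of an automatic entity linker and true_link is the gold KB id used for evaluation, but it would be good to have the authors confirm this.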

Error in src/skge/util.py

def getPairs(id_list, id2clust, mode = 'm2o'):
	pairs = set()
	map_clust = dict()

	for ele in id_list:
		if ele in id2clust: map_clust[ele] = id2clust[ele]

	Z = len(map_clust.keys())

	clusters = invertDic(map_clust, mode)

	for _, v in clusters.items():
		pairs.union(itertools.combinations(v, 2))

	return list(pairs), Z

In this code, itertools.combinations(v, 2) returns an itertools object, so it should be cast to a set; moreover, pairs.union does not update pairs in place, so that line should be changed to pairs = pairs.union(set(itertools.combinations(v, 2))). I am using Python 3.

If I rerun with this modification on the Reverb45k dataset, the pairwise accuracy seems to drop. Could you please help?
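For reference, here is the function with the suggested change applied (import added for completeness; invertDic is assumed to come from the repo's helper module):

import itertools

def getPairs(id_list, id2clust, mode='m2o'):
	pairs = set()
	map_clust = dict()

	for ele in id_list:
		if ele in id2clust: map_clust[ele] = id2clust[ele]

	Z = len(map_clust.keys())
	clusters = invertDic(map_clust, mode)   # invertDic assumed importable from the repo's helper module

	for _, v in clusters.items():
		# union returns a new set, so the result must be assigned back;
		# pairs.update(itertools.combinations(v, 2)) would be equivalent.
		pairs = pairs.union(set(itertools.combinations(v, 2)))

	return list(pairs), Z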
