feralvam / easse
Easier Automatic Sentence Simplification Evaluation
License: GNU General Public License v3.0
Kindly provide an example of how to calculate FKGL and BLEU in Python.
Regards
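A minimal sketch of how this can look with EASSE's Python API. corpus_bleu in easse.bleu appears in tracebacks elsewhere in these issues; the corpus_fkgl entry point and the exact argument names are assumptions to verify against the README:
from easse.fkgl import corpus_fkgl
from easse.bleu import corpus_bleu

sys_sents = ["About 95 you now get in .", "The cat sat on the mat ."]
# One inner list per reference set, each parallel to sys_sents (assumed layout)
refs_sents = [["About 95 species are currently known .", "The cat sat on the mat ."]]

# FKGL is reference-less: it only needs the system output
print(corpus_fkgl(sentences=sys_sents))
# BLEU compares the system output against the reference(s)
print(corpus_bleu(sys_sents=sys_sents, refs_sents=refs_sents))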
Hello, I am running EASSE in a PyCharm virtual environment with Python 3.7, and all metrics except SAMSA are working. I already installed tupa, and I fixed the following error message by running the command pip install protobuf==3.20.*:
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
Now I can execute SAMSA but it is still not working. This is my console output:
"G:\My Drive\M5\Masterarbeit\implementation_metrics\easse\venv\Scripts\python.exe" "G:\My Drive\M5\Masterarbeit\implementation_metrics\easse\run_dennis.py"
Warning: SAMSA metric is long to compute (120 sentences ~ 4min), disable it if you need fast evaluation.
Loading spaCy model 'en_core_web_md'... Done (33.254s).
Loading from 'G:\My Drive\M5\Masterarbeit\implementation_metrics\easse\easse\resources\tools\ucca-bilstm-1.3.10\models\ucca-bilstm.json'.
[dynet] random seed: 1
[dynet] allocating memory: 512MB
[dynet] memory allocation done.
[dynet] 2.1
Loading from 'G:\My Drive\M5\Masterarbeit\implementation_metrics\easse\easse\resources\tools\ucca-bilstm-1.3.10\models\ucca-bilstm.enum'... Done (0.121s).
Loading model from 'G:\My Drive\M5\Masterarbeit\implementation_metrics\easse\easse\resources\tools\ucca-bilstm-1.3.10\models\ucca-bilstm': 23param [02:14, 5.86s/param]
Loading model from 'G:\My Drive\M5\Masterarbeit\implementation_metrics\easse\easse\resources\tools\ucca-bilstm-1.3.10\models\ucca-bilstm': 100%|██████████| 23/23 [02:06<00:00, 5.51s/param]
Loading from 'G:\My Drive\M5\Masterarbeit\implementation_metrics\easse\easse\resources\tools\ucca-bilstm-1.3.10\models\ucca-bilstm.nlp.json'.
tupa --hyperparams "shared --lstm-layers 2" "amr --max-edge-labels 110 --node-label-dim 20 --max-node-labels 1000 --node-category-dim 5 --max-node-categories 25" "sdp --max-edge-labels 70" "conllu --max-edge-labels 60" --log parse.log --max-words 0 --max-words-external 249861 --vocab G:\My Drive\M5\Masterarbeit\implementation_metrics\easse\easse\resources\tools\ucca-bilstm-1.3.10\vocab\en_core_web_lg.csv --word-vectors ../word_vectors/wiki.en.vec
Loading 'G:\My Drive\M5\Masterarbeit\implementation_metrics\easse\easse\resources\tools\ucca-bilstm-1.3.10\vocab\en_core_web_lg.csv': 1340694 rows [00:06, 218144.04 rows/s]
2 passages [00:01, 1.05 passages/s, en ucca=1_0]
Starting server with command: java -Xmx5G -cp G:\My Drive\M5\Masterarbeit\implementation_metrics\easse\easse\resources\tools\stanford-corenlp-full-2018-10-05/* edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 60000 -threads 40 -maxCharLength 100000 -quiet True -serverProperties corenlp_server-21b4f872deb94b0d.props -preload tokenize,ssplit,pos,lemma,ner,depparse
Traceback (most recent call last):
File "G:\My Drive\M5\Masterarbeit\implementation_metrics\easse\run_dennis.py", line 23, in <module>
sys_sents=["About 95 you now get in.", "Cat on mat."])
File "G:\My Drive\M5\Masterarbeit\implementation_metrics\easse\easse\samsa.py", line 305, in corpus_samsa
return np.mean(get_samsa_sentence_scores(orig_sents, sys_sents, lowercase, tokenizer, verbose))
File "G:\My Drive\M5\Masterarbeit\implementation_metrics\easse\easse\samsa.py", line 281, in get_samsa_sentence_scores
verbose=verbose,
File "G:\My Drive\M5\Masterarbeit\implementation_metrics\easse\easse\samsa.py", line 30, in syntactic_parse_ucca_scenes
verbose=verbose,
File "G:\My Drive\M5\Masterarbeit\implementation_metrics\easse\easse\aligner\corenlp_utils.py", line 144, in syntactic_parse_texts
raw_parse_result = client.annotate(text)
File "G:\My Drive\M5\Masterarbeit\implementation_metrics\easse\venv\lib\site-packages\stanfordnlp\server\client.py", line 398, in annotate
r = self._request(text.encode('utf-8'), request_properties, **kwargs)
File "G:\My Drive\M5\Masterarbeit\implementation_metrics\easse\venv\lib\site-packages\stanfordnlp\server\client.py", line 311, in _request
self.ensure_alive()
File "G:\My Drive\M5\Masterarbeit\implementation_metrics\easse\venv\lib\site-packages\stanfordnlp\server\client.py", line 137, in ensure_alive
raise PermanentlyFailedException("Timed out waiting for service to come alive.")
stanfordnlp.server.client.PermanentlyFailedException: Timed out waiting for service to come alive.
Process finished with exit code 1
Could the problem be that the folder "My Drive" contains a space? I haven't renamed this folder because changing it is quite a hassle.
Qualitative outputs (e.g. randomly sampled simplifications) in the HTML report don't have Source and Prediction labels, so it is hard to tell whether a given text is the source or the prediction. Labels/headings for the text would be nice.
Hello, it seems that installing the 'simalign' and 'bert_score' packages is necessary to run an easse command; however, they are not present in requirements.txt (or cited as requirements).
Hi, I have observed a particular situation with the SARI implementation where system outputs can receive a <100 score even when they are identical to the reference (where there is only a single reference).
Basically, if a reference does not introduce new tokens, it will receive a 0.00 unigram add-score, but 100 for all n>1-grams.
Take the following example:
sources=["Shu Abe (born June 7 1984) is a former Japanese football player."]
predictions=["Shu Abe (born June 7 1984) is a Japanese football player."]
references=[["Shu Abe (born June 7 1984) is a Japanese football player."]]
sari_score = corpus_sari(sources, predictions, references)
print(sari_score)
>>> 91.66666666666667
In this case, the add score will be 75.0 because there are no new unigrams (because of the if sys_total > 0: checks in compute_precision_recall_f1()), but there are technically new bigrams, trigrams, and 4-grams around the location of the deleted word ("a japanese", "a japanese football", "is a japanese", etc.).
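To see the asymmetry concretely, here is a small self-contained sketch (independent of EASSE's actual implementation) that counts the n-grams the reference introduces relative to the source in the example above:
from collections import Counter

def new_ngrams(src_tokens, ref_tokens, n):
    # n-grams present in the reference but not in the source
    def counts(tokens):
        return Counter(zip(*[tokens[i:] for i in range(n)]))
    return counts(ref_tokens) - counts(src_tokens)

src = "shu abe ( born june 7 1984 ) is a former japanese football player .".split()
ref = "shu abe ( born june 7 1984 ) is a japanese football player .".split()

for n in range(1, 5):
    print(n, list(new_ngrams(src, ref, n)))
# n=1 prints an empty list (no new unigrams), while n=2..4 print new
# n-grams such as ('a', 'japanese') around the deleted word "former"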
I am just curious whether this is the expected behaviour, or if a definitive 0.00 or 100.0 result for the add-score would be more desirable.
Thanks in advance for any insight.
I ran it successfully. Thank you so much. I am so glad that I reached this point.
On the other hand, when I try to apply it to other custom datasets (mine), it results in this error, even though the files are present in the current directory.
looking forward to your reply.
Originally posted by @ykkhan in #69 (comment)
File "/home/***/easse/quality_estimation.py", line 3, in
from tseval.feature_extraction import (get_compression_ratio, count_sentence_splits, ......
ModuleNotFoundError: No module named 'tseval'
EASSE has been installed successfully, but when I run the command-line interface with the easse command, I get:
easse: command not found
Dear Fernando,
Thank you for developing the EASSE tool. It helps me a lot. However, I am trying to use the SAMSA metric on my output and it fails to compute. Could you help me solve it? I tried to download SAMSA itself, but the tool suffers from insufficient information about how to use it, and I didn't understand the code.
Here is the error message:
rita@rita-VirtualBox:~/easse$ easse evaluate -t turkcorpus_test -m 'samsa' -q < easse/resources/data/system_outputs/turkcorpus/test/R
Warning: SAMSA metric is long to compute (120 sentences ~ 4min), disable it if you need fast evaluation.
Loading spaCy model 'en_core_web_md'... Done (76.791s).
Loading from '/home/rita/.local/lib/python3.8/site-packages/easse/resources/tools/ucca-bilstm-1.3.10/models/ucca-bilstm.json'.
[dynet] random seed: 1
[dynet] allocating memory: 512MB
[dynet] memory allocation done.
[dynet] 2.1.2
Loading from '/home/rita/.local/lib/python3.8/site-packages/easse/resources/tools/ucca-bilstm-1.3.10/models/ucca-bilstm.enum'... Done (0.295s).
Loading model from '/home/rita/.local/lib/python3.8/site-packages/easse/resourceKilled
Thanks in advance!
Has anyone else encountered this? Every time a reference is added, the BLEU score increases and the SARI decreases. When I add all the refs, the BLEU score becomes very high and the SARI very low.
Hi everyone,
I'm interested in using this feature: Referenceless Quality Estimation to compare the inputs and the system-generated output.
However, it always asks for the test set. Is this something I can compute with EASSE? This is my command line:
easse evaluate --orig_sents_path file1.txt --sys_sents_path file2.txt -m fkgl
If so, what features are available in this setting besides FKGL?
Thanks for your support,
Laura
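For what it's worth, the reference-less features live in easse/quality_estimation.py (referenced in another issue below). A minimal sketch of calling them from Python, where the corpus_quality_estimation name, its argument names, and the returned feature set are assumptions to check against the source:
from easse.quality_estimation import corpus_quality_estimation

# file1.txt / file2.txt as in the command line above, one sentence per line
orig_sents = open("file1.txt").read().splitlines()
sys_sents = open("file2.txt").read().splitlines()

# Assumed to return a dict of reference-less features
# (e.g. compression ratio, sentence splits, Levenshtein similarity)
features = corpus_quality_estimation(orig_sentences=orig_sents, sys_sentences=sys_sents)
print(features)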
When I run pip install . I get:
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://pypi.org/simple/tseval/
Why? Thanks.
It would be nice to have a "show more examples" option in the report
Line 171 in f8bcb43: the corpus_level=True argument might be confusing for users; I think we should rename it to something like use_deletion_recall.
Sir/Ma'am,
Kindly provide the system outputs of the MUSST system for the PWKP and MTurk corpora.
https://www.aclweb.org/anthology/I17-3007.pdf
Regards
When I want to get the report for multiple systems, I get this error. When I run
easse report -t asset_test -i ./ACCESS -p /Users/man/Desktop/MonkAcademic/academic/monkSS/evaluate/result/report/SBMT-SARI-asset.html
and
easse report -t asset_test -i ./SBMT-SARI -p /Users/man/Desktop/MonkAcademic/academic/monkSS/evaluate/result/report/SBMT-SARI-asset.html
separately, it is fine. Why this error?
Hi,
I am working on adding the QE features to easse; I have two questions:
I have a bunch of features that can be computed either on the prediction (i.e. length, complexity, lm proba ...) or on both the source and prediction (compression ratio, word embeddings comparison). They can all be found here: https://github.com/facebookresearch/text-simplification-evaluation/blob/master/tseval/feature_extraction.py
What do we want to do with those? Is there a subset of interesting features that we want to include in the evaluate script?
Two options on how to integrate them:
a. Install and import tseval as an external package (most straightforward)
b. Integrate tseval features to easse (might not be very useful)
I would suggest choosing a.
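As a rough illustration of option (a), a sketch that imports a tseval feature and averages it over a corpus; get_compression_ratio is taken from the tseval import quoted elsewhere in these issues, and its exact signature (complex sentence first, then simple sentence) is an assumption:
import numpy as np
from tseval.feature_extraction import get_compression_ratio

def average_feature(orig_sents, sys_sents, feature_fn):
    # Average a per-sentence-pair feature over the whole corpus
    return np.mean([feature_fn(orig, sys) for orig, sys in zip(orig_sents, sys_sents)])

orig_sents = ["About 95 species are currently accepted ."]
sys_sents = ["About 95 species are accepted ."]
print(average_feature(orig_sents, sys_sents, get_compression_ratio))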
I am trying to evaluate a customized set of ASSET data. However, I am not very sure about the syntax. Currently I am using this command:
!easse evaluate --refs_sents_paths ref_data --orig_sents_path orig_data --sys_sents_path test_pred_dir -t custom -m 'bleu,sari,fkgl' -q < easse/easse/resources/data/system_outputs/asset/test
which will report this error:
Fatal Python error: _PySys_BeginInit: is a directory, cannot continue
Current thread 0x00007f5625e9b780 (most recent call first):
I am using Google Colab, and all the paths correctly point to the files.
Can you please let me know at which step my syntax is wrong? Thank you so much!
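For what it's worth, the "is a directory, cannot continue" message suggests the < redirection points at a directory rather than a file; --orig_sents_path, --refs_sents_paths, and --sys_sents_path presumably each expect a plain text file with one sentence per line. A hedged guess at the intended invocation, with orig_data.txt, ref_data.txt, and test_pred.txt standing in as hypothetical file names so no stdin redirection is needed:
!easse evaluate -t custom --orig_sents_path orig_data.txt --refs_sents_paths ref_data.txt --sys_sents_path test_pred.txt -m 'bleu,sari,fkgl' -q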
Line 33 in f8bcb43: -tok moses might raise an error; we need to double check.
Add an option to use a custom dataset for evaluation and computation of the metrics.
Hi,
When I was running an evaluation, the error module 'sacrebleu' has no attribute 'TOKENIZERS' occurred. I think it is because the latest version of sacrebleu is 2.0.0; running pip install sacrebleu==1.5.1 fixes this error :)
Hello to all you wonderful people,
I've been using your framework for months and I must say, it's great!
However, recently the same package installation command, pip install ., has been running into errors and failing to build a wheel for easse.
I share the error message here for more information.
(It should be noted that this is a standard Google Colab notebook and pip is upgraded to the latest version.)
Processing /content/gdrive/My Drive/EASSE/easse
Preparing metadata (setup.py) ... done
.
.
.
Requirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from python-Levenshtein->tseval@ git+https://github.com/facebookresearch/text-simplification-evaluation.git->easse==0.2.4) (57.4.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->simalign@ git+https://github.com/cisnlp/simalign.git->easse==0.2.4) (3.0.0)
Collecting smmap<6,>=3.0.1
Downloading smmap-5.0.0-py3-none-any.whl (24 kB)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->transformers>=3.0.0->bert_score->easse==0.2.4) (3.6.0)
Building wheels for collected packages: easse, nltk, simalign, tseval, yattag, python-Levenshtein
Building wheel for easse (setup.py) ... error
ERROR: Failed building wheel for easse
Running setup.py clean for easse
Building wheel for nltk (setup.py) ... done
Created wheel for nltk: filename=nltk-3.4.3-py3-none-any.whl size=1448609 sha256=46052f128d317e2f399f382e992a766ee4e58e8cbe73488125ed811cad6bd10d
Stored in directory: /root/.cache/pip/wheels/8f/12/6d/7d1ecf74380e441128c7895cafb1931c746b484237be23a229
Building wheel for simalign (pyproject.toml) ... done
Created wheel for simalign: filename=simalign-0.3-py3-none-any.whl size=8101 sha256=cefdc8c81226f8c6a1e8a237242dd88aaace6e06a6fa35e8e1adfb343af7cd22
Stored in directory: /tmp/pip-ephem-wheel-cache-aypivvq8/wheels/7c/fd/e8/feb79b708710c76e78b833a417552cadc858dd3d2ee5897585
.
.
.
Failed to build easse
Installing collected packages: smmap, pyyaml, tokenizers, sacremoses, huggingface-hub, gitdb, transformers, python-Levenshtein, portalocker, nltk, networkx, gitpython, colorama, yattag, tseval, stanfordnlp, simalign, sacrebleu, bert-score, easse
Attempting uninstall: pyyaml
Found existing installation: PyYAML 3.13
Uninstalling PyYAML-3.13:
Successfully uninstalled PyYAML-3.13
Attempting uninstall: nltk
Found existing installation: nltk 3.2.5
Uninstalling nltk-3.2.5:
Successfully uninstalled nltk-3.2.5
Attempting uninstall: networkx
Found existing installation: networkx 2.6.3
Uninstalling networkx-2.6.3:
Successfully uninstalled networkx-2.6.3
Running setup.py install for easse ... error
ERROR: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/content/gdrive/My Drive/EASSE/easse/setup.py'"'"'; __file__='"'"'/content/gdrive/My Drive/EASSE/easse/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-qi6q_mqi/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.7/easse Check the logs for full command output.
Hello, can someone help me understand the difference between the EASSE and huggingface/datasets SARI computations? Using the defaults for both libraries, I see a 3-6 point discrepancy on the ASSET benchmark for some models I fine-tuned.
I understand that EASSE has use_f1_for_deletion=True as its default, and datasets uses precision. But with use_f1_for_deletion set to False in EASSE, I still see a small difference in SARI score (~0.6) between the two libraries.
Thanks!
(torchenv) C:\Users\avish>cd easse
ERROR: Command errored out with exit status 1: 'c:\users\avish\anaconda3\envs\torchenv\python.exe' 'c:\users\avish\anaconda3\envs\torchenv\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\avish\AppData\Local\Temp\pip-build-env-v4tl30sz\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools 'wheel>=0.32.0,<0.33.0' Cython 'cymem>=2.0.2,<2.1.0' 'preshed>=2.0.1,<2.1.0' 'murmurhash>=0.28.0,<1.1.0' thinc==7.0.0.dev6 Check the logs for full command output.
(torchenv) C:\Users\avish\easse>_
From facebookresearch/access#10
File "/usr/local/lib/python3.6/dist-packages/easse/cli.py", line 130, in evaluate_system_output
lowercase=lowercase)
File "/usr/local/lib/python3.6/dist-packages/easse/bleu.py", line 22, in corpus_bleu
sys_sents = [utils_prep.normalize(sent, lowercase, tokenizer) for sent in sys_sents]
File "/usr/local/lib/python3.6/dist-packages/easse/bleu.py", line 22, in
sys_sents = [utils_prep.normalize(sent, lowercase, tokenizer) for sent in sys_sents]
File "/usr/local/lib/python3.6/dist-packages/easse/utils/preprocessing.py", line 12, in normalize
normalized_sent = sacrebleu.tokenize_13a(sentence)
AttributeError: module 'sacrebleu' has no attribute 'tokenize_13a'
If I try to do:
from bert_score import BERTScorer
I get
AttributeError: module 'matplotlib.cbook' has no attribute '_make_class_factory'
I can fix this by installing matplotlib 3.4.3, in which case the error goes away.
I can pin matplotlib to 3.4.3 in EASSE's requirements, but really this should be solved in bert_score's requirements. I'm not sure whether putting it in bert_score's requirements would propagate through to EASSE on installation, or whether easse's requirement for the latest matplotlib would override it. Any strong opinions on where to put this?
Hi, thanks for making this available! I'm running the corpus_sari() function according to your example and I'm seeing some oddness with the scores. For example, system sentences without any token overlap with the source or references are being scored higher than those with overlap. Here's an example of a sentence without overlap:
corpus_sari(orig_sents=["About 95 species are currently accepted ."],
sys_sents=["This is my simplified sentence that has no token overlap with the source or reference sentences."],
refs_sents=[["About 95 species are currently known .", "About 95 species are now accepted .", "95 species are now accepted ."]])
Out[4]: 19.246031746031743
Whereas I get a lower score for this sentence that does have overlap:
corpus_sari(orig_sents=["About 95 species are currently accepted ."],
sys_sents=["species accepted ."],
refs_sents=[["About 95 species are currently known .", "About 95 species are now accepted .", "95 species are now accepted ."]])
Out[5]: 16.402116402116405
I get different results using the code in https://github.com/XingxingZhang/pysari, which also implements SARI: ~16.078 for my first example system sentence above and ~24.05 for the second example. That is the score ordering I'd expect, though I haven't verified the correctness of anything. Is there a parameter setting that needs to be specified when calling the corpus_sari() function that's affecting the results? Thanks!
I am trying to run this command
easse evaluate -t turkcorpus_test -m 'bleu,sari' -q < easse/resources/data/system_outputs/turkcorpus/test/ACCESS
and I got this error
UnboundLocalError: local variable 'NISTTokenizer' referenced before assignment
It seems that, in the easse repository, the requirements file was updated 9 months ago to replace 'sklearn' with 'scikit-learn'. However, another of your requirements, git+tseval, still includes 'sklearn' in its requirements. This causes an installation error.
I tried EASSE before and it was working for me. Now I am trying to run it and getting errors; is there any problem with it?
ERROR: Failed building wheel for tokenizers
Successfully built tseval
Failed to build tokenizers
ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly
I want to compute the SAMSA score of a corpus. Can I use EASSE to obtain a proper SAMSA score for evaluating a corpus?
The one for report is smaller than the one for evaluate. Maybe make both of them bigger?
When comparing multiple systems, it would be useful to perform the most appropriate test to determine whether the differences in scores for each metric are statistically significant. We could probably reuse some of the code from https://github.com/rtmdrr/testSignificanceNLP
Some requirements, such as sklearn or tqdm, are not automatically installed.
Hello, I want to install easse in Google Colab but I get this error. Can you help me?
Traceback (most recent call last):
File "/usr/local/bin/easse", line 33, in
sys.exit(load_entry_point('easse==0.2.1', 'console_scripts', 'easse')())
File "/usr/local/bin/easse", line 25, in importlib_load_entry_point
return next(matches).load()
File "/usr/local/lib/python3.6/dist-packages/importlib_metadata/init.py", line 96, in load
module = import_module(match.group('module'))
File "/usr/lib/python3.6/importlib/init.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 994, in _gcd_import
File "", line 971, in _find_and_load
File "", line 941, in _find_and_load_unlocked
File "", line 219, in _call_with_frames_removed
File "", line 994, in _gcd_import
File "", line 971, in _find_and_load
File "", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'easse'
HSplit seems to contain some escaped quotes with " on the simplified side but not on the source side.
I have a dataset for simplification in which I have one complex sentence and a varying number of simplifications.
For example, the first complex sentence can have 5 human-written simplifications, while the second only 3.
Is there a way to set easse to work in this case?
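One possible workaround, sketched below rather than a built-in EASSE feature: pad each sentence's reference list to a fixed count by repeating one of its references, then transpose to the (n_references, n_samples) layout that corpus_sari appears to expect (assuming it lives in easse.sari). Note that duplicating references can bias the n-gram counts, so treat this as an approximation:
from easse.sari import corpus_sari

def pad_and_transpose(refs_per_sent):
    # refs_per_sent[i] is the variable-length list of references for sentence i
    max_refs = max(len(refs) for refs in refs_per_sent)
    # Pad shorter lists by repeating their last reference
    padded = [refs + [refs[-1]] * (max_refs - len(refs)) for refs in refs_per_sent]
    # Transpose to one inner list per reference set, each parallel to the corpus
    return [list(column) for column in zip(*padded)]

orig_sents = ["first complex sentence .", "second complex sentence ."]
sys_sents = ["first simple sentence .", "second simple sentence ."]
refs_per_sent = [["ref 1a .", "ref 1b .", "ref 1c .", "ref 1d .", "ref 1e ."],
                 ["ref 2a .", "ref 2b .", "ref 2c ."]]
print(corpus_sari(orig_sents, sys_sents, pad_and_transpose(refs_per_sent)))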
Hello, I'm trying to run easse on my own custom .csv files using the following command:
easse report -t custom --orig_sents_path "turksimp.csv" --refs_sents_paths "turksimpbacktranslated.csv"
However, it's taking a really long time and nothing is being output, so I suspect there's some error that it just isn't showing before stopping. This happens to me a lot when I try to run such commands; I'm wondering if there is any fix?
thanks!
Hi!
I've just downloaded this repo for installation and I got an ASCII-related encoding error:
$ pip install .
Processing /tmp/easse
ERROR: Command errored out with exit status 1:
command: (..)/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-5iyt237l/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-5iyt237l/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-9kwmmcl4
cwd: /tmp/pip-req-build-5iyt237l/
Complete output (7 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-req-build-5iyt237l/setup.py", line 5, in <module>
long_description = f.read()
File "(..)/lib/python3.6/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 5599: ordinal not in range(128)
This is fixed by adding "encoding='utf-8'" to the following lines in the setup.py script:
with open("README.md", "r", encoding='utf-8') as f:
long_description = f.read()
with open("requirements.txt", "r", encoding='utf-8') as f:
requirements = f.read().strip().split("\n")
In case someone else gets the same error, it would be great to update it :)
Best,
Laura
We should add all the HSplit sentences to the repo and take only the first 70 if SAMSA is used.
Line 71 in eda5623: shouldn't we use lowercase=True with the turk valid and test sets?
I can't find the definition of the function sentence_sari().
Has it been deprecated?
Hi all,
I got this error message when computing SAMSA.
Here is my code:
from easse.samsa import sentence_samsa
ori_sent = 'I read the book that John wrote.'
simp_sent = 'John wrote a book. I read that book.'
sentence_samsa(orig_sent=ori_sent, sys_sent=simp_sent)
Here is the error message:
Starting server with command: java -Xmx5G -cp /Applications/anaconda3/envs/OpenNLP/lib/python3.7/site-packages/easse/resources/tools/stanford-corenlp-full-2018-10-05/* edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 60000 -threads 40 -maxCharLength 100000 -quiet True -serverProperties corenlp_server-0b93463ac4344542.props -preload tokenize,ssplit,pos,lemma,ner,depparse
File "/Applications/anaconda3/envs/OpenNLP/lib/python3.7/site-packages/stanfordnlp/server/client.py", line 137, in ensure_alive
raise PermanentlyFailedException("Timed out waiting for service to come alive.")
PermanentlyFailedException: Timed out waiting for service to come alive.
easse/easse/quality_estimation.py, line 27 in 1ba21b0