Hi, thanks for the very interesting work and for open-sourcing the models.
I have a clean Python 3.8 virtualenv and I've installed this repository and its dependencies with poetry, but I'm not able to run comet-score with any of the checkpoints:
```
comet-score -s ../out/news18_csen.beam20.trans -t ../news18_csen.en.snt -r ../news18_csen.en.snt --model models/comet-wl-tags/checkpoints/epoch=1-step=206468.ckpt
/home/jon/.local/lib/python3.8/site-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.23.3)
  warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"
Global seed set to 12
Created a temporary directory at /tmp/tmpmttoa89o
Writing /tmp/tmpmttoa89o/_remote_module_non_scriptable.py
Some weights of the model checkpoint at xlm-roberta-large were not used when initializing XLMRobertaModel: ['roberta.pooler.dense.weight', 'lm_head.decoder.weight', 'lm_head.dense.weight', 'lm_head.layer_norm.weight', 'roberta.pooler.dense.bias', 'lm_head.dense.bias', 'lm_head.bias', 'lm_head.layer_norm.bias']
- This IS expected if you are initializing XLMRobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing XLMRobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Encoder model frozen.
Traceback (most recent call last):
  File "/lnet/work/people/jon/ga_clean/robust_MT_evaluation/env2/bin/comet-score", line 6, in <module>
    sys.exit(score_command())
  File "/lnet/work/people/jon/ga_clean/robust_MT_evaluation/comet/cli/score.py", line 191, in score_command
    model = load_from_checkpoint(model_path)
  File "/lnet/work/people/jon/ga_clean/robust_MT_evaluation/comet/models/__init__.py", line 72, in load_from_checkpoint
    model = model_class.load_from_checkpoint(checkpoint_path, **hparams)
  File "/lnet/work/people/jon/ga_clean/robust_MT_evaluation/env2/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 161, in load_from_checkpoint
    model = cls._load_model_state(checkpoint, strict=strict, **kwargs)
  File "/lnet/work/people/jon/ga_clean/robust_MT_evaluation/env2/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 209, in _load_model_state
    keys = model.load_state_dict(checkpoint["state_dict"], strict=strict)
  File "/lnet/work/people/jon/ga_clean/robust_MT_evaluation/env2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1604, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for RegressionMetric:
	Missing key(s) in state_dict: "reproject_embed_layer.weight", "reproject_embed_layer.bias".
	size mismatch for encoder.model.embeddings.word_embeddings.weight: copying a param with shape torch.Size([250005, 1024]) from checkpoint, the shape in current model is torch.Size([250002, 1024]).
	size mismatch for estimator.ff.0.weight: copying a param with shape torch.Size([3072, 8192]) from checkpoint, the shape in current model is torch.Size([3072, 6144]).
```
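If I read the error right, the `RegressionMetric` being instantiated defines a `reproject_embed_layer` that the checkpoint was apparently saved without, the checkpoint's vocabulary has 3 more entries than stock `xlm-roberta-large` (extra special tokens for the tags, presumably?), and the estimator's input width differs (8192 vs 6144). A tiny self-contained sketch of that reading, with the shapes hard-coded from the traceback above (`diff_shapes` is my own helper, and the `reproject_embed_layer` shapes are placeholders, not from the repo):

```python
# Compare the shapes reported in the traceback: checkpoint vs. freshly built model.
# All concrete numbers are copied from the error message; the rest is assumption.

def diff_shapes(checkpoint, model):
    """Return (missing, mismatched) between two name -> shape dicts.

    "missing" mirrors PyTorch's "Missing key(s) in state_dict": parameters the
    current model defines but the checkpoint does not contain.
    """
    missing = sorted(set(model) - set(checkpoint))
    mismatched = {
        k: (checkpoint[k], model[k])
        for k in set(checkpoint) & set(model)
        if checkpoint[k] != model[k]
    }
    return missing, mismatched

checkpoint_shapes = {
    "encoder.model.embeddings.word_embeddings.weight": (250005, 1024),
    "estimator.ff.0.weight": (3072, 8192),
}
model_shapes = {
    "reproject_embed_layer.weight": (1024, 1024),  # placeholder shape, hypothetical
    "reproject_embed_layer.bias": (1024,),         # placeholder shape, hypothetical
    "encoder.model.embeddings.word_embeddings.weight": (250002, 1024),
    "estimator.ff.0.weight": (3072, 6144),
}

missing, mismatched = diff_shapes(checkpoint_shapes, model_shapes)
print(missing)      # layers in the current model class but not in the checkpoint
print(mismatched)   # vocab: 3 extra (tag?) tokens; estimator input: 8192 vs 6144
```

So it looks to me like the checkpoint was trained with different hparams (or a different model class) than the ones being reconstructed at load time.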
```
comet-score -s ../out/news18_csen.beam20.trans -t ../news18_csen.en.snt -r ../news18_csen.en.snt --model models/comet-sl-feats/checkpoints/epoch=1-step=237518.ckpt
/home/jon/.local/lib/python3.8/site-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.23.3)
  warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"
Global seed set to 12
Created a temporary directory at /tmp/tmpucdssx8o
Writing /tmp/tmpucdssx8o/_remote_module_non_scriptable.py
Some weights of the model checkpoint at xlm-roberta-large were not used when initializing XLMRobertaModel: ['lm_head.layer_norm.weight', 'lm_head.decoder.weight', 'lm_head.dense.weight', 'lm_head.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.bias', 'roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
- This IS expected if you are initializing XLMRobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing XLMRobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Encoder model frozen.
Traceback (most recent call last):
  File "/lnet/work/people/jon/ga_clean/robust_MT_evaluation/env2/bin/comet-score", line 6, in <module>
    sys.exit(score_command())
  File "/lnet/work/people/jon/ga_clean/robust_MT_evaluation/comet/cli/score.py", line 191, in score_command
    model = load_from_checkpoint(model_path)
  File "/lnet/work/people/jon/ga_clean/robust_MT_evaluation/comet/models/__init__.py", line 72, in load_from_checkpoint
    model = model_class.load_from_checkpoint(checkpoint_path, **hparams)
  File "/lnet/work/people/jon/ga_clean/robust_MT_evaluation/env2/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 161, in load_from_checkpoint
    model = cls._load_model_state(checkpoint, strict=strict, **kwargs)
  File "/lnet/work/people/jon/ga_clean/robust_MT_evaluation/env2/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 209, in _load_model_state
    keys = model.load_state_dict(checkpoint["state_dict"], strict=strict)
  File "/lnet/work/people/jon/ga_clean/robust_MT_evaluation/env2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1604, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for RegressionMetric:
	Missing key(s) in state_dict: "reproject_embed_layer.weight", "reproject_embed_layer.bias".
```
Additionally, I think the comet-aug archive is incomplete: I'm getting EOF errors when trying to extract it. What can I do to solve these problems?
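For the archive, this is the check I ran to confirm it's truncated rather than a problem on my end (the filename here is a guess; substitute the actual archive name):

```shell
# Test the compressed stream end-to-end; a truncated download fails here
# before tar ever touches the contents.
gzip -t comet-aug.tar.gz 2>/dev/null && echo "gzip stream OK" || echo "gzip stream truncated or corrupt"

# List the members without extracting; an EOF mid-listing also points to truncation.
tar -tzf comet-aug.tar.gz > /dev/null 2>&1 && echo "tar listing OK" || echo "tar listing failed"
```

Comparing the local file size (or checksum) against the one on the server would also tell us whether the upload itself is incomplete.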