
ser-with-w2v2's Introduction

Official implementation of 'Emotion Recognition from Speech Using Wav2vec 2.0 Embeddings'

Requirements:

We recommend running these scripts inside a virtual environment (for example, Anaconda) with TensorFlow 2.4.1 and PyTorch 1.7.1 installed.

Install required python packages:

pip install -r requirements.txt

Install sox and libmediainfo on your system:

sudo apt-get install sox
sudo apt-get install libmediainfo-dev

The RAVDESS and IEMOCAP datasets need to be downloaded and placed in ~/Datasets with the following folder structure:

├── IEMOCAP
│   ├── Documentation
│   ├── README.txt
│   ├── Session1
│   ├── Session2
│   ├── Session3
│   ├── Session4
│   └── Session5
└── RAVDESS
    └── RAVDESS
        ├── song
        └── speech
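
Before launching experiments, a quick layout check along these lines can help (a minimal sketch, assuming the default ~/Datasets location):

from pathlib import Path

# Expected dataset subfolders, per the tree above
datasets = Path.home() / "Datasets"
for sub in ["IEMOCAP/Session1", "IEMOCAP/Session5",
            "RAVDESS/RAVDESS/speech", "RAVDESS/RAVDESS/song"]:
    print(f"{sub}: {'found' if (datasets / sub).is_dir() else 'MISSING'}")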

Replicating our experiments

In our paper we ran many different experiments, using 5 seeds for each one. If you want to replicate that procedure, run in a terminal:

./run_seeds.sh <output_path>

If you want to run just 1 seed:

./run_paper_experiments.sh <seed_number> <output_path>

If you don't want to run all the experiments performed in the paper, comment out the unwanted experiments in the run_paper_experiments.sh script. For example, our best-performing model is trained by the following lines:

#w2v2PT-fusion
errors=1
# Re-run the experiment until paiprun exits successfully (exit code 0)
while ((errors != 0)); do
    paiprun configs/main/w2v2-os-exps.yaml \
        --output_path "${OUTPUT_PATH}/w2v2PT-fusion/${SEED}" \
        --mods "${seed_mod}&global/wav2vec2_embedding_layer=enc_and_transformer&global/normalize=global"
    errors=$?
done

The experiment outputs will be saved at <output_path>. A cache folder will be generated in the directory from which the above command is run. Take into account that run_seeds.sh executes many experiments (all of those presented in the paper) and repeats them 5 times (using different seeds for the random number generators), so the process is expected to take a very long time and a lot of disk space. We ran the experiments using multiple AWS p3.2xlarge instances, which have a Tesla V100 GPU.

Analyzing the outputs

The outputs saved at <output_path> can be examined from Python using joblib. For example, running:

import joblib
metrics = joblib.load('experiments/w2v2PT-fusion/0123/MainTask/DownstreamRavdess/RavdessMetrics/out')

will load the resulting metrics into the metrics variable.
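
Once loaded, the object can be inspected like any Python value. A minimal sketch (assuming the metrics deserialize to a dict-like object; adjust if the task saved a different structure):

# 'metrics' is the object loaded above with joblib
if isinstance(metrics, dict):
    for name, value in metrics.items():
        print(name, value)
else:
    print(type(metrics), metrics)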

More examples of how the generated outputs can be analysed are given in the accompanying notebook. Moreover, we provide the results from all our experiments in the experiments folder, and the results.ipynb notebook generates the tables of our paper.

🔥🔥🔥 Using pretrained models 🔥🔥🔥

⚠️ WARNING: Like most speech emotion recognition models, the models we trained are very unlikely to generalize to datasets other than the ones used for training, which were recorded in clean conditions with actors. ⚠️

Model | Dataset | Links
w2v2PT-fusion | IEMOCAP | Folds: 1 2 3 4 5
w2v2PT-fusion | RAVDESS | Model
w2v2PT-alllayers-global | IEMOCAP | Folds: 1 2 3 4 5
w2v2PT-alllayers-global | RAVDESS | Model
w2v2PT-alllayers | IEMOCAP | Folds: 1 2 3 4 5
w2v2PT-alllayers | RAVDESS | Model
Issa et al. eval setup | RAVDESS | Folds: 1 2 3 4 5

Cite as: Pepino, L., Riera, P., Ferrer, L. (2021) Emotion Recognition from Speech Using wav2vec 2.0 Embeddings. Proc. Interspeech 2021, 3400-3404, doi: 10.21437/Interspeech.2021-703

@inproceedings{pepino21_interspeech,
  author={Leonardo Pepino and Pablo Riera and Luciana Ferrer},
  title={{Emotion Recognition from Speech Using wav2vec 2.0 Embeddings}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={3400--3404},
  doi={10.21437/Interspeech.2021-703}
}

ser-with-w2v2's People

Contributors

mrpep
ser-with-w2v2's Issues

extract_features() method

Where did you get the extract_features() method (line 155 in tasks/features_extractors.py) from?

I'm trying to implement your code on my own. However, I'm getting this kind of error:
AttributeError: 'Wav2Vec2Embedding' object has no attribute 'extract_features'

The error occurs when I call the extract_features() method:
activations = self.model.extract_features(wav, None)
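
(Note: extract_features(wav, None) matches the API of a loaded fairseq wav2vec 2.0 model, so self.model is presumably the fairseq model rather than the Wav2Vec2Embedding wrapper itself. If you are rebuilding the feature extraction independently, a minimal sketch with the Hugging Face transformers API, an alternative route rather than this repository's code, obtains per-layer activations:)

import torch
from transformers import Wav2Vec2Model

# Example base checkpoint; the paper works with wav2vec 2.0 base embeddings
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
model.eval()

wav = torch.zeros(1, 16000)  # one second of dummy 16 kHz audio
with torch.no_grad():
    out = model(wav, output_hidden_states=True)

# out.hidden_states holds the encoder output plus each transformer layer
activations = out.hidden_states
print(len(activations), activations[0].shape)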

Warning: Cannot resolve tags ['Ravdess_max_audio_size']

This warning occurs when running the shell script.


So I guess the variable 'Ravdess_max_audio_size' needs to be declared somewhere.
Unlike this config, ser-with-w2v2/configs/datasets/iemocap_impro.yaml does define 'IEMOCAP_max_audio_size':

IEMOCAP_max_audio_size: 240000 # some exp1 runs don't have this, but without it the large model throws an OOM error

Can you tell me how to solve it?
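
(Note: mirroring the IEMOCAP config, declaring the tag in the RAVDESS dataset config, e.g. Ravdess_max_audio_size: 240000, should make it resolvable; the value here is only an assumption carried over from the IEMOCAP setting.)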

When I run this code, I get the following error message. Can you tell me how to solve it?

Traceback (most recent call last):
  File "/home/ljj/anaconda3/envs/SER/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 3080, in get_loc
    return self._engine.get_loc(casted_key)
  File "pandas/_libs/index.pyx", line 70, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/index.pyx", line 101, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 4554, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 4562, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'emotion'
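
(Note: the KeyError suggests the DataFrame being indexed has no 'emotion' column, which usually means the dataset annotations were not parsed; checking that the datasets follow the folder structure described above is a reasonable first step.)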

NameError: name 'joblib' is not defined

data_i = joblib.load(data_i)

Traceback (most recent call last):
  File "/opt/conda/bin/paiprun", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.8/site-packages/paips/run_paips.py", line 50, in main
    main_task.run()
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 459, in run
    out_dict = self.__serial_run(run_async=run_async)
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 346, in __serial_run
    outs = self.process()
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 654, in process
    out_dict = task.run()
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 459, in run
    out_dict = self.__serial_run(run_async=run_async)
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 346, in __serial_run
    outs = self.process()
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 654, in process
    out_dict = task.run()
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 471, in run
    out_dict = self.__serial_map(iteration=iteration,run_async=run_async)
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 395, in __serial_map
    outs.append(self.__serial_run(run_async=run_async))
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 346, in __serial_run
    outs = self.process()
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 654, in process
    out_dict = task.run()
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 459, in run
    out_dict = self.__serial_run(run_async=run_async)
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 346, in __serial_run
    outs = self.process()
  File "/home/emotion/w2v/tasks/normalize.py", line 85, in process
    data_i = joblib.load(data_i)
NameError: name 'joblib' is not defined

There's no import of joblib in tasks/normalize.py; it should be imported at the top:

import joblib

This fixed the error.

Solving Dependency issues

Hello! I love the project and especially your results. I am trying to replicate the results locally and having some issues:

I tried the installation, and although I hit the same issue as the one reported here, I was able to clear it on Ubuntu. However, the actual dependency list itself is troublesome.

For reference, I installed the requirements in this repository (by the way, the dependency formatted as git~..github.com.. does not work, so I installed those packages manually instead), then I went and installed everything from paips and afterwards from kahnfigh. Even with this, I still have dependency issues, such as:

+ export LC_NUMERIC=en_US.UTF-8
+ LC_NUMERIC=en_US.UTF-8
+ export PYTHONHASHSEED=1234
+ PYTHONHASHSEED=1234
+ SEED=1
+ OUTPUT_PATH=./outputs/output
+ seed_mod=global/seed=1
+ errors=1
+ (( 1!=0 ))
+ paiprun configs/main/os-baseline.yaml --output_path ./outputs/output/none-egemaps/1 --mods global/seed=1
Warning: Cannot resolve tags ['normalization_axis', 'normalization_axis', 'normalization_axis', 'opensmile_exclude_features', 'opensmile_features', 'normalization_axis', 'normalization_axis', 'normalization_axis', 'opensmile_exclude_features', 'opensmile_features']
Traceback (most recent call last):
  File "/home/csuser/.local/lib/python3.10/site-packages/paips/utils/modiuls.py", line 34, in get_modules
    module = module_from_file(Path(module_path).stem,module_path)
  File "/home/csuser/.local/lib/python3.10/site-packages/paips/utils/modiuls.py", line 8, in module_from_file
    module = importlib.util.module_from_spec(spec)
  File "<frozen importlib._bootstrap>", line 568, in module_from_spec
AttributeError: 'NoneType' object has no attribute 'loader'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/csuser/.local/lib/python3.10/site-packages/paips/utils/modiuls.py", line 37, in get_modules
    module = module_from_folder(module_path)    
  File "/home/csuser/.local/lib/python3.10/site-packages/paips/utils/modiuls.py", line 22, in module_from_folder
    module = importlib.import_module(module_path.stem)
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/csuser/Desktop/ser-with-w2v2/tasks/__init__.py", line 2, in <module>
    from .audio_dataset_from_directory import *
  File "/home/csuser/Desktop/ser-with-w2v2/tasks/audio_dataset_from_directory.py", line 4, in <module>
    from pymediainfo import MediaInfo
ModuleNotFoundError: No module named 'pymediainfo'

There are more errors, but just replicating the results is turning out to be a lot more difficult than I initially anticipated. Are there any solutions you have to help me get set up easily? Ideally something as simple as "pip install the requirements, run this script, and see results", but let me know. Thanks!
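
(Note: for the last error shown, pip install pymediainfo should supply the missing Python module; the libmediainfo system package mentioned in the README is its native dependency.)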

KeyError: 'opensmile'


The error occurs when simply running the shell script.

Traceback (most recent call last):
  File "/opt/conda/bin/paiprun", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.8/site-packages/paips/run_paips.py", line 50, in main
    main_task.run()
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 459, in run
    out_dict = self.__serial_run(run_async=run_async)
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 346, in __serial_run
    outs = self.process()
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 654, in process
    out_dict = task.run()
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 459, in run
    out_dict = self.__serial_run(run_async=run_async)
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 346, in __serial_run
    outs = self.process()
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 654, in process
    out_dict = task.run()
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 471, in run
    out_dict = self.__serial_map(iteration=iteration,run_async=run_async)
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 395, in __serial_map
    outs.append(self.__serial_run(run_async=run_async))
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 346, in __serial_run
    outs = self.process()
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 654, in process
    out_dict = task.run()
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 459, in run
    out_dict = self.__serial_run(run_async=run_async)
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 346, in __serial_run
    outs = self.process()
  File "/home/emotion/ser-with-w2v2/tasks/normalize.py", line 75, in process
    grouped_data = data.loc[data[normalization_by] == g][col]
  File "/opt/conda/lib/python3.8/site-packages/pandas/core/frame.py", line 3455, in __getitem__
    indexer = self.columns.get_loc(key)
  File "/opt/conda/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 3363, in get_loc
    raise KeyError(key) from err
KeyError: 'opensmile'

So I started debugging and tried to print every relevant variable.

grouped_data = data.loc[data[normalization_by] == g][col]

This line involves four variables: data, normalization_by, g, and col.

data is a DataFrame of IEMOCAP Session1 data.

Below is a sample of data:

subject session start end duration emotion valence_raw activation_raw dominance_raw emotion_pair valence arousal annotations_selfreport annotations annotations_proportion sad_times sad_dur words_data syl_avg_dur syl_avg_rate syl_times annotations_entropy annotations_entropy_self_report annotations_full_agreement sentences classID
Ses01F 1 0 1.9455625 1.9455625 neutral 2.5 2.5 2.5 -,- , - ['neutral'] [['neutral'], ['neutral'], ['neutral']] {'neutral': 1.0, 'anger': 0.0, 'fear': 0.0, 'excited': 0.0, 'sadness': 0.0, 'frustration': 0.0, 'surprise': 0.0, 'other': 0.0, 'happiness': 0.0, 'disgusted': 0.0} [(0.45, 0.79), (0.8, 1.05)] 59 [[0.45, 0.79, 'EXCUSE'], [0.8, 1.05, 'ME']] 0.1933333333 5.172413793 [(0.45, 0.5), (0.51, 0.79), (0.8, 1.05)] 0 0 TRUE EXCUSE ME 2
Ses01F 1 0 1.3824375 1.3824375 neutral 2.5 2.5 2.5 -,- , - ['neutral', 'anger'] [['neutral'], ['neutral'], ['neutral']] {'neutral': 1.0, 'anger': 0.0, 'fear': 0.0, 'excited': 0.0, 'sadness': 0.0, 'frustration': 0.0, 'surprise': 0.0, 'other': 0.0, 'happiness': 0.0, 'disgusted': 0.0} [(0.53, 0.9)] 37 [[0.53, 0.9, 'YEAH']] 0.37 2.702702703 [(0.53, 0.9)] 0 0 TRUE YEAH 2
Ses01F 1 0 3.13025 3.13025 neutral 2.5 2.5 2.5 -,- , - ['neutral', 'anger'] [['neutral'], ['surprise'], ['neutral']] {'neutral': 0.6666666666666666, 'anger': 0.0, 'fear': 0.0, 'excited': 0.0, 'sadness': 0.0, 'frustration': 0.0, 'surprise': 0.3333333333333333, 'other': 0.0, 'happiness': 0.0, 'disgusted': 0.0} [(2.1, 2.15), (2.16, 2.28), (2.29, 2.32), (2.33, 2.82)] 69 [[2.1, 2.15, 'IS'], [2.16, 2.28, 'THERE'], [2.29, 2.32, 'A'], [2.33, 2.82, 'PROBLEM']] 0.136 7.352941176 [(2.1, 2.15), (2.16, 2.28), (2.29, 2.32), (2.33, 2.53), (2.54, 2.82)] 0.6365141683 0.6365141683 FALSE IS THERE A PROBLEM 2

normalization_by is 'subject' (a str).
g iterates over the groups, which are ['Ses01F', 'Ses01M'].

data[normalization_by] == g

So this expression selects the rows whose 'subject' column equals g, which can be 'Ses01F' or 'Ses01M'.

Here comes the problem. I looked into col and column.

column is a list: ['opensmile']. So col is the str 'opensmile'.

This doesn't make sense: the DataFrame data doesn't have any 'opensmile' column.
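
A self-contained reproduction of the failing lookup (a sketch with a toy frame; the real data comes from the IEMOCAP reader):

import pandas as pd

# Toy frame with some of the columns the normalization step actually sees;
# selecting a column that was never added raises the same KeyError
df = pd.DataFrame({"subject": ["Ses01F", "Ses01M"], "emotion": ["neutral", "sad"]})
try:
    grouped_data = df.loc[df["subject"] == "Ses01F"]["opensmile"]
except KeyError as e:
    print("KeyError:", e)  # the 'opensmile' features were never computed

This points at the openSMILE feature-extraction step upstream never having added its column.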

AttributeError: 'DataFrame' object has no attribute 'annotations'

Hi, first of all, thank you for your code.

df_data['annotations_entropy'] = df_data.annotations.map(lambda x: list2entropy(x[:3]))

The error appears at the line above when running ./run_seeds.sh <output_path>

Traceback (most recent call last):
  File "/opt/conda/bin/paiprun", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.8/site-packages/paips/run_paips.py", line 50, in main
    main_task.run()
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 459, in run
    out_dict = self.__serial_run(run_async=run_async)
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 346, in __serial_run
    outs = self.process()
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 654, in process
    out_dict = task.run()
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 459, in run
    out_dict = self.__serial_run(run_async=run_async)
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 346, in __serial_run
    outs = self.process()
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 654, in process
    out_dict = task.run()
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 459, in run
    out_dict = self.__serial_run(run_async=run_async)
  File "/opt/conda/lib/python3.8/site-packages/paips/core.py", line 346, in __serial_run
    outs = self.process()
  File "/home/emotion/w2v/tasks/IEMOCAP_reader.py", line 137, in process
    df_data['annotations_entropy'] = df_data.annotations.map(lambda x: list2entropy(x[:3]))
  File "/opt/conda/lib/python3.8/site-packages/pandas/core/generic.py", line 5487, in __getattr__
    return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'annotations'

Could you please help me solve this problem?

Question about 'paip' and 'dienen' packages

Recently I have been trying to study your paper and code, but I couldn't run the shell script by following the README.

So I tried to run it step by step in Python, reconstructing your code, but I couldn't understand the exact roles of the 'paips' and 'dienen' packages, which you seem to have written yourselves.

Could you please explain the 'paips' and 'dienen' packages?
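
(Note, inferred from the code paths shown elsewhere on this page: paips appears to be a YAML-driven task pipeline runner, providing the paiprun command used throughout these experiments, and dienen a declarative model-building library on top of TensorFlow/Keras, providing the Model class imported in tasks/dienen_tasks.py.)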

Model checkpoints

Hi

Could you please share the model checkpoints, so the models can be verified on user-provided input samples?

Could you share the specific requirements?

First of all, your work is very helpful, so I want to say thanks.

Recently, I re-tried running your code from the beginning.

But there are too many errors due to package version issues; I think this happens because the installed package versions don't match what your own packages, paips and dienen, expect.

So I would be grateful if you could share the specific version of each package that you used.

Error about Warning: Cannot resolve tags

wandb: Waiting for W&B process to finish, PID 4265
wandb: Program failed with code 1. Press ctrl-c to abort syncing.
wandb:
wandb: Find user logs for this run at: /home/emotion/w2v/wandb/run-20210924_090857-23nn6i97/logs/debug.log
wandb: Find internal logs for this run at: /home/emotion/w2v/wandb/run-20210924_090857-23nn6i97/logs/debug-internal.log
wandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
wandb:
wandb: Synced ravdess_test: https://wandb.ai/jiminbot20/w2v_ser/runs/23nn6i97

+ errors=1
+ (( 1!=0 ))
+ paiprun configs/main/os-baseline.yaml --output_path outputs/os-baseline/0123 --mods global/seed=0123
Warning: Cannot resolve tags ['normalization_axis', 'normalization_axis', 'normalization_axis', 'opensmile_exclude_features', 'opensmile_features', 'normalization_axis', 'normalization_axis', 'normalization_axis', 'opensmile_exclude_features', 'opensmile_features']
2021-09-24 09:09:41,563 — Paips — INFO — Gathering tasks for MainTask
2021-09-24 09:09:41,660 — Paips — INFO — Gathering tasks for DownstreamIEMOCAP
2021-09-24 09:09:41,754 — Paips — INFO — Gathering tasks for IEMOCAPKFold
2021-09-24 09:09:41,962 — Paips — INFO — Gathering tasks for DownstreamRavdess
2021-09-24 09:09:42,192 — Paips — INFO — MainTask: Hash 96e591fb73c6dfc8d948bd574c75390d68a367d7
2021-09-24 09:09:42,193 — Paips — INFO — MainTask: Running
2021-09-24 09:09:42,304 — Paips — INFO — WANDBExperiment: Hash 199fa4bba9817007c54ae7129e3f7da054c9e272
2021-09-24 09:09:42,304 — Paips — INFO — WANDBExperiment: Running

Is this warning a problem?
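
(Note: judging by the other logs on this page, the same 'Cannot resolve tags' warning appears in runs that then proceed normally, such as the 2021-09-24 log above, so it looks benign rather than fatal.)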

'tf_utils' from 'tensorflow.keras.utils'

+ OUTPUT_PATH=./output/
+ ./run_paper_experiments.sh 0123 ./output/
+ export LC_NUMERIC=en_US.UTF-8
+ LC_NUMERIC=en_US.UTF-8
+ export PYTHONHASHSEED=1234
+ PYTHONHASHSEED=1234
+ SEED=0123
+ OUTPUT_PATH=./output/
+ seed_mod=global/seed=0123
+ errors=1
+ (( 1!=0 ))
+ paiprun configs/main/os-baseline.yaml --output_path ./output//none-egemaps/0123 --mods global/seed=0123
Warning: Cannot resolve tags ['normalization_axis', 'normalization_axis', 'normalization_axis', 'opensmile_exclude_features', 'opensmile_features', 'normalization_axis', 'normalization_axis', 'normalization_axis', 'opensmile_exclude_features', 'opensmile_features']
2023-09-22 19:59:21.988380: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
/home/yyccll/.conda/envs/TF8/lib/python3.8/site-packages/tensorflow_addons/utils/tfa_eol_msg.py:23: UserWarning:

TensorFlow Addons (TFA) has ended development and introduction of new features.
TFA has entered a minimal maintenance and release mode until a planned end of life in May 2024.
Please modify downstream libraries to take dependencies from other repositories in our TensorFlow community (e.g. Keras, Keras-CV, and Keras-NLP).

For more information see: tensorflow/addons#2807

warnings.warn(
/home/yyccll/.conda/envs/TF8/lib/python3.8/site-packages/tensorflow_addons/utils/ensure_tf_install.py:53: UserWarning: Tensorflow Addons supports using Python ops for all Tensorflow versions above or equal to 2.11.0 and strictly below 2.14.0 (nightly versions are not supported).
The versions of TensorFlow you are currently using is 2.4.1 and is not supported.
Some things might work, some things might not.
If you were to encounter a bug, do not file an issue.
If you want to make sure you're using a tested and supported configuration, either change the TensorFlow version or the TensorFlow Addons's version.
You can find the compatibility matrix in TensorFlow Addon's readme:
https://github.com/tensorflow/addons
warnings.warn(
Traceback (most recent call last):
  File "/home/yyccll/.conda/envs/TF8/lib/python3.8/site-packages/paips/utils/modiuls.py", line 34, in get_modules
    module = module_from_file(Path(module_path).stem,module_path)
  File "/home/yyccll/.conda/envs/TF8/lib/python3.8/site-packages/paips/utils/modiuls.py", line 8, in module_from_file
    module = importlib.util.module_from_spec(spec)
  File "<frozen importlib._bootstrap>", line 553, in module_from_spec
AttributeError: 'NoneType' object has no attribute 'loader'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/yyccll/.conda/envs/TF8/lib/python3.8/site-packages/paips/utils/modiuls.py", line 37, in get_modules
    module = module_from_folder(module_path)
  File "/home/yyccll/.conda/envs/TF8/lib/python3.8/site-packages/paips/utils/modiuls.py", line 22, in module_from_folder
    module = importlib.import_module(module_path.stem)
  File "/home/yyccll/.conda/envs/TF8/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 843, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/yyccll/pyPro/ser-with-w2v2/tasks/__init__.py", line 8, in <module>
    from .dienen_tasks import *
  File "/home/yyccll/pyPro/ser-with-w2v2/tasks/dienen_tasks.py", line 6, in <module>
    from dienen import Model
  File "/home/yyccll/.conda/envs/TF8/lib/python3.8/site-packages/dienen/__init__.py", line 3, in <module>
    from .core.model import Model
  File "/home/yyccll/.conda/envs/TF8/lib/python3.8/site-packages/dienen/core/__init__.py", line 1, in <module>
    from .architecture import Architecture
  File "/home/yyccll/.conda/envs/TF8/lib/python3.8/site-packages/dienen/core/architecture.py", line 2, in <module>
    from .layer import Layer
  File "/home/yyccll/.conda/envs/TF8/lib/python3.8/site-packages/dienen/core/layer.py", line 4, in <module>
    import tensorflow_addons as tfa
  File "/home/yyccll/.conda/envs/TF8/lib/python3.8/site-packages/tensorflow_addons/__init__.py", line 24, in <module>
    from tensorflow_addons import callbacks
  File "/home/yyccll/.conda/envs/TF8/lib/python3.8/site-packages/tensorflow_addons/callbacks/__init__.py", line 17, in <module>
    from tensorflow_addons.callbacks.average_model_checkpoint import AverageModelCheckpoint
  File "/home/yyccll/.conda/envs/TF8/lib/python3.8/site-packages/tensorflow_addons/callbacks/average_model_checkpoint.py", line 18, in <module>
    from tensorflow_addons.optimizers.average_wrapper import AveragedOptimizerWrapper
  File "/home/yyccll/.conda/envs/TF8/lib/python3.8/site-packages/tensorflow_addons/optimizers/__init__.py", line 30, in <module>
    from tensorflow_addons.optimizers.discriminative_layer_training import (
  File "/home/yyccll/.conda/envs/TF8/lib/python3.8/site-packages/tensorflow_addons/optimizers/discriminative_layer_training.py", line 34, in <module>
    from tensorflow.keras.utils import tf_utils
ImportError: cannot import name 'tf_utils' from 'tensorflow.keras.utils' (/home/yyccll/.conda/envs/TF8/lib/python3.8/site-packages/tensorflow/keras/utils/__init__.py)

I have installed all the packages you recommended, but running under TF 2.4.1 and Keras 2.4.3 reports these errors. I made sure that my Keras can run other samples.
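
(Note: the log itself points at the likely cause: the installed tensorflow-addons build targets TF 2.11 to 2.14, while the environment runs TF 2.4.1, and its import of tf_utils fails on the older TensorFlow. Pinning an older tensorflow-addons release that supports TF 2.4, e.g. tensorflow-addons==0.12.1 (a guess at the matching version), should avoid this import error.)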
