Comments (3)
Ah yes, it's fine, sorry about that.
Just to explain why it happens:
Initially ProteinBERT fine-tunes only the newly added fully-connected layer, and only then does it start to fine-tune all layers. When it makes this transition, the weights of the optimizer are no longer compatible (because there are now more trainable layers), so the optimizer weights start from scratch.
To be clear, I'm talking only about the weights of the optimizer (which track momentum etc.), not the weights of the actual model, which of course carry over and continue training from the same state.
Hope it's clearer now.
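If it helps, here's a minimal Keras sketch of that transition (a toy model, not ProteinBERT's actual code; the layer names, sizes and learning rates are made up). Recompiling with more trainable layers means the optimizer needs a different number of slot variables, so the old ones can't be carried over:

```python
import tensorflow as tf
from tensorflow import keras

# Toy stand-in for a pretrained backbone plus a newly added head.
# (Hypothetical names; this is not ProteinBERT's architecture.)
model = keras.Sequential([
    keras.layers.Dense(128, activation='relu', input_shape=(64,), name='pretrained_block'),
    keras.layers.Dense(1, activation='sigmoid', name='new_head'),
])

# Phase 1: freeze the pretrained layers and train only the new head.
model.get_layer('pretrained_block').trainable = False
model.compile(optimizer=keras.optimizers.Adam(1e-3), loss='binary_crossentropy')
model.fit(tf.random.normal((32, 64)), tf.random.uniform((32, 1)), epochs=1, verbose=0)
print(len(model.optimizer.get_weights()))  # slots for the head's variables only

# Phase 2: unfreeze everything and recompile. The fresh optimizer needs
# slot variables (momentum/variance) for *all* trainable weights, so the
# phase-1 optimizer weights no longer match in number and can't be
# restored -- the optimizer state starts from scratch, while the model
# weights themselves carry over unchanged.
model.get_layer('pretrained_block').trainable = True
model.compile(optimizer=keras.optimizers.Adam(1e-4), loss='binary_crossentropy')
model.fit(tf.random.normal((32, 64)), tf.random.uniform((32, 1)), epochs=1, verbose=0)
print(len(model.optimizer.get_weights()))  # more slots now
```

With TF 2.x / Keras 2 (as in the trace below), the two `print` calls show the slot count jumping between the phases, which is exactly the mismatch the log message refers to.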
It depends on the context. Can you send the full stdout/stderr?
Here's a full output trace:
14945 training set records, 1661 validation set records, 4152 test set records.
[2021_08_26-10:38:02] Training set: Filtered out 0 of 14945 (0.0%) records of lengths exceeding 510.
[2021_08_26-10:38:03] Validation set: Filtered out 0 of 1661 (0.0%) records of lengths exceeding 510.
[2021_08_26-10:38:03] Training with frozen pretrained layers...
2021-08-26 10:38:03.798028: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-08-26 10:38:04.748114: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 9658 MB memory: -> device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:18:00.0, compute capability: 7.5
2021-08-26 10:38:04.749064: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 9658 MB memory: -> device: 1, name: GeForce RTX 2080 Ti, pci bus id: 0000:3b:00.0, compute capability: 7.5
/usr/local/lib/python3.6/dist-packages/keras/optimizer_v2/optimizer_v2.py:356: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
"The `lr` argument is deprecated, use `learning_rate` instead.")
2021-08-26 10:38:07.979178: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
Epoch 1/40
2021-08-26 10:38:14.682242: I tensorflow/stream_executor/cuda/cuda_dnn.cc:369] Loaded cuDNN version 8100
468/468 [==============================] - 27s 41ms/step - loss: 0.0963 - val_loss: 0.0779
Epoch 2/40
468/468 [==============================] - 17s 37ms/step - loss: 0.0742 - val_loss: 0.0627
Epoch 3/40
468/468 [==============================] - 17s 37ms/step - loss: 0.0733 - val_loss: 0.0701
Epoch 00003: ReduceLROnPlateau reducing learning rate to 0.0024999999441206455.
Epoch 4/40
468/468 [==============================] - 17s 37ms/step - loss: 0.0598 - val_loss: 0.0688
Epoch 00004: ReduceLROnPlateau reducing learning rate to 0.0006249999860301614.
[2021_08_26-10:39:26] Training the entire fine-tuned model...
[2021_08_26-10:39:33] Incompatible number of optimizer weights - will not initialize them.
Epoch 1/40
468/468 [==============================] - 46s 87ms/step - loss: 0.0653 - val_loss: 0.0608
Epoch 2/40
468/468 [==============================] - 39s 84ms/step - loss: 0.0485 - val_loss: 0.0556
Epoch 3/40
468/468 [==============================] - 39s 84ms/step - loss: 0.0333 - val_loss: 0.0717
Epoch 00003: ReduceLROnPlateau reducing learning rate to 2.499999936844688e-05.
Epoch 4/40
468/468 [==============================] - 39s 84ms/step - loss: 0.0202 - val_loss: 0.0545
Epoch 5/40
468/468 [==============================] - 39s 84ms/step - loss: 0.0139 - val_loss: 0.0590
Epoch 00005: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 6/40
468/468 [==============================] - 39s 84ms/step - loss: 0.0103 - val_loss: 0.0576
[2021_08_26-10:43:38] Training on final epochs of sequence length 1024...
[2021_08_26-10:43:38] Training set: Filtered out 0 of 14945 (0.0%) records of lengths exceeding 1022.
[2021_08_26-10:43:39] Validation set: Filtered out 0 of 1661 (0.0%) records of lengths exceeding 1022.
/usr/local/lib/python3.6/dist-packages/keras/optimizer_v2/optimizer_v2.py:356: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
"The `lr` argument is deprecated, use `learning_rate` instead.")
935/935 [==============================] - 85s 86ms/step - loss: 0.0166 - val_loss: 0.0581
/usr/local/lib/python3.6/dist-packages/keras/optimizer_v2/optimizer_v2.py:356: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
"The `lr` argument is deprecated, use `learning_rate` instead.")
Test-set performance:
               # records      AUC
Model seq len
512                 4152  0.99483
All                 4152  0.99483
Confusion matrix:
      0    1
0  3446   32
1    36  638
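For reference, a table like the one above can be reproduced along these lines (an illustrative sklearn sketch, not ProteinBERT's actual evaluation code; `y_true`/`y_score` are random placeholders and the 0.5 threshold is an assumption):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical arrays standing in for the test-set labels and the
# model's predicted probabilities (4152 records in the trace above).
y_true = np.random.randint(0, 2, size=4152)
y_score = np.random.rand(4152)

# AUC is computed from the raw scores; the confusion matrix needs a
# hard threshold (0.5 here, which is an assumption).
auc = roc_auc_score(y_true, y_score)
cm = confusion_matrix(y_true, (y_score >= 0.5).astype(int))
print(f'AUC: {auc:.5f}')
print('Confusion matrix:')
print(cm)
```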