Comments (3)
Why do you want to recover the original amino-acid sequence from the representations?
Like BERT-style masked language models, ProteinBERT is a denoising autoencoder: it is trained to predict the correct amino-acid sequence from a corrupted (noised) one. So in principle, if you take the output of the last hidden layer (the local representations) and feed it into the model's output layer, you should get predictions similar to the input sequence, though not necessarily identical to it, since they reflect the model's judgment of which input residues make more or less sense at each position. But then again, why would you want to recover the input specifically?
from protein_bert.
Thank you for your answer.
My explanation was unclear. My goal is not to recover the original input data (the 1D amino-acid sequence), but to predict a 1D amino-acid sequence from the output of one of the last hidden layers (a local representation).
I understand that if I feed that output into the model's output layer, I can get a prediction similar to the input sequence, but could you show me exactly how to do this with sample code?
I'm sorry, but I don't think I'll have the time to write this code. It sounds like a combination of fairly standard Keras operations: you basically need to slice the relevant layers from your model (which is specified here) and invoke the relevant layer on the relevant output. To learn more about how to use Keras, they have pretty good documentation and tutorials on their website.
from protein_bert.
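[Editor's note: a minimal sketch of the approach described above, using a toy Keras model rather than the actual ProteinBERT API; all layer names and sizes here are made up for illustration. The idea is to slice a sub-model that stops at the last hidden layer, then invoke the model's own output layer on those local representations to get a per-residue prediction.]

```python
# Sketch only: a toy stand-in for a pretrained sequence model
# (embedding -> hidden -> softmax head). Layer names "last_hidden"
# and "output_head" are hypothetical, not ProteinBERT's real names.
import numpy as np
from tensorflow import keras

vocab_size, seq_len, hidden_dim = 26, 10, 32

inputs = keras.Input(shape=(seq_len,), dtype="int32")
x = keras.layers.Embedding(vocab_size, hidden_dim)(inputs)
hidden = keras.layers.Dense(hidden_dim, activation="relu", name="last_hidden")(x)
probs = keras.layers.Dense(vocab_size, activation="softmax", name="output_head")(hidden)
model = keras.Model(inputs, probs)

# 1) Slice a sub-model that stops at the last hidden layer.
hidden_model = keras.Model(model.input, model.get_layer("last_hidden").output)

# 2) Invoke the model's output layer on those local representations.
tokens = np.random.randint(0, vocab_size, (1, seq_len))
local_repr = hidden_model(tokens)                      # shape (1, seq_len, hidden_dim)
residue_probs = model.get_layer("output_head")(local_repr)

# 3) Argmax over the vocabulary gives one predicted residue id per position.
predicted_ids = np.argmax(residue_probs.numpy(), axis=-1)
print(predicted_ids.shape)  # (1, 10)
```

With a real fine-tuned model, the same pattern applies: locate the hidden layer and output layer by name (inspect `model.summary()` to find them), build the sliced sub-model, and map the argmax ids back to amino-acid letters with the tokenizer's vocabulary.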