Comments (7)
@zrhsu0911 If I understand correctly, the task you are interested in (taking two protein sequences as input and predicting whether or not they bind by outputting a single probability) is not supported out of the box by the existing ProteinBERT package. You will have to use ProteinBERT to create your own architecture with Keras. For example, you could build a model comprising two instances of ProteinBERT, each taking a separate sequence as input, then take the final layer's global embeddings for the two sequences and add a final dense layer that decides whether or not they bind based on these representations (and fine-tune ProteinBERT's pretrained weights on your data).
from protein_bert.
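To make the suggested design concrete, here is a minimal, framework-free sketch of the scoring head only: concatenate the two global embeddings and pass them through a single dense unit with a sigmoid activation. Everything here is illustrative (random placeholder weights, a small dimension instead of ProteinBERT's actual global-embedding size); in practice this would be a trainable Keras layer sitting on top of the two ProteinBERT towers.

```python
import math
import random

def dense_sigmoid(x, weights, bias):
    """A single dense unit with sigmoid activation: p = sigmoid(w . x + b)."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def binding_probability(emb_a, emb_b, weights, bias):
    """Concatenate the two global embeddings and score the pair."""
    return dense_sigmoid(emb_a + emb_b, weights, bias)

# Toy stand-ins for the global embeddings ProteinBERT would produce
# (the real vectors are larger; the dimension here is arbitrary).
random.seed(0)
dim = 8
emb_a = [random.uniform(-1, 1) for _ in range(dim)]
emb_b = [random.uniform(-1, 1) for _ in range(dim)]
weights = [random.uniform(-0.1, 0.1) for _ in range(2 * dim)]

p = binding_probability(emb_a, emb_b, weights, bias=0.0)
print(round(p, 3))  # a probability in (0, 1)
```

In a real Keras implementation the two towers could share weights (a "Siamese" setup) or be independent; that choice depends on whether the two input proteins play symmetric roles in your task.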
@nadavbra
Thank you for your swift response to my question. Maybe I can just use it for embeddings only.
Yes, you can definitely just use ProteinBERT for extracting embeddings and then use a simple ML algorithm. That would be the simplest approach.
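As a concrete illustration of this route, here is a self-contained sketch (plain Python, toy data): assume each protein pair has already been reduced to a fixed-length feature vector, e.g. by concatenating the two ProteinBERT global embeddings, then fit a simple logistic-regression classifier on those vectors. All names, dimensions, and data below are illustrative, not part of the ProteinBERT API.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=200):
    """Plain gradient-descent logistic regression on pair feature vectors."""
    dim = len(X[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - t  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy "concatenated embedding" vectors: interacting pairs cluster around +0.5,
# non-interacting pairs around -0.5 (stand-ins for real ProteinBERT features).
random.seed(0)
dim = 6
X = [[0.5 + random.gauss(0, 0.2) for _ in range(dim)] for _ in range(20)] + \
    [[-0.5 + random.gauss(0, 0.2) for _ in range(dim)] for _ in range(20)]
y = [1] * 20 + [0] * 20

w, b = train_logreg(X, y)
acc = sum((predict(w, b, x) > 0.5) == bool(t) for x, t in zip(X, y)) / len(X)
print(f"training accuracy: {acc:.2f}")
```

With real embeddings you would of course use a held-out test set and an off-the-shelf classifier (e.g. scikit-learn) rather than this hand-rolled loop; the point is only that frozen embeddings plus a simple classifier is a complete, workable pipeline.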
@nadavbra Hi,
I have a small question. For the downstream protein-protein interaction (PPI) task, which benchmark dataset should be chosen for fine-tuning? After browsing the provided benchmark datasets, I found that only ProFET_NP_SP and signalp_binary have labels in a format consistent with the PPI task (a binary label, 0 or 1). If we were to fine-tune, which of these should we choose? Or, in your team's opinion, would it be better to use a fine-tuned model or to use your pre-trained model directly?
Best regards!
We don't have a benchmark for PPI in ProteinBERT (we never tested that in our paper) - that's a general bioinformatics question that is outside my area of expertise.
I'm not sure I understand the question about pre-trained vs. fine-tuned model. Fine-tuned for what? We only provide the pre-trained model which I expect would be useful for this task after fine-tuning.
Sorry, I may not have explained it clearly. My intention is to use the ProteinBERT model to extract protein features and improve the accuracy of protein-protein interaction (PPI) predictions. However, I am unsure whether it would be better to use the pre-trained model provided by your team directly, or to fine-tune ProteinBERT before using it. If I want to fine-tune ProteinBERT but your team does not provide a benchmark dataset for PPI fine-tuning, I am blocked on the next steps. Do you have any suggestions regarding this issue?
Wish you a pleasant day!
Related Issues (20)
- Failing to get the weights from the dedicated github repo
- Use ProteinBERT with Own Dataset
- Original h5 file
- loss plot during pretraining
- signal peptide detection
- KeyError: "Unable to open object (object 'test_set_mask' doesn't exist)"
- How to extract the embedding of an amino acid?
- Graph execution error
- Extract local and global representation using finetune model
- Running Benchmarks
- Evaluation on larger data set
- Using vector representations in the "weights" parameter in the "embedding" section of an LSTM model after fine-tuning my own data
- Failing to extract global embedding (1,15599) -> (1,512)
- What do the settings mean?
- Error when trying to run the finetuning code given in the jupyter notebook
- ValueError, set_weights error
- model_generation.py list is not callable error
- GO annotations during fine tuning
- Missing MajorPTMs train CSV file
- Can't get proteinBERT to run on GPU