Comments (5)
Hello, using another dataset with a 3x3 design I again noticed unexpected behaviour. The example protein here is present in only 2 conditions, yet the groupAverage is used in all comparisons (even in alk_3d - alk_2d).
I attach sample code to reproduce the problem and the initial peptide-level data. I also noticed that TMP summarization creates artifacts where the intensities of some proteins are identical in all samples. This happens for proteins with one-sample peptides (like g004095, g012327 and others), since the row-median subtraction sets all rows to 0. This issue is also mentioned for median polish on microarrays: https://doi.org/10.1186/1471-2105-11-553.
testcode_prolfqua.txt
quantms_msstats_input.csv.gz
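The row-median artifact can be reproduced with a toy median polish (a simplified nan-aware Python sketch, not the MSstats/prolfqua implementation):

```python
import numpy as np

def median_polish(x, iters=10):
    """Minimal nan-aware Tukey median polish.
    Returns the protein-level estimate per sample (overall + column effect)."""
    x = x.copy()
    overall = 0.0
    row = np.zeros(x.shape[0])   # peptide effects
    col = np.zeros(x.shape[1])   # sample effects
    for _ in range(iters):
        rm = np.nanmedian(x, axis=1)           # row (peptide) medians
        x -= rm[:, None]; row += rm
        cm = np.nanmedian(x, axis=0)           # column (sample) medians
        x -= cm[None, :]; col += cm
        m = np.nanmedian(row); overall += m; row -= m
        m = np.nanmedian(col); overall += m; col -= m
    return overall + col

# Each peptide observed in exactly one sample: row-median subtraction
# zeroes every row, all column effects become 0, and the protein gets
# the same intensity in every sample.
one_sample = np.array([[20.0, np.nan, np.nan],
                       [np.nan, 24.0, np.nan],
                       [np.nan, np.nan, 22.0]])
print(median_polish(one_sample))   # -> [22. 22. 22.]
```

With a complete peptide matrix the estimates do vary across samples; the artifact is specific to proteins whose peptides carry no within-row information about sample differences.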
from prolfqua.
@Roman-Si
I just pushed two commits that fix the error you have reported:
In the figure below, you can see that the contrast for protein g99273 is now correct.
Thank you for trying prolfqua and reporting problems.
Please update prolfqua:
install.packages('remotes')
remotes::install_github('fgcz/prolfqua', dependencies = TRUE)
Thank you for sharing the reproducible example and spotting the problem. Yes, it is unfortunately a bug in prolfqua.
Since you have 3 groups the model typically looks like this:
Call:
lm(formula = formula, data = x)
Coefficients:
(Intercept) condition_Psfilm condition_Psfoam
23.743 1.847 2.131
and the linear functions to compute the contrasts look like this:
> contr$get_linfct()
(Intercept) condition_Psfilm condition_Psfoam
Psfoam - Psfilm 0 -1.0 1.0
Psfoam - glucose 0 0.0 1.0
Psfilm - glucose 0 1.0 0.0
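In the full model, each contrast estimate is just the corresponding linear function applied to the coefficient vector; a quick numeric check (Python sketch, using the numbers printed above):

```python
import numpy as np

# Coefficients of the full 3-group model (from the lm output above):
# (Intercept), condition_Psfilm, condition_Psfoam
coef = np.array([23.743, 1.847, 2.131])

# Rows of the linear functions returned by contr$get_linfct()
linfct = np.array([
    [0.0, -1.0, 1.0],   # Psfoam - Psfilm
    [0.0,  0.0, 1.0],   # Psfoam - glucose
    [0.0,  1.0, 0.0],   # Psfilm - glucose
])

# Psfoam - Psfilm = 2.131 - 1.847 = 0.284, etc.
print(linfct @ coef)
```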
However, for g99273 the modelling result is:
> modg99273
# A tibble: 1 × 9
# Groups: protein_Id [1]
protein_Id data linear_model exists_lmer isSingular df.residual sigma nrcoef nrcoeff_not_NA
<chr> <list> <list> <lgl> <lgl> <dbl> <dbl> <int> <int>
1 g99273 <tibble [8 × 8]> <lm> TRUE TRUE 1 0.361 2 2
and the linear model looks like this:
> modg99273$linear_model[1]
[[1]]
Call:
lm(formula = formula, data = x)
Coefficients:
(Intercept) condition_Psfoam
26.160 -1.043
The intercept in the model for g99273 estimates not the glucose group but the Psfilm group. Hence, for Psfoam - Psfilm we should have used the linear function c(0, 1).
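A sketch of the failure mode (Python; `naive_contrast` and `safe_contrast` are hypothetical helpers, not prolfqua API): when the glucose group has no observations, the coefficient for Psfilm is dropped and the intercept becomes the Psfilm mean, so evaluating the full-model linear functions against the reduced model mislabels the contrasts.

```python
# Reduced model for g99273: glucose has no observations, so the
# intercept is the Psfilm group mean, not glucose.
coef_g99273 = {"(Intercept)": 26.160, "condition_Psfoam": -1.043}

linfct = {
    "Psfoam - Psfilm":  {"condition_Psfilm": -1.0, "condition_Psfoam": 1.0},
    "Psfoam - glucose": {"condition_Psfoam": 1.0},
    "Psfilm - glucose": {"condition_Psfilm": 1.0},
}

def naive_contrast(lf, coef):
    # Silently treats absent coefficients as 0.
    return sum(w * coef.get(name, 0.0) for name, w in lf.items())

def safe_contrast(lf, coef):
    # Refuses contrasts that reference a dropped coefficient.
    if any(name not in coef for name in lf):
        return None
    return sum(w * coef[name] for name, w in lf.items())

# "Psfoam - glucose" evaluates to -1.043, but with this intercept that
# number is actually Psfoam - Psfilm: the contrast is mislabeled.
print(naive_contrast(linfct["Psfoam - glucose"], coef_g99273))  # -1.043
print(safe_contrast(linfct["Psfilm - glucose"], coef_g99273))   # None
```

Note that name checking alone cannot catch "Psfoam - glucose", because the coefficient it references still exists; the intercept itself has changed meaning. This is why filtering out whole models with missing coefficients (as below) is the robust fix.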
An ad hoc fix would be:
mod <- prolfqua::build_model(
lfqdata$data,
formula_Condition,
subject_Id = lfqdata$config$table$hierarchy_keys())
mod$modelDF <- mod$modelDF |> dplyr::filter(nrcoeff_not_NA == 3)
This removes all models for which not all 3 model parameters could be estimated.
New release
https://github.com/fgcz/prolfqua/releases/tag/v.1.2.4
Thanks a lot for reporting the issue.
You are observing two things. The first is that you get group differences although there are no observations in one of the groups. This happens because you are using the ContrastsMissing function, which estimates a limit of detection (LOD), substitutes the missing values with the LOD, and then computes the group averages.
You then merged the contrast results using merge_contrasts_results. Your data frame now contains two types of models: a linear model for the contrasts that could be estimated with linear models, and the group-average model for all proteins with an excessive number of missing values.
You can use the modelName column in the merged data.frame to see how each contrast was estimated.
mC <- ContrastsMissing$new(lfqdata = lfqdataNormalized, contrasts = Contrasts)
merged <- prolfqua::merge_contrasts_results(prefer = contr,add = mC)$merged
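For intuition, the LOD substitution can be sketched as follows (a Python toy, not prolfqua's actual implementation; here the LOD is simply taken to be the smallest observed intensity):

```python
import numpy as np

def lod_group_difference(group_a, group_b):
    """Replace missing intensities with a limit of detection (LOD),
    then compare the group averages."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    lod = np.nanmin(np.concatenate([a, b]))   # toy LOD estimate
    a = np.where(np.isnan(a), lod, a)
    b = np.where(np.isnan(b), lod, b)
    return a.mean() - b.mean()

# A group with no observations at all still yields a group difference,
# because its missing values are filled with the LOD.
print(lod_group_difference([22.0, 21.0], [np.nan, np.nan]))  # 0.5
```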
The other problem is that we get standard error estimates of 0, which is indeed caused by using medpolish: the protein-level estimates within the group are all equal (22.0). I do not like error estimates of 0 (although running the contrasts through ContrastsModerated should shrink them towards more reasonable values), and therefore I updated the contrast computation in ContrastsMissing so that if std.error is 0, the 75% quantile of the std.error of all proteins is used instead. I just pushed those changes.
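The described substitution can be sketched like this (Python; whether the quantile is taken over all or only the nonzero standard errors is an assumption here):

```python
import numpy as np

def fix_zero_se(se):
    """Replace standard errors of exactly 0 with the 75% quantile of
    the standard errors of all proteins (quantile over all values is
    an assumption of this sketch)."""
    se = np.asarray(se, dtype=float)
    q75 = np.quantile(se, 0.75)
    return np.where(se == 0.0, q75, se)

print(fix_zero_se([0.0, 0.1, 0.2, 0.3, 0.4]))  # -> [0.3 0.1 0.2 0.3 0.4]
```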
You can use a different aggregation method (sum_topN or lmrob), but those have their own issues.
Lastly, I suggest you apply the moderation to the merged contrasts:
merged <- prolfqua::merge_contrasts_results(prefer = contr,add = mC)$merged
contr <- prolfqua::ContrastsModerated$new(merged)
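For intuition, the moderation step shrinks per-protein variances towards a common prior; a minimal sketch in the spirit of limma-style variance squeezing (the prior values s0_sq and d0 are supplied by hand here, whereas ContrastsModerated estimates such quantities from the data):

```python
import numpy as np

def moderate_variances(s2, df, s0_sq, d0):
    """Empirical-Bayes shrinkage: per-protein variances s2 (with df
    residual degrees of freedom) are pulled towards a prior variance
    s0_sq carrying d0 prior degrees of freedom."""
    s2 = np.asarray(s2, dtype=float)
    return (d0 * s0_sq + df * s2) / (d0 + df)

# A variance of 0 is pulled up towards the prior instead of staying 0.
print(moderate_variances([0.0, 1.0], df=2, s0_sq=0.5, d0=4))
```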