Comments (7)
Is a prediction result as a (PEtab measurements table)-like dataframe sufficient? This would make handling the predictions for e.g. plotting easier than the current implementation, at least. Then extra AMICI/PEtab-specific things can be optional columns.
| entity_id | value | [optional] time | [optional] condition_id | [optional] \* |
|---|---|---|---|---|
| species_A | 5 | 2 | cond1 | data1 |

\* model/problem-specific things provided by the simulator, e.g. PEtab dataset ID.
Then an ensemble prediction is one big dataframe with an additional vector_id
column, or a list of dataframes.
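For illustration, the proposed long format could be built directly with pandas; the column names below mirror the table above and are assumptions, not an agreed-upon schema:

```python
import pandas as pd

# One prediction as a measurements-table-like dataframe
# (column names are illustrative, not an agreed-upon schema).
prediction = pd.DataFrame({
    "entity_id": ["species_A", "species_A", "species_B"],
    "value": [5.0, 7.5, 1.2],
    "time": [2.0, 4.0, 2.0],
    "condition_id": ["cond1", "cond1", "cond1"],
})

# An ensemble prediction: one big dataframe with an extra vector_id column,
# here built by stacking per-vector predictions.
ensemble_prediction = pd.concat(
    [prediction.assign(vector_id=i) for i in range(3)],
    ignore_index=True,
)
```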
Currently, all data for a specific observable and a specific experiment is retrieved like this (`summary` is `EnsemblePrediction.prediction_summary`):

pyPESTO/pypesto/visualize/sampling.py, lines 176 to 181 at commit 34e89b3
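With the long-format dataframe proposed above, that per-observable, per-condition retrieval would reduce to a plain pandas filter (column and variable names here are hypothetical):

```python
import pandas as pd

# Hypothetical ensemble prediction in the proposed long format.
ensemble_prediction = pd.DataFrame({
    "entity_id": ["obs_a", "obs_a", "obs_b", "obs_a"],
    "condition_id": ["cond1", "cond1", "cond1", "cond2"],
    "time": [0.0, 1.0, 0.0, 0.0],
    "value": [1.0, 2.0, 3.0, 4.0],
    "vector_id": [0, 1, 0, 0],
})

# All values for one observable under one condition, across all vectors.
subset = ensemble_prediction.query(
    "entity_id == 'obs_a' and condition_id == 'cond1'"
)
```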
---
> Is a prediction result as a (PEtab measurements table)-like dataframe sufficient?

We would obviously need to allow entries other than observables there; otherwise, I think you are right, it would make handling visualization much easier.
---
Is there a downside to coupling it to PEtab? The upcoming PEtab Result format could cover ensemble predictions as simulation experiments. This could reduce and shift this part

> to create a prediction a "predictor" is currently needed, which in my experience is so individualised that I find it questionable how much work we actually save people

into a nicer/simpler format.
I agree, ensembles themselves can be completely decoupled from AMICI or predictions, and simply serve as a thin wrapper around a NumPy array of parameter vectors. Something useful for such a wrapper would be a nice way to specify prediction experiments, e.g. how to tell pyPESTO to "create a new ensemble from the current ensemble that predicts a knockout experiment, by setting this parameter to zero". I guess having the ensemble be a `pandas.DataFrame` could enable this, e.g.

```python
knockout_ensemble = ensemble.copy()
knockout_ensemble["knocked_out_parameter"] = 0
```
Re: supported simulators, if PEtab is used to specify the ensemble predictions, then we could use the `petab.simulate.PetabSimulator` [1] as the base class for the simulator, such that any simulator that implements enough of the `PetabSimulator` interface can be used. This base class might need some work.
---
Thanks for starting this discussion. To reduce complexity, I would suggest to first tackle the higher-level questions. What is generated from a parameter ensemble or model ensemble? Is there a common structure that can/should be represented in pypesto? Is the current `EnsemblePrediction`, `PredictionResult`, `PredictionConditionResult` what we want? What will be done with that? The question of support for different types of models and simulators, and where which functionality should be implemented, would come further down the road for me.
Currently, an `Ensemble` is only considered an accumulation of vectors, implicitly assuming that the general model structure is always the same.
I think this covers the main use case in pypesto already, but once there exists some concept of model in pypesto, it shouldn't be hard to support the more general case. In case of a bigger refactoring, I would preventively rename `Ensemble` to `ParameterEnsemble`, so a `ModelEnsemble` can be introduced once required.
> Is there a downside to coupling it to PEtab?

It wouldn't be usable for any non-PEtab applications. Nevertheless, it might be better to have some easy-to-use functionality coupled to PEtab than to have some practically unusable general concept. In any case, it should be made clear that it is (supposed to be) tied to PEtab.
---
> What is generated from a parameter ensemble or model ensemble?

I'd be happy to hear more about the use cases for a model ensemble first. If it's the calibrated models from model selection, it might make more sense to move some of this to PEtab Select, e.g. such that a PEtab Select model ensemble can be represented by a collection of pyPESTO `ParameterEnsemble`s.
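As a rough illustration of that idea (names and structure are purely hypothetical, not an existing pyPESTO or PEtab Select API), a model ensemble could be a mapping from model IDs to per-model parameter ensembles:

```python
import numpy as np

# Hypothetical sketch: each PEtab Select model gets its own parameter
# ensemble, stored as an (n_vectors, n_parameters) array.
model_ensemble = {
    "M1_0": np.array([[0.1, 2.0], [0.2, 1.8]]),  # 2 vectors, 2 parameters
    "M1_1": np.array([[0.1, 2.0, 5.0]]),         # 1 vector, 3 parameters
}

# The models may differ in structure, so the per-model ensembles do not
# need to share a parameter dimension.
n_vectors_total = sum(len(vectors) for vectors in model_ensemble.values())
```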
> I would preventively rename `Ensemble` to `ParameterEnsemble`, so a `ModelEnsemble` can be introduced once required.
👍
---
> I'd be happy to hear more about the use cases for a model ensemble first.

The PEtab Select case was what I mainly thought about. I would think that moving it (or parts of it) to PEtab Select makes sense, but we should then clarify what we understand by Ensemble, as Daniel mentioned:

> I would preventively rename `Ensemble` to `ParameterEnsemble`, so a `ModelEnsemble` can be introduced once required.

But in PEtab Select we would probably also need some way to create them? 🤔
> What is generated from a parameter ensemble or model ensemble? Is there a common structure that can/should be represented in pypesto?

I really think that a very large portion of predictions boils down to values of some `sbml_id` at given timepoints (which might not agree with the measurement timepoints) under specific conditions. And I do think there can/should be a structure to represent this in pypesto.
> Is the current `EnsemblePrediction`, `PredictionResult`, `PredictionConditionResult` what we want?

Looking at it, I feel like the `PredictionResult` as a light wrapper is perfectly fine as is; one might be able to condense it by just having this as a dict `{condition_id: PredictionConditionResult}`. Regarding the `PredictionConditionResult`: this is currently heavily tailored to AMICI with the sensitivities, so I am not entirely sure whether we would even want all the things there.
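A minimal sketch of that condensed shape (purely illustrative; the field names are assumptions, not the current pyPESTO classes):

```python
from dataclasses import dataclass, field

@dataclass
class PredictionConditionResult:
    # Illustrative fields only: timepoints and simulated outputs
    # for a single condition.
    timepoints: list[float] = field(default_factory=list)
    outputs: dict[str, list[float]] = field(default_factory=dict)

# The condensed PredictionResult: just a mapping from condition ID to
# the per-condition result, instead of a dedicated wrapper class.
prediction_result: dict[str, PredictionConditionResult] = {
    "cond1": PredictionConditionResult(
        timepoints=[0.0, 1.0],
        outputs={"species_A": [5.0, 7.5]},
    ),
}
```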
---
> Looking at it, I feel like the `PredictionResult` as a light wrapper is perfectly fine as is, one might be able to condense it by just having this as a dict `{condition_id: PredictionConditionResult}`. Regarding the `PredictionConditionResult`: This is currently heavily tailored to Amici with the sensitivities, so not entirely sure whether we would even want all the things there.
I am not sure if there is much added value in any of those. So far, the main thing is: 1) creating a parameter ensemble, 2) running simulations and collecting some outputs, and 3) computing and visualizing some statistics. The last step is probably most easily done directly with pandas/seaborn once everything is in a properly organized dataframe.
(This shouldn't exclude the option of extending the PEtab visualization functionality to allow plotting things like confidence bands based on some PEtab visualization file.)
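To illustrate step 3), the statistics behind such confidence bands can indeed be computed directly from a properly organized long-format dataframe with pandas (column names are the hypothetical ones discussed above):

```python
import pandas as pd

# Hypothetical ensemble prediction in the proposed long format.
ensemble_prediction = pd.DataFrame({
    "entity_id": ["species_A"] * 6,
    "time": [0.0, 1.0] * 3,
    "value": [1.0, 2.0, 1.5, 2.5, 0.5, 1.5],
    "vector_id": [0, 0, 1, 1, 2, 2],
})

# Per-timepoint median and a central 90% band across the ensemble,
# ready for plotting with e.g. seaborn or matplotlib.
bands = (
    ensemble_prediction
    .groupby(["entity_id", "time"])["value"]
    .quantile([0.05, 0.5, 0.95])
    .unstack()
)
```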
> Is a prediction result as a (PEtab measurements table)-like dataframe sufficient?

I'd say so.

> This would make handling the predictions for e.g. plotting much easier than the current implementation.

Yes.
---