Comments (7)
Thank you very much Dr. Vincent for your tremendous help!!!
from delay-discounting-analysis.
Hi. I'm glad you find it useful.
If logk is -5, that is not necessarily very low. It can be useful to calculate the half-life, which is the delay at which the present subjective value is halved. The half-life is 1/k, or equivalently 1/exp(logk). So for your data that would be 1/k = 1/exp(-5) = 148.4 days. So I don't think this is a surprisingly low number.
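As a quick illustrative calculation (not part of the toolbox), the half-life is one line, and it also shows how different the answer is if the log is mistakenly read as base 10:

```python
import math

def half_life_days(logk: float) -> float:
    """Half-life of a hyperbolic discounter: 1/k, with k = exp(logk)."""
    return 1.0 / math.exp(logk)

print(round(half_life_days(-5), 1))  # 1/exp(-5) = 148.4 days
print(round(1.0 / 10 ** -5, 1))      # if logk were base 10: 100000.0 days
```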
But it might make sense to double-check whether you have a lot of participants who always choose the immediate reward; some datasets have this kind of issue.
In terms of models, you could either:
- run the separate model on all the data, or
- run a group model on condition 1 data, and another group model on the condition 2 data.
In the first option, there is no hierarchical inference and no Bayesian shrinkage, but that doesn't make it right or wrong. It depends on your purpose and the experiment.
Out of interest, is this repeated measures data? Is each participant present in both condition 1 and condition 2?
> Hi. I'm glad you find it useful.

**Thank you so much for the prompt reply! That toolbox is super useful.**

> If logk is -5, that is not necessarily very low. It can be useful to calculate the half-life, which is the delay at which the present subjective value is halved. The half-life is 1/k, or equivalently 1/exp(logk). So for your data that would be 1/k = 1/exp(-5) = 148.4 days. So I don't think this is a surprisingly low number.

I might have calculated the half-life wrongly: for logk = -5 I computed k = 10^(-5) instead of k = e^(-5). I will correct that.

> But it might make sense to double-check whether you have a lot of participants who always choose the immediate reward; some datasets have this kind of issue.

Actually, my participants seem very patient and choose the delayed reward a lot. Some of them even have logk lower than -10, which worries me.

> In terms of models, you could either:
> - run the separate model on all the data, or
> - run a group model on condition 1 data, and another group model on the condition 2 data.
>
> In the first option, there is no hierarchical inference and no Bayesian shrinkage, but that doesn't make it right or wrong. It depends on your purpose and the experiment.
>
> Out of interest, is this repeated measures data? Is each participant present in both condition 1 and condition 2?

All my participants were presented with both condition 1 and condition 2. Should I divide the conditions for each participant and run the model separately, as suggested?
Thank you!
So in terms of very low discount rates like log(k)=-10, it is hard to know if that is problematic or not. Firstly, it depends on the participant population and the experiment. For example, if they are addicts in withdrawal then that would be worrying. If they are healthy older adults then possibly less of a problem. Secondly, it is useful to know if your discounting questions are broad enough to cover a useful range of logk values. An obvious way to test this is to see if there are many people who always choose immediate, or always choose delayed rewards. Another, more quantitative way of doing that would be to work out the maximum and minimum range of discount rates your discounting measure is sensitive to.
If you wanted to work that out, you could do it 2 ways:
- Through calculation. Set the present subjective values of A and B equal, i.e. RA/(1+k·DA) = RB/(1+k·DB), then rearrange for k. So you'd be able to plug in the values for each discounting choice and find the k at which a participant is indifferent.
- You could also just do it through simulation. See Figure 2 in my pre-print here https://psyarxiv.com/xdg6y The (Python) code to do it is relatively simple https://mfr.osf.io/render?url=https://osf.io/b3a5g/?direct%26mode=render%26action=download%26mode=render
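The calculation route fits in a few lines. Rearranging RA/(1+k·DA) = RB/(1+k·DB) gives k = (RB − RA)/(RA·DB − RB·DA); the sketch below is illustrative only, with made-up rewards and delays:

```python
import math

def indifference_k(RA: float, DA: float, RB: float, DB: float) -> float:
    """Discount rate k at which RA/(1 + k*DA) equals RB/(1 + k*DB)."""
    return (RB - RA) / (RA * DB - RB * DA)

# Made-up question: $50 now (DA = 0) versus $100 in 30 days.
k = indifference_k(RA=50, DA=0, RB=100, DB=30)
print(k, math.log(k))  # k = 1/30, logk = -log(30) ≈ -3.4
```

Computing this for your most extreme questions gives the range of logk values the measure is sensitive to; anyone whose true logk falls outside that range will look like an "always immediate" or "always delayed" responder.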
> All my participants were presented with both condition 1 and condition 2. Should I divide the conditions for each participant and run the model separately, as suggested?
There is no correct answer, but running the separate model would be simplest.
If you wanted to go ultra-Bayesian, it would be possible to create a new model which incorporates the knowledge you have about the repeated-measures structure of the data, but this isn't in the Matlab toolbox. If that sounds interesting and you think it would meaningfully add something to your results, I would be happy to collaborate; I have run a similar hierarchical repeated-measures model across 2 groups (in Python) with collaborators before. But the simplest and fastest solution is to use the separate model.
Thank you very much Dr. Vincent. This is extremely helpful.
Another question I have is about the output. It conveniently provides two .csv files, "parameterEstimates" and "parameterEstimates_ALT", but I couldn't work out the difference between them. Also, "parameterEstimates" has a column "logk_median". Is this a natural log value (i.e. k = e^logk_median) or a base-10 log value (i.e. k = 10^logk_median)?
I think that parameterEstimates_ALT was something experimental. I can't quite remember now, as it's been a while since I've maintained this package; I've stopped using Matlab in favour of Python and Julia.
All the logs should be base e. That is the default used by both JAGS and Matlab. So they are all natural logarithms.
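Since the column is base e, converting logk_median back to k (and a half-life) is a single call to exp. A hedged sketch over an invented two-row excerpt (only the logk_median column name comes from the actual output; the id column and values are made up):

```python
import csv
import io
import math

# Invented excerpt standing in for parameterEstimates.csv.
sample = "id,logk_median\np01,-5.0\np02,-10.0\n"

for row in csv.DictReader(io.StringIO(sample)):
    k = math.exp(float(row["logk_median"]))  # base e, NOT 10**logk_median
    print(row["id"], round(k, 6), round(1 / k, 1))  # k and half-life in days
```

A logk of -10 corresponds to a half-life of about 22000 days (roughly 60 years), which is one way to see how extreme those very patient participants are.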