Comments (18)

chmathys commented on June 18, 2024

commented on June 18, 2024

Dear Dr. Mathys,

Thank you for your quick reply.

If the belief updates are driven only by the inputs, then what about the prediction error da1(k)? That prediction error is computed between the actual input (0 or 1) and the prediction made before the trial outcome is experienced, mu1hat(k). This mu1hat(k) should be related mainly to the response rather than to the stimulus input, because the response is a consequence of this prediction.

From the model, however, I find that the prediction error da1(k) is also driven mainly by the stimulus input, and even if I assume completely random responses, the prediction error does not change (see figure). Does that make sense?
Or is there a quantity in the model that represents the trial-wise change in the choice probability?
[Figure: figure2]
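For concreteness, a minimal sketch of the distinction in question, assuming the usual trajectory fields returned by tapas_fitModel (est is the fitted structure, u the input vector, y the binary response vector; variable names are illustrative):

    % Outcome prediction error at level 1: input against the prediction, so it is driven by u
    da1      = est.traj.da(:,1);            % equivalently u - est.traj.muhat(:,1)
    % A choice prediction error instead compares the actual response with the same prediction
    choicePE = y - est.traj.muhat(:,1);     % driven by the responses, not by the inputs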

Thank you very much again

Best,
Bin

chmathys commented on June 18, 2024

commented on June 18, 2024

Dear Dr. Mathys,

Thank you so much for your kind answer. I realized that the choice prediction error is what I want, rather than the outcome prediction error. :)

I also have one more question about the sign (positive or negative) of the prediction errors (i.e., the precision-weighted PE or the choice PE). When I use them as parametric modulators in SPM, should I enter their absolute values or their signed values?
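For illustration, a minimal sketch of entering a signed prediction-error trajectory as a parametric modulator via SPM's "multiple conditions" format (est is assumed to be the fitted HGF structure; outcome_onsets is a hypothetical vector of event onsets in seconds; check the column convention of traj.epsi in your toolbox version):

    % Signed precision-weighted PE (here: the one driving the level-2 update)
    pwPE2 = est.traj.epsi(:,2);
    names     = {'outcome'};
    onsets    = {outcome_onsets};
    durations = {0};
    pmod(1).name  = {'pwPE2'};
    pmod(1).param = {pwPE2};      % signed values; SPM mean-centres modulators by default
    pmod(1).poly  = {1};
    save('conditions_pwPE2.mat', 'names', 'onsets', 'durations', 'pmod');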

Thanks again,
Best,

chmathys commented on June 18, 2024

adonda10 commented on June 18, 2024

Dear Chris,

a related question about using the HGF with real responses and inputs:

(Simple context: the binary perceptual model with the unit-square sigmoid as the response model.)
Do we use the posterior means of any parameters from the perceptual model at all (e.g., the omegas)?

In three of our participants, fitting the combined response + perceptual model to the real responses and inputs gives a posterior mean for omega (level 2) that is very different (-8) from the prior value (-3). In those three participants the HGF trajectories also look quite flat (mu(2) and the implied learning rate at level 1). Is it valid to fix the omegas to the posterior means from the perceptual model? Or should we go with the Bayes-optimal parameter estimates from the combined model, even if the time courses of the HGF trajectories look flat?
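For reference, a minimal sketch of what fixing an omega looks like in the toolbox's config-file conventions (a prior variance of zero fixes the parameter at its prior mean; the values shown are purely illustrative, not recommendations):

    % In a copy of the perceptual config file, e.g. tapas_hgf_binary_config.m (3-level setup):
    c.ommu = [NaN, -3, -6];   % prior means of the omegas (level 1 has no free omega here)
    c.omsa = [NaN,  0,  0];   % prior variance 0 => each omega is fixed at its prior mean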

The behaviour seems ok, though (see attached figures: perceptual only / combined real response + perceptual)

Thanks so much in advance

Best

A.Donda

COMBINED: [Figure: combined_response_perceptual]
PERCEPTUAL ONLY: [Figure: perceptual]

chmathys commented on June 18, 2024

adonda10 commented on June 18, 2024

Dear Chris,

thanks so much. That is helpful.

I guess that the main question we have is (thanks in advance!):

Why is it valid for us to decide how much variance we accept in the priors on the omegas (e.g., omega2 = -3 with a prior variance of 1, or of 16)? Is it OK to decide that omega2 = -8 is not valid? But then, why did the HGF find that value as the posterior mean for omega2?
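One way to see why a broad prior can settle on omega2 = -8: the Gaussian log-prior contributes a penalty of (deviation)^2 / (2 * prior variance), so with a prior variance of 16 a deviation of 5 from the prior mean costs less than one unit of log-probability, whereas with a variance of 1 it costs 12.5. A back-of-the-envelope check:

    om = -8; mu = -3;                        % posterior mean vs. prior mean of omega2
    penalty_broad  = (om - mu)^2 / (2*16)    % prior variance 16 -> 0.78: barely penalized
    penalty_narrow = (om - mu)^2 / (2*1)     % prior variance 1  -> 12.5: strongly penalized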

Thanks so much!

Best wishes,
Maria

chmathys commented on June 18, 2024

adonda10 commented on June 18, 2024

Fantastic,

thanks so much.

Best wishes

ianthe00 commented on June 18, 2024

Dear Dr. Mathys,

we are now working with the continuous response model (gaussian_obs) and two levels (mu1, mu2). Everything looks fine and the LME is good as well. However, we often get mu2 (volatility) values that drop from an initial value of 1 to negative values (e.g., -3 down to -10). This can be changed by constraining omega2, but we wonder whether it is a real problem. Is it an issue to get negative mu2 values?
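For reference, in the standard HGF the second level sets the step size of the first-level random walk on a log scale, so negative values of mu2 correspond to a small implied step-size variance rather than anything unphysical. A small worked example (parameter values illustrative):

    % Level-1 state equation: x1(k) ~ N( x1(k-1), exp(kappa*x2(k) + omega) )
    kappa = 1; omega = -4; mu2 = -3;
    implied_variance = exp(kappa*mu2 + omega)   % about 9.1e-4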
Thanks in advance

Best,
Ianthe

[Figure: hgf_continuous]

chmathys commented on June 18, 2024

ianthe00 commented on June 18, 2024

True, thanks so much.

Ianthe

ianthe00 commented on June 18, 2024

Dear Chris,

in December 2018 you advised us to "tighten your prior [on the omegas] by reducing the prior variance in the config file", based on the binary categorical model results we shared.

Our question now is:
If a broad prior variance on the omegas works well for most participants (e.g., omsa = [4 4]), except for 2-3 participants in whom we would need to constrain the prior on the omegas, what is the common approach you would advise us to follow?
(A) Reduce the prior variance on the omegas for all participants? (A minimal sketch of this option follows below.)
(B) Or only for those 2-3 participants, and report it in the table of priors in a publication?
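For concreteness, a minimal sketch of option (A), assuming the standard tapas_fitModel interface; the subjects variable and its fields y (responses) and u (inputs) are hypothetical placeholders:

    % Fit every participant with the same, once-edited config files so that
    % all participants share identical priors (option A).
    prc_config = 'tapas_ehgf_binary_config';   % shared perceptual config
    obs_config = 'tapas_unitsq_sgm_config';    % shared response-model config
    est = cell(1, numel(subjects));
    for s = 1:numel(subjects)
        % subjects{s}.y and subjects{s}.u are this participant's responses and inputs
        est{s} = tapas_fitModel(subjects{s}.y, subjects{s}.u, prc_config, obs_config);
    end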

Thanks
Best,
Antonia

chmathys commented on June 18, 2024

ianthe00 commented on June 18, 2024

Dear Chris,

thank you for the quick & insightful reply.
I absolutely agree with you: (B) is not a valid option, since all participants should have the same priors. We just cannot understand why the HGF model we are using (tapas_ehgf_binary, version 6.0.0) breaks down after a few trials in 2-3 participants with very good behavioural performance (see the figure below: they learn quite well, but the learning-rate steps in black in the bottom plot are too high).

Thanks. Yes, we're concerned that reducing the prior variances on the omegas for everybody will be too restrictive to properly estimate parameters and observe effects. But we will explore this further.

We will also try other changes to the priors. Otherwise, as you suggest, we will simply report that those broad omega priors do not work well in 2-3 people, speculate why that may be, and then exclude those participants from the results.

Thanks so much again for the toolbox and for maintaining this site!

Best
Antonia

[Figure]

ianthe00 commented on June 18, 2024

Dear Chris,

after trying many things to keep the ehgf_binary model from diverging in the case (figure) we shared in the previous message, we have not succeeded. We could exclude this participant (and others) from the final analysis, but we wonder whether you could share some insight?

Changing the prior variances on the omegas does not change much this time (as opposed to our experience with the old hgf_binary).

  • We tried to estimate omega2 and omega3 from simulated data: omega2 recovers well, but omega3 does not (in our attempts; a rough sketch of this recovery check is given after this list).
  • We then considered fixing omega3 while keeping kappa(1) free (in addition to omega2 and the response-model zeta). However, the model still diverges in the participant example shown above.
  • We also left omega2, omega3, kappa(1), and zeta all free and used both broad and narrow prior distributions, but this did not fix the model fit for the participant shown above.
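A rough sketch of the recovery check mentioned in the first bullet, assuming the simulate-then-refit workflow of tapas_simModel/tapas_fitModel; the parameter-vector layout ([mu_0, sa_0, rho, ka, om] for the binary HGF) and all values are illustrative and should be checked against the config file of the model version used:

    om2_true = -2.5; om3_true = -6;                 % "ground truth" omegas for the simulation
    sim = tapas_simModel(u, 'tapas_ehgf_binary', ...
            [NaN 0 1  NaN 1 1  NaN 0 0  1 1  NaN om2_true om3_true], ...
            'tapas_unitsq_sgm', 5);                 % zeta = 5 for the response model
    est = tapas_fitModel(sim.y, u, 'tapas_ehgf_binary_config', 'tapas_unitsq_sgm_config');
    est.p_prc.om                                    % compare recovered omegas with the true values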

We checked everything (code, simulations, config file) and could not get the ehgf_binary model to work for the participant (and the others) mentioned above.

Any insight into how to prevent the ehgf_binary model from exploding in some cases would be greatly appreciated.

Thanks in advance

Best,
Antonia

chmathys commented on June 18, 2024
