This is not meant to be a stab at implementing a very general GLM bolted on to the delay discounting model. It is simply a way of organising the labelling of incoming data, dispatching it out (group-wise) for analysis, then exporting the results.
modelPrefs = {'savePath', fullfile(save_path, 'model_output'), ...
    'pointEstimateType', 'median', ...
    'sampler', 'jags', ...
    'shouldPlot', 'no', ...
    'mcmcParams', struct('nsamples', 10^5, ...
        'chains', 4, ...
        'nburnin', 2*10^3)};
results = analyseGroups('dataLocation', data_path, ...
    'model', 'ModelHierarchicalLogK', ...
    'modelPrefs', modelPrefs, ...
    'filenameParser', @myFilenameParser, ...
    'groupBy', {'condition', 'participant'}, ...
    'saveLocation', save_path);
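A minimal sketch of what @myFilenameParser might look like. Both the filename scheme (participant then condition, separated by a hyphen, e.g. 'p01-hungry.txt') and the output field names are hypothetical; the real parser would need to match however the dataset's files are actually named.

function info = myFilenameParser(filename)
    % Hypothetical parser: extract group labels from a filename such as
    % 'p01-hungry.txt' -> participant = 'p01', condition = 'hungry'
    [~, name, ~] = fileparts(filename);   % strip path and extension
    parts = strsplit(name, '-');
    info.participant = parts{1};
    info.condition   = parts{2};
end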
If we set groupBy = {'condition', 'participant'}, then the data should be grouped by unique combinations of condition and participant. If we then use a fully hierarchical model, we get the 'shrinkage' effect within each group, avoiding the problematic approach of modelling a set of separate groups as if they were one homogeneous group. This is not a 'full' solution; ideally we would want a full GLM approach where we can have participant and group means, etc.
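The 'unique combinations' grouping could be sketched with findgroups over a table of labels (the table and variable names here are illustrative, not part of the toolbox):

% Illustrative: one row per data file, labels from the filename parser
T = table({'p01'; 'p01'; 'p02'}, {'hungry'; 'sated'; 'hungry'}, ...
    'VariableNames', {'participant', 'condition'});

% One group id per unique (condition, participant) combination
g = findgroups(T.condition, T.participant);

% Each unique value of g would then be dispatched as a separate
% (hierarchical) model fit, and the results exported per group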
This is being driven by analysis of the 'hunger dataset'.