
embraceuncertainty's People

Contributors

dmbates, kliegl, palday, rikhuijzer


Forkers

sbalci

embraceuncertainty's Issues

Shall we bump the jupyter kernel spec to julia-1.10?

Now that the release version of Julia is 1.10.0, I don't have a julia-1.9 IJulia kernel accessible through juliaup without adding it explicitly. @palday Would it be okay to bump the jupyter setting in the files in this repository to julia-1.10?
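A minimal sketch of adding the kernel explicitly, assuming IJulia is installed and the session is running Julia 1.10 (e.g. after juliaup default 1.10):

using IJulia
installkernel("Julia")  # under Julia 1.10 this should register a kernelspec named "julia-1.10"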

BLUPs / conditional modes discussion

"These values are often considered as some sort of “estimates” of the random effects. It can be helpful to think of them this way but it can also be misleading. As we have stated, the random effects are not, strictly speaking, parameters — they are unobserved random variables. We don’t estimate the random effects in the same sense that we estimate parameters."

It could be worth pointing out that in Bayesian LMMs these actually have the status of parameters.
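For reference, a minimal sketch of how the values in question are obtained in MixedModels.jl (using the sleepstudy data purely for illustration, not the book's text):

using MixedModels
m = fit(MixedModel, @formula(reaction ~ 1 + days + (1 + days | subj)),
        MixedModels.dataset(:sleepstudy); progress = false)
raneftables(m).subj  # conditional modes (BLUPs) of the random effects, one row per subject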

Fig 1.3

Are these bootstrapped samples of $\beta$ or $\hat\beta$? Also, shouldn't the y-axis be labeled "density" rather than "pdf"?

Comment/question on degeneracy

"To reiterate, this model can be reduced to a linear model because the random effects are inert, in the sense that they have a variance of zero." Is this sentence correct? The random effects don't necessarily have a variance of zero, the estimate of the variance from the data is zero.

Also, regarding the model being called degenerate: is it the model that is degenerate or the variance-covariance matrix? In Pinheiro and Bates, I think "degenerate" was used only in the context of the variance-covariance matrix of the random effects.

Finally, it may be worth mentioning that the zero estimate of the random-effect SD is just due to sparse data. If we had a lot more data, the estimate would probably be non-zero. A simulation, like the sketch below, can demonstrate this point.
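A minimal sketch of such a simulation (the helper name fitted_subj_sd and the parameter values are illustrative, not from the book): simulate responses from a model whose subject SD is truly 0.5, refit, and compare the estimate under sparse versus plentiful data.

using DataFrames, MixedModels, Random

function fitted_subj_sd(nsubj, nobs; sd_subj = 0.5, sd_resid = 1.0, rng = MersenneTwister(42))
    dat = DataFrame(subj = repeat(string.("S", 1:nsubj); inner = nobs),
                    y = zeros(nsubj * nobs))
    m = LinearMixedModel(@formula(y ~ 1 + (1 | subj)), dat)
    simulate!(rng, m; β = [0.0], σ = sd_resid, θ = [sd_subj / sd_resid])  # overwrite the response
    refit!(m)
    return only(m.σs.subj)  # estimated subject SD
end

fitted_subj_sd(5, 3)    # sparse data: the estimate may collapse to zero
fitted_subj_sd(50, 20)  # much more data: the estimate should be close to 0.5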

PNG, SVG and Quarto

@dmbates commented over in #18

I notice that figures are available in both .svg and .png formats. It would be good if one of these could be suppressed. Occasionally it is desirable to have .png, e.g. when the number of points to plot is very large, but generally .svg looks better.

Pandoc crossref

The prefixes are described in the pandoc-crossref documentation.

But by default pandoc treats the top level as a section and not a chapter, as discussed in the caveats:

Because pandoc-crossref offloads all numbering to LaTeX if it can, chapters: true has no direct effect on LaTeX output. You have to specify Pandoc's --top-level-division=chapter option, which should hopefully configure LaTeX appropriately.
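For reference, a sketch of the corresponding invocation (the file names are placeholders):

pandoc --filter pandoc-crossref --top-level-division=chapter -M chapters=true book.md -o book.pdf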

Extend dataset method to incorporate MixedModels datasets

I propose that we have all the datasets that we use available through the EmbraceUncertainty.dataset generic and remove the data directory.

For the purposes of the book, dataset(:foo) will mean EmbraceUncertainty.dataset(:foo).

Those datasets that are available through MixedModels.dataset will be grandfathered into EmbraceUncertainty.dataset by checking the names against the result of MixedModels.datasets().

Data tables from the English Lexicon Project are available in the arrow directory of https://github.com/dmbates/EnglishLexicon.jl

The datasets used in the longitudinal data chapter are available in the osf.io repository https://osf.io/sr46c/.

For the time being I think I will customize the dataset retrieval for each name, just so we can get it working. Later we can decide whether there should be a unified storage and retrieval protocol.
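A minimal sketch of the proposed dispatch (load_local_dataset is a hypothetical placeholder for the per-name retrieval; only the check against MixedModels.datasets() is as described above):

using MixedModels: MixedModels

function dataset(nm::Symbol)
    # grandfather anything already provided by MixedModels.dataset
    String(nm) in MixedModels.datasets() && return MixedModels.dataset(nm)
    # otherwise fall back to per-name retrieval (EnglishLexicon arrow tables, osf.io, ...)
    return load_local_dataset(nm)  # hypothetical helper
end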

Take advantage of recent changes in MixedModels.jl

Currently we assign Grouping() contrasts to any grouping factors. In the past this was necessary to avoid unnecessarily creating large, dense contrast matrices when grouping factors had many levels.

After JuliaStats/MixedModels.jl@9ee4a0a we no longer need to do so (thanks to @palday).

Shall we remove the explicit assignments of Grouping() contrasts? This will allow us to avoid creating a contrasts Dict until late in the second chapter when we apply a +/- 1 encoding to a binary experimental factor.

Also, the show_progress named argument to parametricbootstrap has been deprecated in favor of progress. It makes sense now to assign

const progress = false

and add a named argument progress to calls to fit and parametricbootstrap.
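A minimal sketch of the proposed usage (the kb07 model is illustrative, not necessarily the formula used in the book; EffectsCoding gives a ±1 encoding for a two-level factor):

using MixedModels, Random

const progress = false

# ±1 encoding for the binary experimental factors; no explicit Grouping() entries needed
contrasts = Dict(:spkr => EffectsCoding(), :prec => EffectsCoding(), :load => EffectsCoding())

m = fit(MixedModel, @formula(rt_trunc ~ 1 + spkr + prec + load + (1 | subj) + (1 | item)),
        MixedModels.dataset(:kb07); contrasts, progress)
samp = parametricbootstrap(MersenneTwister(42), 1000, m; progress)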

I will create a PR for these changes if there is no objection.
