
lavaanExtra's Introduction

lavaanExtra: Convenience functions for lavaan


Affords an alternative, vector-based syntax to lavaan, as well as other convenience functions such as naming paths and defining indirect links automatically. Also offers convenience formatting optimized for a publication and script sharing workflow.

Installation

You can install the lavaanExtra package directly from CRAN:

install.packages("lavaanExtra")

Or the development version from the r-universe (note that there is a 24-hour delay with GitHub):

install.packages("lavaanExtra", repos = c(
  rempsyc = "https://rempsyc.r-universe.dev",
  CRAN = "https://cloud.r-project.org"))

Or from GitHub, for the very latest version:

# If not already installed, install package `remotes` with `install.packages("remotes")`
remotes::install_github("rempsyc/lavaanExtra")

To see all the available functions, use:

help(package = "lavaanExtra")

Dependencies: Because lavaanExtra is a package of convenience functions relying on several external packages, it uses (inspired by the easystats packages) a minimalist philosophy of only installing packages that you need when you need them through rlang::check_installed(). Should you wish to specifically install all suggested dependencies at once (you can view the full list by clicking on the CRAN badge on this page), you can run the following (be warned that this may take a long time, as some of the suggested packages are only used in the vignettes or examples):

install.packages("lavaanExtra", dependencies = TRUE)
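The on-demand pattern described above can be sketched in base R. This is an illustrative analogue built on `requireNamespace()`, not lavaanExtra's actual code, which uses `rlang::check_installed()` (and can prompt to install interactively); the `needs_pkg` helper name is hypothetical.

```r
# Hypothetical base-R analogue of the on-demand dependency pattern.
# lavaanExtra itself uses rlang::check_installed() instead.
needs_pkg <- function(pkg) {
  if (!requireNamespace(pkg, quietly = TRUE)) {
    stop("Package '", pkg, "' is required for this function. ",
         "Install it with install.packages('", pkg, "').", call. = FALSE)
  }
  invisible(TRUE)
}

needs_pkg("stats")  # succeeds silently for an installed package
```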

Why use lavaanExtra?

  1. Reusable code. Don’t repeat yourself: when comparing and fitting models, you only change the few things that differ.
  2. Shorter code. Because of point 1, you can have shorter code, since you write it once and simply reuse it. For items with similar patterns, you can also use paste0() with appropriate item numbers instead of typing each one every time.
  3. Less error-prone code. Because of point 1, you run less risk of human error, since you don’t have multiple, possibly different versions of the same thing (which also makes corrections easier).
  4. Better control over your code. Because of point 1, you are in control of the whole flow. You change it once, and the change propagates everywhere else in the script, without you having to update each model manually.
  5. More readable code. Because of point 1, other people (but also yourself) only have to process the information the first time to make sure it’s been specified correctly, and not every time you check the new models.
  6. Prettier code. Because it will format the model for you in a pretty way, every time. You don’t have to worry about manually making your model good-looking and readable anymore.
  7. More accessible code. You don’t have to remember the exact syntax (although it is recommended) for it to work. It uses intuitive variable names that most people can understand. This benefit is most apparent for beginners, but it also saves precious typing time for veterans.

Overview

CFA example

SEM example

Final note

CFA example

# Load libraries
library(lavaan)
library(lavaanExtra)

# Define latent variables
latent <- list(
  visual = c("x1", "x2", "x3"),
  textual = c("x4", "x5", "x6"),
  speed = c("x7", "x8", "x9")
)

# If you have many items, you can also use the `paste0` function:
x <- paste0("x", 1:9)
latent <- list(
  visual = x[1:3],
  textual = x[4:6],
  speed = x[7:9]
)

# Write the model, and check it
cfa.model <- write_lavaan(latent = latent)
cat(cfa.model)
#> ##################################################
#> # [-----Latent variables (measurement model)-----]
#> 
#> visual =~ x1 + x2 + x3
#> textual =~ x4 + x5 + x6
#> speed =~ x7 + x8 + x9
# Fit and plot the model with `lavaanExtra::cfa_fit_plot`
# to get the factor loadings visually (optionally as a PDF)
fit.cfa <- cfa_fit_plot(cfa.model, HolzingerSwineford1939)
#> lavaan 0.6-18 ended normally after 35 iterations
#> 
#>   Estimator                                         ML
#>   Optimization method                           NLMINB
#>   Number of model parameters                        21
#> 
#>   Number of observations                           301
#> 
#> Model Test User Model:
#>                                               Standard      Scaled
#>   Test Statistic                                85.306      87.132
#>   Degrees of freedom                                24          24
#>   P-value (Chi-square)                           0.000       0.000
#>   Scaling correction factor                                  0.979
#>     Yuan-Bentler correction (Mplus variant)                       
#> 
#> Model Test Baseline Model:
#> 
#>   Test statistic                               918.852     880.082
#>   Degrees of freedom                                36          36
#>   P-value                                        0.000       0.000
#>   Scaling correction factor                                  1.044
#> 
#> User Model versus Baseline Model:
#> 
#>   Comparative Fit Index (CFI)                    0.931       0.925
#>   Tucker-Lewis Index (TLI)                       0.896       0.888
#>                                                                   
#>   Robust Comparative Fit Index (CFI)                         0.930
#>   Robust Tucker-Lewis Index (TLI)                            0.895
#> 
#> Loglikelihood and Information Criteria:
#> 
#>   Loglikelihood user model (H0)              -3737.745   -3737.745
#>   Scaling correction factor                                  1.133
#>       for the MLR correction                                      
#>   Loglikelihood unrestricted model (H1)      -3695.092   -3695.092
#>   Scaling correction factor                                  1.051
#>       for the MLR correction                                      
#>                                                                   
#>   Akaike (AIC)                                7517.490    7517.490
#>   Bayesian (BIC)                              7595.339    7595.339
#>   Sample-size adjusted Bayesian (SABIC)       7528.739    7528.739
#> 
#> Root Mean Square Error of Approximation:
#> 
#>   RMSEA                                          0.092       0.093
#>   90 Percent confidence interval - lower         0.071       0.073
#>   90 Percent confidence interval - upper         0.114       0.115
#>   P-value H_0: RMSEA <= 0.050                    0.001       0.001
#>   P-value H_0: RMSEA >= 0.080                    0.840       0.862
#>                                                                   
#>   Robust RMSEA                                               0.092
#>   90 Percent confidence interval - lower                     0.072
#>   90 Percent confidence interval - upper                     0.114
#>   P-value H_0: Robust RMSEA <= 0.050                         0.001
#>   P-value H_0: Robust RMSEA >= 0.080                         0.849
#> 
#> Standardized Root Mean Square Residual:
#> 
#>   SRMR                                           0.065       0.065
#> 
#> Parameter Estimates:
#> 
#>   Standard errors                             Sandwich
#>   Information bread                           Observed
#>   Observed information based on                Hessian
#> 
#> Latent Variables:
#>                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
#>   visual =~                                                             
#>     x1                1.000                               0.900    0.772
#>     x2                0.554    0.132    4.191    0.000    0.498    0.424
#>     x3                0.729    0.141    5.170    0.000    0.656    0.581
#>   textual =~                                                            
#>     x4                1.000                               0.990    0.852
#>     x5                1.113    0.066   16.946    0.000    1.102    0.855
#>     x6                0.926    0.061   15.089    0.000    0.917    0.838
#>   speed =~                                                              
#>     x7                1.000                               0.619    0.570
#>     x8                1.180    0.130    9.046    0.000    0.731    0.723
#>     x9                1.082    0.266    4.060    0.000    0.670    0.665
#> 
#> Covariances:
#>                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
#>   visual ~~                                                             
#>     textual           0.408    0.099    4.110    0.000    0.459    0.459
#>     speed             0.262    0.060    4.366    0.000    0.471    0.471
#>   textual ~~                                                            
#>     speed             0.173    0.056    3.081    0.002    0.283    0.283
#> 
#> Variances:
#>                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
#>    .x1                0.549    0.156    3.509    0.000    0.549    0.404
#>    .x2                1.134    0.112   10.135    0.000    1.134    0.821
#>    .x3                0.844    0.100    8.419    0.000    0.844    0.662
#>    .x4                0.371    0.050    7.382    0.000    0.371    0.275
#>    .x5                0.446    0.057    7.870    0.000    0.446    0.269
#>    .x6                0.356    0.047    7.658    0.000    0.356    0.298
#>    .x7                0.799    0.097    8.222    0.000    0.799    0.676
#>    .x8                0.488    0.120    4.080    0.000    0.488    0.477
#>    .x9                0.566    0.119    4.768    0.000    0.566    0.558
#>     visual            0.809    0.180    4.486    0.000    1.000    1.000
#>     textual           0.979    0.121    8.075    0.000    1.000    1.000
#>     speed             0.384    0.107    3.596    0.000    1.000    1.000
#> 
#> R-Square:
#>                    Estimate
#>     x1                0.596
#>     x2                0.179
#>     x3                0.338
#>     x4                0.725
#>     x5                0.731
#>     x6                0.702
#>     x7                0.324
#>     x8                0.523
#>     x9                0.442

# Get nice fit indices with the `rempsyc::nice_table` integration
nice_fit(fit.cfa, nice_table = TRUE)

SEM example

Note that latent variables have been defined above, so we can reuse them as is, without having to redefine them.

# Define our other variables
M <- "visual"
IV <- c("ageyr", "grade")
DV <- c("speed", "textual")

# Define our lavaan lists
mediation <- list(speed = M, textual = M, visual = IV)
regression <- list(speed = IV, textual = IV)
covariance <- list(speed = "textual", ageyr = "grade")

# Define indirect effects object
indirect <- list(IV = IV, M = M, DV = DV)

# Write the model, and check it
model <- write_lavaan(
  mediation = mediation,
  regression = regression,
  covariance = covariance,
  indirect = indirect,
  latent = latent,
  label = TRUE
)
cat(model)
#> ##################################################
#> # [-----Latent variables (measurement model)-----]
#> 
#> visual =~ x1 + x2 + x3
#> textual =~ x4 + x5 + x6
#> speed =~ x7 + x8 + x9
#> 
#> ##################################################
#> # [-----------Mediations (named paths)-----------]
#> 
#> speed ~ visual_speed*visual
#> textual ~ visual_textual*visual
#> visual ~ ageyr_visual*ageyr + grade_visual*grade
#> 
#> ##################################################
#> # [---------Regressions (Direct effects)---------]
#> 
#> speed ~ ageyr + grade
#> textual ~ ageyr + grade
#> 
#> ##################################################
#> # [------------------Covariances-----------------]
#> 
#> speed ~~ textual
#> ageyr ~~ grade
#> 
#> ##################################################
#> # [--------Mediations (indirect effects)---------]
#> 
#> ageyr_visual_speed := ageyr_visual * visual_speed
#> ageyr_visual_textual := ageyr_visual * visual_textual
#> grade_visual_speed := grade_visual * visual_speed
#> grade_visual_textual := grade_visual * visual_textual
fit.sem <- sem(model, data = HolzingerSwineford1939)
# Get regression parameters and make pretty with `rempsyc::nice_table`
lavaan_reg(fit.sem, nice_table = TRUE, highlight = TRUE)

# Get covariances/correlations and make them pretty with 
# the `rempsyc::nice_table` integration
lavaan_cor(fit.sem, nice_table = TRUE)

# Get nice fit indices with the `rempsyc::nice_table` integration
fit_table <- nice_fit(list(fit.cfa, fit.sem), nice_table = TRUE)
fit_table

# Save fit table to Word!
flextable::save_as_docx(fit_table, path = "fit_table.docx")
# Note that it will also render to PDF in an `rmarkdown` document
# with `output: pdf_document`, but using `latex_engine: xelatex`
# is necessary when including Unicode symbols in tables like with
# the `nice_fit()` function.

# Let's get the user-defined (e.g., indirect) effects only and make it pretty
# with the `rempsyc::nice_table` integration
lavaan_defined(fit.sem, nice_table = TRUE)

# Plot our model
nice_lavaanPlot(fit.sem)

# Alternative way to plot
mylayout <- data.frame(
  IV = c("", "x1", "grade", "", "ageyr", "", ""),
  M = c("", "x2", "", "visual", "", "", ""),
  DV = c("", "x3", "textual", "", "speed", "", ""),
  DV.items = c(paste0("x", 4:6), "", paste0("x", 7:9))
) |>
  as.matrix()
mylayout
#>      IV      M        DV        DV.items
#> [1,] ""      ""       ""        "x4"    
#> [2,] "x1"    "x2"     "x3"      "x5"    
#> [3,] "grade" ""       "textual" "x6"    
#> [4,] ""      "visual" ""        ""      
#> [5,] "ageyr" ""       "speed"   "x7"    
#> [6,] ""      ""       ""        "x8"    
#> [7,] ""      ""       ""        "x9"
nice_tidySEM(fit.sem, layout = mylayout, label_location = 0.7)

ggplot2::ggsave("my_semPlot.pdf", width = 6, height = 6, limitsize = FALSE)

Final note

This is an experimental package in a very early stage. Any feedback or feature request is appreciated, and the package will likely change and evolve over time based on community feedback. Feel free to open an issue or discussion to share your questions or concerns. And of course, please have a look at the other tutorials to discover even more cool features: https://lavaanExtra.remi-theriault.com/articles/

Support me and this package

Thank you for your support. You can support me and this package here: https://github.com/sponsors/rempsyc

lavaanExtra's People

Contributors

buedenbender, ethanbass, jamesuanhoro, rempsyc


lavaanExtra's Issues

non sequitur in workflow tutorial

For review: openjournals/joss-reviews#5701

In https://lavaanextra.remi-theriault.com/articles/fullworkflow.html
the CFA example ends by removing items with the lowest loadings in order to improve fit. This suggestion is provided without any justification or citation that it is a good idea. Unreliable items can still be included in a model that accurately depicts their relationships with common factors (including method factors that might explain correlated residuals). Because this package is apparently targeted to "beginners" (in the Statement of Need), I would recommend excluding any example of a workflow that does not demonstrate best practices.

name arguments in write_lavaan() examples

For review: openjournals/joss-reviews#5701

The order of arguments in write_lavaan() is rather arbitrary. That is not a bad thing (I see no reason why any particular order would be favored). But the vignettes sometimes include examples where the model components are passed without naming the arguments:
https://lavaanextra.remi-theriault.com/articles/fullworkflow.html#sem-example

This might give readers the false impression that they can pass them in any order.

lavaan_ind() is actually more general

For review: openjournals/joss-reviews#5701

lavaan_ind() only selects rows with $op == ":=", which can be any user-defined parameters, not merely indirect effects. This could have strange consequences for labeling when indirect effects aren't the only type of user-defined parameter in a model (e.g., when slopes differ across groups):

fit <- sem(' x1 ~ c(a, b)*x2 ', data = HolzingerSwineford1939,
           group = "school", constraints = 'moderated_slope := a - b')
lavaan_ind(fit)

It would be wise to either (a) include a more thorough check for selecting only the rows of parameters that are indirect effects, or (b) to rename the function to reflect it being more general. There could be an argument to request any specific character (your example can use your arrow: "\u2192") to replace underscores, or even accept a vector of replacements if they are not all the same.
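Option (a) could be sketched as a heuristic filter: in `lavaan::parameterEstimates()` output, user-defined rows have `op == ":="` with the defining expression in the `rhs` column, so keeping only rows whose expression is a product of labels roughly isolates indirect effects. The data frame below mimics a few columns of that output (mock data, not a fitted model), and the heuristic is illustrative, not lavaanExtra code.

```r
# A rough sketch of option (a): among user-defined parameters (op == ":="),
# keep only those whose defining expression is a product of labels — a
# heuristic for indirect effects. `pe` mimics parameterEstimates() columns.
pe <- data.frame(
  lhs = c("ageyr_visual_speed",        "moderated_slope", "visual"),
  op  = c(":=",                        ":=",              "=~"),
  rhs = c("ageyr_visual*visual_speed", "a-b",             "x1"),
  stringsAsFactors = FALSE
)

indirect_rows <- pe[pe$op == ":=" & grepl("*", pe$rhs, fixed = TRUE), ]
indirect_rows$lhs
#> [1] "ageyr_visual_speed"
```

Note that the difference `moderated_slope := a - b` from the example above is correctly excluded, while a product-of-labels indirect effect is retained.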

nice_fit() error when estimating lavaan via tidySEM

The nice_fit() function works fine when the model is estimated directly with lavaan's cfa() or sem().
However, when using tidySEM's estimate_lavaan(), which also produces a lavaan object, nice_fit() returns this error:

Error in vapply(match.call(expand.dots = FALSE)$..., as.character, FUN.VALUE = "character") : 
  values must be length 1,
 but FUN(X[[1]]) result is length 3

I wonder where the issue is, because I was able to reproduce the same lavaan fit object with pure lavaan and with tidySEM.

clarify target audience

For review: openjournals/joss-reviews#5701

Regarding the paper:

I don't think the Statement of Need "clearly identifies ... who the target audience is", which is on the checklist. It refers to "beginners" and "advanced users", without clarifying whether these are beginners or advanced users of SEM, of R, or of lavaan in particular. After going through all of the examples, I can't find a reason to recommend it to my students who are learning SEM. The only reason I would expect to use it myself (or recommend it to colleagues) would be for the nice looking tables, since the figures are simply wrappers around functions already available in other packages.

write_lavaan(mediation=) is unnecessary

For review: openjournals/joss-reviews#5701

write_lavaan() internally handles the mediation= and regression= arguments identically, except that the label= argument is additionally passed to process_vars() for mediation= components.

    if (!is.null(mediation)) {
        mediation <- process_vars(mediation, symbol = "~", label = label, 
            title = "[-----------Mediations (named paths)-----------]")
    }
    if (!is.null(regression)) {
        regression <- process_vars(regression, symbol = "~", 
            title = "[---------Regressions (Direct effects)---------]")
    }

I assume the default in process_vars() is label = FALSE, but that function does not appear to be from this package (nor did a ?? query return anything).
This is perfectly consistent with the help-page description, but I had to inspect the source code to understand why it was necessary. It does not appear to be necessary, since one could simply pass the same information from mediation to the regression argument, if the source code merely called regression <- process_vars(regression, symbol = "~", label = label, ...) once. The examples about mediation already show users that the label= argument is necessary to define indirect effects.

I can't imagine one, but is there some special scenario that includes regression paths in an SEM that are not part of the indirect effects of interest? Maybe covariates whose effects are controlled for but not of interest? But those can always be ignored in a table of results. To exclude them from indirect effects, the label= argument could be a character vector of variable names to include (implying TRUE when not empty or NULL).
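The suggestion above, that mediation and regression information could be passed through a single argument, can be sketched with plain list handling: both lists map each DV name to its predictors, so they merge cleanly. The merge step below is illustrative base R, not lavaanExtra code.

```r
# Sketch of the reviewer's point: the mediation and regression lists carry
# the same kind of information (DV name -> predictors), so they could be
# merged and supplied through one argument. Illustrative base R only.
mediation  <- list(speed = "visual", textual = "visual",
                   visual = c("ageyr", "grade"))
regression <- list(speed = c("ageyr", "grade"),
                   textual = c("ageyr", "grade"))

merged <- mediation
for (dv in names(regression)) {
  merged[[dv]] <- union(merged[[dv]], regression[[dv]])
}
merged$speed
#> [1] "visual" "ageyr"  "grade"
```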

other output formats for tables

For review: openjournals/joss-reviews#5701

I love the table formats! I would like to have them accessible for my LaTeX documents, since I no longer use Word (for the same DRY and modularity principles your paper mentions). The semTable package provides only LaTeX and HTML output (or saved as a CSV file) for tables of lavaan output. The knitr::kable() function does provide LaTeX or Markdown formats (the latter can also be rendered as Word), but I have to specify all the table cells myself (rather than your nice row/column headers).

So this is a feature request, perhaps not doable in the time it takes for a review. It looks like the flextable package does not output LaTeX, but does output Markdown. There might be an easy way to at least make that available. RMarkdown could output a LaTeX file to capture it from there...

lavaan_reg() defaults

For review: openjournals/joss-reviews#5701

lavaan_reg() only allows for reporting unstandardized or standardized coefficients, the latter being reported with NHSTs based on delta-method SEs. This does not conform to the recommended practice of reporting unstandardized coefficients with their NHSTs (SEs, CIs, Wald z tests; McDonald & Ho, 2002, p. 76).

APA recommends reporting a standardized measure of effect size with any NHST, and the standardized coefficient satisfies this without any need to report a redundant NHST for the standardized coefficient (which typically tests the same null hypothesis using different point and SE estimates). A more appropriate default table would report the parameterEstimates() output with the std.all column only used as an effect size (which is why it is reported that way in lavaan's summary() method).

However, Kelley & Preacher (2012) note that standardized estimates might also estimate a specific parameter of interest. They have sampling distributions, so reporting CIs for standardized coefficients is a good idea for the same reason it is good to provide them for unstandardized estimates.

One idea might be to have a test= argument in addition to estimate=, to indicate which NHST (and CI) to include. The default might be lavaan_reg(..., estimate=c("b","B"), test = "b") to request both estimates but only test the unstandardized. But it could get complicated figuring out ideal layouts for the 9 different combinations of either/or/both for estimate= and test=.

At which point the question becomes whether to keep the function simple (expecting it to satisfy all situations) or make it more flexible with options/arguments that give users enough control to publish according to their preferences rather than a package designer's.
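The default layout suggested above can be sketched by subsetting a `parameterEstimates(fit, standardized = TRUE)`-style data frame: NHST columns (SE, z, p, CI) for the unstandardized estimate only, with `std.all` appended as an effect size. The single row below uses mock values, not estimates from a fitted model.

```r
# Sketch of the suggested default: test the unstandardized estimate,
# report std.all only as an effect size. `pe` mimics the relevant
# columns of parameterEstimates(fit, standardized = TRUE); mock values.
pe <- data.frame(
  lhs = "speed", op = "~", rhs = "visual",
  est = 0.30, se = 0.08, z = 3.75, pvalue = 0.0002,
  ci.lower = 0.14, ci.upper = 0.46, std.all = 0.35
)

reg_table <- pe[pe$op == "~",
                c("lhs", "rhs", "est", "se", "z", "pvalue",
                  "ci.lower", "ci.upper", "std.all")]
names(reg_table)[names(reg_table) == "std.all"] <- "B (effect size)"
```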

claims about modular syntax approach

For review: openjournals/joss-reviews#5701

Regarding the paper:

The modular syntax is a cool idea, and I have used the same approach (applying paste() to lists) when coding simulations. But it does not do what the paper claims on p. 1 (lines 35-36):

the user only needs to change it once, at the appropriate location, and it will update future occurrences automatically since it relies on reusable components

This makes it sound like the model is a dynamically changing object, but the vignettes demonstrate that the model needs to be written again with different arguments (e.g., defining a new regression object here before running write_lavaan() again). The write_lavaan() function's main purpose is to paste() a lavaan operator between a list's names and its contents, along with some header formatting. So users have to remember the specific arguments to pass different lists to, rather than remembering the short list of operators listed on the ?lavaan::model.syntax help page, the JSS paper, the website's tutorials, etc. The examples make it pretty clear that every other aspect of model syntax needs to be written out (however long the right-hand side needs to be), except for the operator. So there is still plenty of opportunity for making typos in those lists.

Reusing script components (after typos are fixed) is indeed good practice, but modular syntax does not require using paste() the way write_lavaan() does. The advantage of paste() for quickly specifying a list of indicators (lines 37-38) is also not unique to lavaanExtra's vector-based approach. This has always been possible:

mod <- paste0("g =~ x", 1:9)
fit <- cfa(mod, data = HolzingerSwineford1939)

When teaching SEM to beginners, I sometimes use a modular approach by storing each model component as a single character string (which are more legible than the lists passed to write_lavaan) and concatenate the different components when fitting a model to data. For example, measurement and structural components can be stored in separate strings (e.g., mod.meas <- "..." and mod.reg <- "..."), then calling the "full SEM" with sem(c(mod.meas, mod.reg), data = ...). Or when I teach about invariance, I will specify in separate strings:

  1. the labeled loadings (sufficient for configural invariance)
  2. equality constraints on those labels (concatenated for metric invariance)
  3. equality-constrained intercepts (additionally concatenated for scalar invariance)
  4. equality-constrained residual variances (additionally concatenated for strict invariance)

These are all "reusable components" (line 36) that obey the DRY principle (line 29). The write_lavaan() function isn't really about modularity. It is about not writing lavaan syntax yourself, and instead specifying lists of righthand sides, named by lefthand sides. I think that is a matter of taste, and others may find it as useful as you do. But I don't see how there is any room to claim that write_lavaan() is necessary to achieve the advantages of modularity listed on the first couple pages. The main advantage of write_lavaan() that I can identify is the automated specification of indirect effects, at least in the case of simple and parallel (but not serial) mediation models.
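The string-based modular approach described above can be made concrete with plain lavaan syntax (variable names from HolzingerSwineford1939; the fitting calls are shown as comments since they require lavaan):

```r
# Measurement and structural components stored as separate strings,
# then concatenated when fitting the full SEM. Plain lavaan syntax,
# no write_lavaan() needed.
mod.meas <- "
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
"
mod.reg <- "
  textual ~ visual
"
full.sem <- c(mod.meas, mod.reg)

# The measurement-only CFA and the full SEM reuse the same component:
# fit.cfa <- lavaan::cfa(mod.meas, data = HolzingerSwineford1939)
# fit.sem <- lavaan::sem(full.sem, data = HolzingerSwineford1939)
```

A fix to `mod.meas` automatically carries over to both models, which is the same DRY benefit the paper claims for the list-based approach.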

Feature: nice_fit; write χ² over chi2

Hi Rémi,

I did not look into the code, but while experimenting with your package I noticed that the heading
of nice_table (inside nice_fit) reads "chi2" rather than χ².


I know from my flex_table1 function that this can be a pain, with R CMD check failing on
non-ASCII characters, so I might be able to help.

Try using the following string instead of Chi2
"\u03C7\u00B2"

Should result in χ²

Investigate x.boot extension for indirect effects

Christian wrote, about lavaanExtra,

I like the approach. Thanks for putting so much work into it! About the indirect effects: That's great and will probably reduce specification errors. I posted a while ago an extension of x.boot that (hopefully) automatically identifies and calculates all indirect effects: https://groups.google.com/g/lavaan/c/0RSsh4M6zQg/m/V2zKKV7FAwAJ A bit like Amos, but not limited to x->m->y. The extension is computationally intensive and not efficient. I haven't had the time or inclination to program a better solution yet. But maybe the two approaches can be combined? I don't know what that would look like exactly. It's just an idea.

  • Assess if and how that feature should be integrated.

cfa_fit_plot fails with the DWLS estimator

Hey, regarding our last correspondence:
I created an example dataset that runs fine with DWLS (and other estimators) using traditional lavaan; see below.

# 1) Load Packages & Settings ---------------------------------------------
# install.packages("lavaanExtra", repos = c(
#   rempsyc = "https://rempsyc.r-universe.dev",
#   CRAN = "https://cloud.r-project.org"))
# remotes::install_github("Buedenbender/datscience")

rm(list=ls())
pacman::p_load(lavaanExtra,tidyverse, lavaan,skimr,datscience)

# Settings
set.seed(21)
N <- 300 # sim. participants
k <- 10 # sim. variables

# 2) Simulate Likert-type Response format data ----------------------------
df <- tibble(ID = c(1:N))

# Create data for factor 1
for (i in c(1:round(k*0.7,0))) {
  df[paste0("item",i)] <- sample(1:5,N,replace=T, prob = c(.17,.5,.18,.1,.05)) 
}

# Create data for factor 2
for (i in c((round(k*0.7,0)+1):k)) {
  df[paste0("item",i)] <- sample(1:5,N,replace=T, prob = c(.0,.15,.25,.35,.25)) 
}

skim(df)
corstars(df)

# 3) Problem  -------------------------------------------------------------
# Create the model string
twofactor_model <- list(factor1 = paste0("item",c(1:6)),
                        factor2 = paste0("item",c(7:10)))

twofactor_model_lavaan <- write_lavaan(latent = twofactor_model)

# Fitting models with different estimators
for(esti in c("MLR","ML","DWLS","ULS","WLSM","WLSMV")){
  tmp_fit <- cfa(twofactor_model_lavaan,
                 data = df,
                 estimator = esti
  )
  assign(paste0("fit_",esti), value = tmp_fit)
}
rm(tmp_fit)

nice_fit(fit_ML,fit_MLR,fit_DWLS,fit_WLSM,fit_WLSMV,fit_ULS, nice_table = T)

Which works fine.

Problem

However, the same loop over the estimators fails for cfa_fit_plot. As in our previous conversation, this example showcases why it is preferable not to default to PDF output in this case. Maybe it would be worth adding an optional argument to make the output choosable.

# Remove all lavaan models
rm(list=grep("^fit_",ls(),value=T))
# Create the cfa models with cfa_fit_plot()
for(esti in c("MLR","ML","DWLS","ULS","WLSM","WLSMV")){
  tmp_fit <- cfa_fit_plot(twofactor_model_lavaan,
                 data = df,
                 estimator = esti
  )
  assign(paste0("lavvanExtra_fit_",esti), value = tmp_fit)
}

The loop cancels when it reaches DWLS (i.e., only three fit objects created with lavaanExtra appear in my environment).

It might, however, be the ULS estimator that breaks it; here is the error from the console:

Error in lav_options_set(opt) : 
  lavaan ERROR: missing="ml" is not allowed for estimator “uls”
In addition: Warning messages:
1: In lav_model_vcov(lavmodel = lavmodel, lavsamplestats = lavsamplestats,  :
  lavaan WARNING:
    The variance-covariance matrix of the estimated parameters (vcov)
    does not appear to be positive definite! The smallest eigenvalue
    (= -6.194973e-06) is smaller than zero. This may be a symptom that
    the model is not identified.
2: In lav_object_post_check(object) :
  lavaan WARNING: some estimated ov variances are negative
3: In lav_object_post_check(object) :
  lavaan WARNING: some estimated ov variances are negative
4: In lav_options_set(opt) :
  lavaan WARNING: information will be set to “expected” for estimator = “DWLS”
5: In lavaan::lavaan(model = model, data = data, estimator = estimator,  :
  lavaan WARNING:
    Model estimation FAILED! Returning starting values.
6: In lavaan::lavaan(model = model, data = data, estimator = estimator,  :
  lavaan WARNING: estimation of the baseline model failed.
7: In .local(object, ...) :
  lavaan WARNING: fit measures not available if model did not converge

To Dos:

  • Make the output format (PDF, RStudio device, ...) choosable via an argument/flag
  • Fix the errors for estimators other than ML and MLR

nice_fit() index cutoffs

For review: openjournals/joss-reviews#5701

When nice_fit(..., nice_table = TRUE), there is a bottom row of "ideal values", with a table footnote indicating they were "proposed by Schreiber (2017)". This is inaccurate in 2 ways:

  1. Schreiber (2017) did not propose any cutoffs, but rather cited the source of fit-index cutoffs as Hu & Bentler (1999), who actually suggested RMSEA < .06, whereas your table suggests both < .05 and a range extending to .08, which originate from different sources:
    • Browne & Cudeck (1992) proposed RMSEA < .05 indicates close fit, RMSEA < .08 indicates reasonable fit, and RMSEA > .10 indicates poor fit
    • MacCallum, Browne, & Sugawara (1996) proposed RMSEA between .08–.10 indicates mediocre fit
  2. Schreiber (2017, p. 640) explicitly did not advocate the cutoff values in the table:

Many researchers grew up with the combination suggestion of SRMR less than or equal to .08 and CFI greater than or equal to .95 (Hu & Bentler, 1999). This has not worked out in simulations (Sivo et al., 2006).

If anything, Schreiber's very next paragraph "proposed" using SRMR < .10 as a cutoff.

I do not recommend including cutoffs in the table, as doing so would perpetuate their misuse. Fit indices are not test statistics, and their suggested cutoffs are not critical values associated with known Type I error rates. Numerous simulation studies have shown how poorly cutoffs perform in model selection, including one of my own. Instead of test statistics, fit indices were designed to be measures of effect size (practical significance), which complement the chi-squared test of statistical significance. The range of RMSEA interpretations above is more reminiscent of the range of small/medium/large effect sizes proposed by Cohen for use in power analyses, which are as arbitrary as alpha levels, but at least they better respect the idea that (mis)fit is a matter of magnitude, not nearly so simple as "perfect or imperfect."

If you insist on including cutoffs in the table, I recommend:

  • including multiple suggestions (e.g., in multiple rows), with multiple subscripts that accurately link each suggestion to its original proposal. For example, Bentler & Bonett (1980) proposed incremental fit indices > .90 were acceptable.
  • calling them "suggested cutoffs" instead of "ideal values".

Indirect and Total Effect calculations

Automated calculation of indirect and total effects would be very helpful in the time-series and panel-data cases. For example, consider NLSY data from 1997 to 2002 on cigarette use and drinking behavior, where the future depends on the past and effects 'propagate' over time along the AR and CL paths of a cross-lagged panel model. The total effects from the past to the future quickly blow up: try manually coding the effects from 1997 to 2002 along all required AR and CL path combinations (I've only coded the 1997 --> 1999 effects as examples here, and you can download the data here):

library(lavaan)
nlsy <- read.csv("NLSY.csv", header = TRUE, na.strings = c("", ".", "NA", "-999"))

ourModel <- '
  # AR terms
  cig02 ~ ar1_c*cig01
  cig01 ~ ar1_c*cig00
  cig00 ~ ar1_c*cig99
  cig99 ~ ar1_c*cig98
  cig98 ~ ar1_c*cig97
  drink02 ~ ar1_d*drink01
  drink01 ~ ar1_d*drink00
  drink00 ~ ar1_d*drink99
  drink99 ~ ar1_d*drink98
  drink98 ~ ar1_d*drink97
  
  # CL effect
  drink02 ~ cl1_dc*cig01
  drink01 ~ cl1_dc*cig00
  drink00 ~ cl1_dc*cig99
  drink99 ~ cl1_dc*cig98
  drink98 ~ cl1_dc*cig97
  cig02 ~ cl1_cd*drink01
  cig01 ~ cl1_cd*drink00
  cig00 ~ cl1_cd*drink99
  cig99 ~ cl1_cd*drink98
  cig98 ~ cl1_cd*drink97
  
  # Impulse responses
  c97.c98 := ar1_c
  c97.c99 := ar1_c^2 + cl1_dc*cl1_cd
  d97.d98 := ar1_d
  d97.d99 := ar1_d^2 + cl1_cd*cl1_dc
  c97.d98 := cl1_dc
  c97.d99 := cl1_dc*ar1_d + ar1_c*cl1_dc
  d97.c98 := cl1_cd
  d97.c99 := cl1_cd*ar1_c + ar1_d*cl1_cd
'
fit <- sem(ourModel, data = nlsy, mimic = "Mplus", estimator = "MLR")
summary(fit, fit.measures = TRUE, standardized = TRUE, rsquare = TRUE)

library(semTools)
monteCarloCI(fit, nRep = 10000, fast = TRUE, level = .95, plot = TRUE)
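These hand-coded products can be generated mechanically: stack the AR and CL coefficients into a 2 × 2 transition matrix and raise it to the k-th power, and each entry of the result is a k-lag total effect summed over all AR/CL path combinations. A minimal base-R sketch (the coefficient values below are made up for illustration, not NLSY estimates):

```r
# Hypothetical one-lag coefficients (illustrative values only)
ar1_c <- 0.5; ar1_d <- 0.4; cl1_dc <- 0.2; cl1_cd <- 0.1

# Rows = outcome at time t, columns = predictor at time t - 1
A <- matrix(c(ar1_c, cl1_cd,
              cl1_dc, ar1_d),
            nrow = 2, byrow = TRUE,
            dimnames = list(c("cig", "drink"), c("cig", "drink")))

A2 <- A %*% A         # two-lag total effects (e.g., 1997 -> 1999)
A2["cig", "cig"]      # equals ar1_c^2 + cl1_cd * cl1_dc
A2["drink", "cig"]    # equals cl1_dc * ar1_c + ar1_d * cl1_dc
```

Repeating the multiplication up to the 5th power — e.g., Reduce(`%*%`, rep(list(A), 5)) — gives the full 1997 --> 2002 total effects in one line, which shows how quickly the number of path combinations explodes when coded by hand.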

link to issues in vignette

For review: openjournals/joss-reviews#5701

At the bottom of this vignette, you state

If you experience any issues with other scenarios, please open a GitHub issue with your example

Because few users will be familiar with how GitHub works (or the fact that the link to the repository is the little symbol at the top-right of the page), it would be helpful to make that text a link:

please open a [GitHub issue](https://github.com/rempsyc/lavaanExtra/issues) with your example

diagonal elements in lavaan_cov() and lavaan_cor()

For review: openjournals/joss-reviews#5701

The separate lavaan_cov() and lavaan_cor() functions seem inconsistent with having a single lavaan_reg() function whose estimate= argument selects (un)standardized coefficients. In fact, the output of lavaan_cor() is redundant with any covariances printed by lavaan_cov(..., estimate = "B"). It might be more efficient to make the following lines from lavaan_cor() conditional on an argument like diag = FALSE, so the variances are dropped only if (!diag).

not.cor <- which(x$lhs == x$rhs) # Why call this "not.cor" when it means the opposite?
x <- x[-not.cor, ]               # This looks like "not not.cor" when you mean "not cor"

Of course, the complication is what symbols to use. It makes sense for lavaan_cor() to use r for correlations, but that is also what some of the standardized values from lavaan_cov() are (yet they are rather unfortunately labeled "b", despite not being regression slopes). If you wanted to segment out the variances, you could:

  • reserve lavaan_cov() to be only for off-diagonal elements, and it returns what lavaan_cor() does when estimate = "B" (or perhaps more appropriately for this functionality, when estimate = "r", and estimate = "sigma" could indicate unstandardized covariances, triggering rempsyc::nice_table() to use a \sigma column header).
  • The diagonal elements could be returned by a separate function (e.g., lavaan_var()), which could be (unstandardized) variances when estimate = "sigma" (with a \sigma^2 column header when nice_table=TRUE), but setting estimate = "r2" or "rsq" could be the flag for standardized values (which are the proportion of total variance, so 1 minus R^2). Or even better, you could calculate 1 minus the standardized variances to actually return R^2 (using that as the table header when nice_table=TRUE).
    • Granted, you can extract rows with op == "r2" from parameterEstimates(), but they don't have p values. The test for variances from standardizedSolution() is for the null hypothesis that the residual variance = 0, which would imply the null hypothesis that R^2 = 1. That is probably not very useful. Instead, it would be trivial for you to calculate a new p value to test whether the standardized variance = 1, which would imply R^2 = 0 (probably a more sensible default test). That only requires calculating a new Wald z statistic as (est - 1)/SE and using pnorm() to get a one-tailed p value (since it can only go in one direction).
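That calculation takes only a couple of lines of base R. In the sketch below, est and se are hypothetical stand-ins for the est.std and se columns that standardizedSolution() would return for one residual variance:

```r
# Hypothetical stand-ins for standardizedSolution() output for one residual
est <- 0.70   # standardized residual variance, i.e., 1 - R^2
se  <- 0.12   # its standard error

z <- (est - 1) / se   # Wald z for H0: standardized variance = 1 (R^2 = 0)
p <- pnorm(z)         # one-tailed, since the variance can only fall below 1
```

Here z = -2.5 and p ≈ .006, so the (hypothetical) R^2 of .30 would be judged significantly greater than zero.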

replace "special cases" with a write_lavaan() vignette

For review: openjournals/joss-reviews#5701

The Special Cases vignette is not very well motivated, and uses only one example, which makes the floating menu on the right seem unnecessary.

You might consider making a write_lavaan.Rmd vignette that describes each write_lavaan() argument in a separate section, then simply link to examples of its use in other vignettes/examples. For example:

  • Example 4.6 already illustrates (in)equality constraints
  • Example 5 illustrates user-defined parameters
  • The ML-SEM example illustrates the necessity of write_lavaan(..., custom=)
  • You could reproduce the examples from lavaan's mean-structures tutorial in the examples.Rmd vignette, which illustrate the use of write_lavaan(..., intercepts=), and then link to that in the write_lavaan.Rmd vignette
  • There is no write_lavaan(..., thresholds=) argument, so currently specifying those would require the custom= argument as well. This would be a good place to show users that.

This would then be a more exhaustive documentation for the write_lavaan() function, which you can link to in See also on the write_lavaan.Rd help page.

?lavaanExtra in README doesn't work

For review: openjournals/joss-reviews#5701

In the GitHub repository's instructions to access the package's full list of functions, the following R syntax is given:

library(lavaanExtra)
?lavaanExtra

But the /man directory does not have a lavaanExtra.Rd file (nor does there appear to be an alias pointing to it). I'm not sure if @rempsyc intended to have a documentation page for the package as a whole; if not, then the README could instead direct users with this syntax:

help(package = "lavaanExtra")

Custom labels for `nice_fit()`

Hi Rémi,

this package has really grown on me and I think it has some nice unique benefits. A small suggestion for an improvement: I would like the ability to supply custom names for the nice_fit() table.

Currently I am using this workaround (although naming your objects/variables like this is really not proper 🤔):

# Fitting the models -----------------------------------------------------
`CFA 1 Factor` <- cfa(model = model1, data = dataset, estimator = "DWLS")
`CFA 2 Factor` <- cfa(model = model2, data = dataset, estimator = "DWLS")
`CFA 3 Factor` <- cfa(model = model3, data = dataset, estimator = "DWLS")


# Extracting the metrics --------------------------------------------------
nice_fit(`CFA 1 Factor`, `CFA 2 Factor`, `CFA 3 Factor`, nice_table = TRUE)

Tasks:

  • Add a label argument to the function
  • Add a unit test checking that length(label) equals the number of lavaan/fit objects
  • Integrate the labels in the table

Priority:

Very low, it's just a convenience thing. I could also edit the column in Word, or just use underscores in the object names.

`nice_fit()`: Error in `[.data.frame`(x, keep): undefined columns selected

A user reported the following bug:

library(lavaan)
library(lavaanExtra)

dat <- data.frame(z = sample(c(0, 1), 100, replace = TRUE),
                  x = sample(1:7, 100, replace = TRUE),
                  y = sample(1:5, 100, replace = TRUE))
mod <- '
y ~ a*x
z ~ b*y + c*x
ind := a*b
'
fit <- sem(mod, dat, ordered="z")
nice_fit(fit)
#> Error in `[.data.frame`(x, keep): undefined columns selected

fit <- sem(mod, dat)
nice_fit(fit)
#>     Model chisq df chi2.df pvalue cfi tli rmsea rmsea.ci.lower rmsea.ci.upper
#> 1 Model 1     0  0     NaN     NA   1   1     0              0              0
#>   srmr     aic     bic
#> 1    0 498.249 511.274

Created on 2024-06-20 with reprex v2.1.0

demonstrating the value of the plot wrappers

For review: openjournals/joss-reviews#5701

Regarding the paper:

Beginning on p. 4, there is a comparison of the default plots from lavaanPlot() and nice_lavaanPlot(), but the source code looks like it only overrides a few defaults. There is nothing inherently better or worse about a "naked" path diagram. It would be helpful for users to see how much time nice_lavaanPlot() saves them by providing the lavaanPlot() syntax that would yield the same figure as nice_lavaanPlot() produces by default.

Likewise for showing the tidySEM::graph_sem() that yields the default nice_tidySEM() plot. Judging from the source code, I imagine nice_tidySEM() saves quite some space for users!

intuition in Statement of Need

For review: openjournals/joss-reviews#5701

Regarding the paper:

The Statement of Need opens by stating lavaan's popularity, yet claims that the lavaan syntax is not very intuitive for beginners, which contradicts the purpose of the lavaan package as stated numerous times by Rosseel (2012, p. 3), whose conclusion reads: "One of the main attractions of lavaan is its intuitive and easy-to-use model syntax" (p. 33). The degree to which a user finds it intuitive is clearly subjective (e.g., it varies widely across my SEM students every year), but the claim seems altogether unnecessary, given that the second paragraph does not describe the modular alternative syntax as being more intuitive (which I don't think it is). The value of the alternative syntax seems to be the flexibility afforded by its modularity (for updating or appending models), which is not necessarily more intuitive.

Please make `DiagrammeRsvg` optional for running tests

Currently, tests will fail if DiagrammeRsvg is not installed:

--->  Testing R-lavaanExtra
Executing:  cd "/opt/local/var/macports/build/_opt_PPCSnowLeopardPorts_R_R-lavaanExtra/R-lavaanExtra/work/lavaanExtra-0.2.0" && /opt/local/bin/R CMD check ./lavaanExtra_0.2.0.tar.gz --no-manual --no-build-vignettes 
* using log directory ‘/opt/local/var/macports/build/_opt_PPCSnowLeopardPorts_R_R-lavaanExtra/R-lavaanExtra/work/lavaanExtra-0.2.0/lavaanExtra.Rcheck’
* using R version 4.4.0 (2024-04-24)
* using platform: powerpc-apple-darwin10.0.0d2 (32-bit)
* R was compiled by
    gcc-mp-13 (MacPorts gcc13 13.2.0_4+stdlib_flag) 13.2.0
    GNU Fortran (MacPorts gcc13 13.2.0_4+stdlib_flag) 13.2.0
* running under: OS X Snow Leopard 10.6
* using session charset: UTF-8
* using options ‘--no-manual --no-build-vignettes’
* checking for file ‘lavaanExtra/DESCRIPTION’ ... OK
* this is package ‘lavaanExtra’ version ‘0.2.0’
* package encoding: UTF-8
* checking package namespace information ... OK
* checking package dependencies ... NOTE
Package suggested but not available for checking: ‘DiagrammeRsvg’
* checking if this is a source package ... OK
* checking if there is a namespace ... OK
* checking for executable files ... OK
* checking for hidden files and directories ... OK
* checking for portable file names ... OK
* checking for sufficient/correct file permissions ... OK
* checking whether package ‘lavaanExtra’ can be installed ... OK
* checking installed package size ... OK
* checking package directory ... OK
* checking DESCRIPTION meta-information ... OK
* checking top-level files ... OK
* checking for left-over files ... OK
* checking index information ... OK
* checking package subdirectories ... OK
* checking code files for non-ASCII characters ... OK
* checking R files for syntax errors ... OK
* checking whether the package can be loaded ... OK
* checking whether the package can be loaded with stated dependencies ... OK
* checking whether the package can be unloaded cleanly ... OK
* checking whether the namespace can be loaded with stated dependencies ... OK
* checking whether the namespace can be unloaded cleanly ... OK
* checking whether startup messages can be suppressed ... OK
* checking dependencies in R code ... OK
* checking S3 generic/method consistency ... OK
* checking replacement functions ... OK
* checking foreign function calls ... OK
* checking R code for possible problems ... OK
* checking Rd files ... OK
* checking Rd metadata ... OK
* checking Rd cross-references ... OK
* checking for missing documentation entries ... OK
* checking for code/documentation mismatches ... OK
* checking Rd \usage sections ... OK
* checking Rd contents ... OK
* checking for unstated dependencies in examples ... OK
* checking files in ‘vignettes’ ... WARNING
Files in the 'vignettes' directory but no files in 'inst/doc':
  ‘example.Rmd’ ‘write_lavaan.Rmd’
* checking examples ... ERROR
Running examples in ‘lavaanExtra-Ex.R’ failed
The error most likely occurred in:

> ### Name: cfa_fit_plot
> ### Title: Fit and plot CFA simultaneously
> ### Aliases: cfa_fit_plot
> 
> ### ** Examples
> 
> ## Don't show: 
> if (requireNamespace("lavaan", quietly = TRUE) && requireNamespace("lavaanPlot", quietly = TRUE)) (if (getRversion() >= "3.4") withAutoprint else force)({ # examplesIf
+ ## End(Don't show)
+ x <- paste0("x", 1:9)
+ (latent <- list(
+   visual = x[1:3],
+   textual = x[4:6],
+   speed = x[7:9]
+ ))
+ 
+ HS.model <- write_lavaan(latent = latent)
+ cat(HS.model)
+ 
+ library(lavaan)
+ fit <- cfa_fit_plot(HS.model, HolzingerSwineford1939)
+ ## Don't show: 
+ }) # examplesIf
> x <- paste0("x", 1:9)
> (latent <- list(visual = x[1:3], textual = x[4:6], speed = x[7:9]))
$visual
[1] "x1" "x2" "x3"

$textual
[1] "x4" "x5" "x6"

$speed
[1] "x7" "x8" "x9"

> HS.model <- write_lavaan(latent = latent)
> cat(HS.model)
##################################################
# [-----Latent variables (measurement model)-----]

visual =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed =~ x7 + x8 + x9

> library(lavaan)
This is lavaan 0.6-18
lavaan is FREE software! Please report any bugs.
> fit <- cfa_fit_plot(HS.model, HolzingerSwineford1939)
lavaan 0.6-18 ended normally after 35 iterations

  Estimator                                         ML
  Optimization method                           NLMINB
  Number of model parameters                        21

  Number of observations                           301

Model Test User Model:
                                              Standard      Scaled
  Test Statistic                                85.306      87.132
  Degrees of freedom                                24          24
  P-value (Chi-square)                           0.000       0.000
  Scaling correction factor                                  0.979
    Yuan-Bentler correction (Mplus variant)                       

Model Test Baseline Model:

  Test statistic                               918.852     880.082
  Degrees of freedom                                36          36
  P-value                                        0.000       0.000
  Scaling correction factor                                  1.044

User Model versus Baseline Model:

  Comparative Fit Index (CFI)                    0.931       0.925
  Tucker-Lewis Index (TLI)                       0.896       0.888
                                                                  
  Robust Comparative Fit Index (CFI)                         0.930
  Robust Tucker-Lewis Index (TLI)                            0.895

Loglikelihood and Information Criteria:

  Loglikelihood user model (H0)              -3737.745   -3737.745
  Scaling correction factor                                  1.133
      for the MLR correction                                      
  Loglikelihood unrestricted model (H1)      -3695.092   -3695.092
  Scaling correction factor                                  1.051
      for the MLR correction                                      
                                                                  
  Akaike (AIC)                                7517.490    7517.490
  Bayesian (BIC)                              7595.339    7595.339
  Sample-size adjusted Bayesian (SABIC)       7528.739    7528.739

Root Mean Square Error of Approximation:

  RMSEA                                          0.092       0.093
  90 Percent confidence interval - lower         0.071       0.073
  90 Percent confidence interval - upper         0.114       0.115
  P-value H_0: RMSEA <= 0.050                    0.001       0.001
  P-value H_0: RMSEA >= 0.080                    0.840       0.862
                                                                  
  Robust RMSEA                                               0.092
  90 Percent confidence interval - lower                     0.072
  90 Percent confidence interval - upper                     0.114
  P-value H_0: Robust RMSEA <= 0.050                         0.001
  P-value H_0: Robust RMSEA >= 0.080                         0.849

Standardized Root Mean Square Residual:

  SRMR                                           0.065       0.065

Parameter Estimates:

  Standard errors                             Sandwich
  Information bread                           Observed
  Observed information based on                Hessian

Latent Variables:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
  visual =~                                                             
    x1                1.000                               0.900    0.772
    x2                0.554    0.132    4.191    0.000    0.498    0.424
    x3                0.729    0.141    5.170    0.000    0.656    0.581
  textual =~                                                            
    x4                1.000                               0.990    0.852
    x5                1.113    0.066   16.946    0.000    1.102    0.855
    x6                0.926    0.061   15.089    0.000    0.917    0.838
  speed =~                                                              
    x7                1.000                               0.619    0.570
    x8                1.180    0.130    9.046    0.000    0.731    0.723
    x9                1.082    0.266    4.060    0.000    0.670    0.665

Covariances:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
  visual ~~                                                             
    textual           0.408    0.099    4.110    0.000    0.459    0.459
    speed             0.262    0.060    4.366    0.000    0.471    0.471
  textual ~~                                                            
    speed             0.173    0.056    3.081    0.002    0.283    0.283

Variances:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
   .x1                0.549    0.156    3.509    0.000    0.549    0.404
   .x2                1.134    0.112   10.135    0.000    1.134    0.821
   .x3                0.844    0.100    8.419    0.000    0.844    0.662
   .x4                0.371    0.050    7.382    0.000    0.371    0.275
   .x5                0.446    0.057    7.870    0.000    0.446    0.269
   .x6                0.356    0.047    7.658    0.000    0.356    0.298
   .x7                0.799    0.097    8.222    0.000    0.799    0.676
   .x8                0.488    0.120    4.080    0.000    0.488    0.477
   .x9                0.566    0.119    4.768    0.000    0.566    0.558
    visual            0.809    0.180    4.486    0.000    1.000    1.000
    textual           0.979    0.121    8.075    0.000    1.000    1.000
    speed             0.384    0.107    3.596    0.000    1.000    1.000

R-Square:
                   Estimate
    x1                0.596
    x2                0.179
    x3                0.338
    x4                0.725
    x5                0.731
    x6                0.702
    x7                0.324
    x8                0.523
    x9                0.442

Error: Package `DiagrammeRsvg` required for this function..
  Please install it by running `install.packages("DiagrammeRsvg")`.
Execution halted
* checking for unstated dependencies in ‘tests’ ... OK
* checking tests ...
  Running ‘spelling.R’
  Running ‘testthat.R’
 ERROR
Running the tests in ‘tests/testthat.R’ failed.
Last 13 lines of output:
   5.   └─insight::check_if_installed(...)
  ── Error ('test-nice_lavaanPlot.R:72:3'): nice_lavaanPlot different sem model ──
  Error: Package `DiagrammeRsvg` required for this function..
    Please install it by running `install.packages("DiagrammeRsvg")`.
  Backtrace:
      ▆
   1. ├─testthat::expect_s3_class(...) at test-nice_lavaanPlot.R:72:3
   2. │ └─testthat::quasi_label(enquo(object), arg = "object")
   3. │   └─rlang::eval_bare(expr, quo_get_env(quo))
   4. └─lavaanExtra::nice_lavaanPlot(fit.sem2)
   5.   └─insight::check_if_installed(...)
  
  [ FAIL 6 | WARN 0 | SKIP 23 | PASS 32 ]
  Error: Test failures
  Execution halted
* checking for unstated dependencies in vignettes ... OK
* checking package vignettes ... WARNING
Directory 'inst/doc' does not exist.
Package vignettes without corresponding single PDF/HTML:
  ‘example.Rmd’
  ‘write_lavaan.Rmd’
* checking running R code from vignettes ...
  ‘example.Rmd’ using ‘UTF-8’... OK
  ‘write_lavaan.Rmd’ using ‘UTF-8’... OK
 OK
* checking re-building of vignette outputs ... SKIPPED
* DONE

Status: 2 ERRORs, 2 WARNINGs, 1 NOTE
See
  ‘/opt/local/var/macports/build/_opt_PPCSnowLeopardPorts_R_R-lavaanExtra/R-lavaanExtra/work/lavaanExtra-0.2.0/lavaanExtra.Rcheck/00check.log’
for details.

Command failed:  cd "/opt/local/var/macports/build/_opt_PPCSnowLeopardPorts_R_R-lavaanExtra/R-lavaanExtra/work/lavaanExtra-0.2.0" && /opt/local/bin/R CMD check ./lavaanExtra_0.2.0.tar.gz --no-manual --no-build-vignettes 
Exit code: 1

However, unfortunately, DiagrammeRsvg depends on V8, which is an unjustifiably heavy dependency to build and is also broken on some platforms.
