
synthdid's Introduction

synthdid: Synthetic Difference in Differences Estimation


This package implements the synthetic difference in differences (SDID) estimator for the average treatment effect in panel data, as proposed in Arkhangelsky et al. (2019). We observe matrices of outcomes Y and binary treatment indicators W that we think of as satisfying Yij = Lij + τij Wij + εij. Here τij is the effect of treatment on unit i at time j, and we estimate the average effect of treatment when and where it happened: the average of τij over the observations with Wij = 1. All treated units must begin treatment simultaneously, so W is a block matrix: Wij = 1 for i > N0 and j > T0 and zero otherwise, where N0 is the number of control units and T0 is the number of observation times before the onset of treatment. This applies, in particular, to the case of a single treated unit or a single treated period.

This package is currently in beta, and its functionality and interface are subject to change.


Installation

The current development version can be installed from source using devtools.

devtools::install_github("synth-inference/synthdid")

Example

library(synthdid)

# Estimate the effect of California Proposition 99 on cigarette consumption
data('california_prop99')
setup = panel.matrices(california_prop99)
tau.hat = synthdid_estimate(setup$Y, setup$N0, setup$T0)
se = sqrt(vcov(tau.hat, method='placebo'))
sprintf('point estimate: %1.2f', tau.hat)
sprintf('95%% CI (%1.2f, %1.2f)', tau.hat - 1.96 * se, tau.hat + 1.96 * se)
plot(tau.hat)

References

Dmitry Arkhangelsky, Susan Athey, David A. Hirshberg, Guido W. Imbens, and Stefan Wager. Synthetic Difference in Differences, 2019. [arxiv]

synthdid's People

Contributors

awhitefield8, davidahirshberg, erikcs, petratschuchnig, swager


synthdid's Issues

Confusion about synthdid_plot

What does the shaded red region at the bottom of the graph from synthdid_plot represent in the pre-treatment period?

event study graph similar to 4b in SDID Stata publication (page 25)

Thank you for this fantastic package. I noticed that the more recent Stata version of SDID also shows how to do an event study graph, which is often helpful (graph 4b on page 24, here: https://arxiv.org/abs/2301.11859 ).

I would love to reproduce this graph in R too. Before re-inventing the wheel, I was curious whether
a) there is any feature/plot in R that creates this automatically (it would also be a nice feature), or
b) someone else has written some code or an example?

Many thanks
NN

typo in jackknife V.hat computation?

You have:
V.hat <- (N - 1) * mean((est_jk - est)^2).

Shouldn't we be doing something like:
V.hat <- (N / (N - 1)) * mean((est_jk - est)^2),

or:
V.hat <- (1 / (N - 1)) * sum((est_jk - est)^2)?
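For reference, the standard delete-one jackknife variance estimator is

$$
\hat V_{\text{jack}} \;=\; \frac{N-1}{N}\sum_{i=1}^{N}\bigl(\hat\tau_{(-i)} - \bar{\hat\tau}_{(\cdot)}\bigr)^2,
\qquad
\bar{\hat\tau}_{(\cdot)} = \frac{1}{N}\sum_{i=1}^{N}\hat\tau_{(-i)},
$$

which in R notation equals (N - 1) * mean((est_jk - mean(est_jk))^2). The line quoted above centers the leave-one-out estimates at the full-sample estimate rather than at their own mean; whether that is intended is exactly the question here.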

How to interpret the result?

Hi - I ran the code for the California 99 example and got the result:
estimate: -15.60383
standard error: 9.830671
"95% CI (-34.87, 3.66)"

May I ask if this means the result is not statistically significant because the CI includes zero? Thanks!

Integer values for dates in synth did plot

Hi,
I made a graph representing my synthdid estimate, that looks as follows:
[attached image: synthdid plot with non-integer year labels on the horizontal axis]
Would it be possible to have the year taking integer values on the horizontal axis?
Many thanks.
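A minimal sketch of one way to do this, assuming tau.hat is a synthdid_estimate object: synthdid_plot() returns a ggplot object, so standard ggplot2 scales apply (the break sequence below is just an example; adjust it to your data's year range).

library(synthdid)
library(ggplot2)

p <- synthdid_plot(tau.hat)
# force integer year labels on the x axis
p + scale_x_continuous(breaks = seq(1970, 2000, by = 5))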

Multiple outcomes and panel.matrices

Hi all,

I would like to use synthdid_estimate with 10 or so outcome variables. Is there a way to use panel.matrices such that I can create a matrix with multiple outcomes? I'd like to be able to do something like...

synthdid_estimate(setup$Y1, setup$N0, setup$T0)
synthdid_estimate(setup$Y2, setup$N0, setup$T0)

without creating a bunch of different matrices. Any subsequent advice on how to loop/list synthdid_estimate would also be appreciated.

Thanks!

Minor documentation issues

I'm interested in using SDID in one of my current projects. I really appreciate you all providing the implementation in an R package!

I saw @davidahirshberg mentioned issues with the OSQP and SCS solvers. Is your recommendation to always use ECOS for SDID estimation?

I ran devtools::check() on the package and noticed a few places it was unhappy with the documentation or packaging. Would it be helpful if I submitted a pull request for these issues?
If so, do you want to add a license to the package?

Thanks!

synthdid_plot: adding headers?

Is there a way to add headers (DID, SC, and SDID) using synthdid_plot? I could do it manually in my text editor, but it would be nice to be able to do it in R itself.

Thanks!
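A minimal sketch, assuming did_estimate() and sc_estimate() share the synthdid_estimate() interface and that your version of synthdid_plot() accepts a named list of estimates (as in the package vignette), in which case the list names are used as panel headers:

tau.did  <- did_estimate(setup$Y, setup$N0, setup$T0)
tau.sc   <- sc_estimate(setup$Y, setup$N0, setup$T0)
tau.sdid <- synthdid_estimate(setup$Y, setup$N0, setup$T0)
estimates <- list('Diff-in-Diff'          = tau.did,
                  'Synthetic Control'      = tau.sc,
                  'Synthetic Diff-in-Diff' = tau.sdid)
synthdid_plot(estimates)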

Sensitivity to the scaling of variables

When I ran synthdid_estimate() with covariates, the estimated weights blew up to more than 1e+6. So I tested the function with a much simpler and reproducible example using the california_prop99 data.

First with two covariates x1 and x2 that range from 0 to 1.

library(synthdid)
library(data.table)
library(tidyverse)
library(abind)

#=== get the data ===#
data('california_prop99')

#=== generate two covariates ===#
data <- 
  data.table(california_prop99) %>% 
  .[, x1 := runif(nrow(.))] %>% 
  .[, x2 := runif(nrow(.))]

#=== set up ===#
setup <- 
  synthdid::panel.matrices(
    data,
    unit = 1,
    time = 2,
    outcome = 3,
    treatment = 4
  )

#=== create X 3-D array ===#
X_mat <- 
  data[, .(State, Year, x1, x2)] %>% 
  melt(id.var = c("State", "Year")) %>% 
  #=== dataset by variable ===#
  nest_by(variable) %>% 
  mutate(X = list(
    dcast(data.table(data), State ~ Year, value.var = "value") %>% 
    #=== order the observations to match that of sdid_setup$Y ===#
    .[data.table(State = rownames(setup$Y)), on = "State"] %>% 
    .[, State := NULL] %>% 
    as.matrix() 
  )) %>% 
  .$X %>% 
  #=== list of matrices to 3-D array of N * T * C ===#
  # C: number of covariates
  abind(along = 3) 

#=== estimate ===#
tau_hat <- synthdid_estimate(
  setup$Y, 
  setup$N0, 
  setup$T0,
  X = X_mat
)

#=== plot ===#
plot(tau_hat)

[screenshot: plot(tau_hat) output with covariates on the 0-1 scale]

So, this works.

Now, with one of the covariates ranging from 0 to 100:

set.seed(23894)

#=== generate two covariates ===#
data <- 
  data.table(california_prop99) %>% 
  .[, x1 := 100 * runif(nrow(.))] %>% 
  .[, x2 := runif(nrow(.))]

#=== set up ===#
setup <- 
  synthdid::panel.matrices(
    data,
    unit = 1,
    time = 2,
    outcome = 3,
    treatment = 4
  )

#=== create X 3-D array ===#
X_mat <- 
  data[, .(State, Year, x1, x2)] %>% 
  melt(id.var = c("State", "Year")) %>% 
  #=== dataset by variable ===#
  nest_by(variable) %>% 
  mutate(X = list(
    dcast(data.table(data), State ~ Year, value.var = "value") %>% 
    #=== order the observations to match that of sdid_setup$Y ===#
    .[data.table(State = rownames(setup$Y)), on = "State"] %>% 
    .[, State := NULL] %>% 
    as.matrix() 
  )) %>% 
  .$X %>% 
  #=== list of matrices to 3-D array of N * T * C ===#
  # C: number of covariates
  abind(along = 3) 

#=== estimate ===#
tau_hat <- synthdid_estimate(
  setup$Y, 
  setup$N0, 
  setup$T0,
  X = X_mat
)

#=== plot ===#
plot(tau_hat)

[screenshot: plot(tau_hat) output with the rescaled covariate]

If you check the weights,

attr(tau_hat, "weights")$vals[1:3]

[screenshot: output of attr(tau_hat, "weights")$vals[1:3]]

So it seems to me that sc.weight.fw.covariates() in solver.R is quite sensitive to the scaling of the covariates at the moment. It would be great if this could be fixed.
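A minimal workaround sketch (not part of the package): standardize each covariate slice of the N x T x C array before passing it to synthdid_estimate(), so the solver sees covariates on comparable scales.

X_std <- X_mat
for (k in seq_len(dim(X_mat)[3])) {
  # center and scale each covariate slice as a whole
  X_std[, , k] <- (X_mat[, , k] - mean(X_mat[, , k])) / sd(X_mat[, , k])
}
tau_hat_std <- synthdid_estimate(setup$Y, setup$N0, setup$T0, X = X_std)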

Some confusion about a weight

Hi,

I tried the california_prop99 example and it worked as in the paper.
However, a small question confuses me: I don't understand the meaning of the weights in the example. Using this dataset as an example, do they measure the extent to which the implementation of this policy affects different states? Does a higher weight mean a greater impact of the policy on that state? Thanks!

Implementation in Stata

Hi, is it possible to get this code working in Stata?
Alternatively, what would be the interpretation of the graph generated? Is this usable for DID analysis?

Thanks

QUESTION: Adding covariates/controls

Hi there!

Is it possible to add additional covariates to the estimators? For example, replicating Abadie's German reunification study, but using synthdid? I saw the Stata implementation here; there, one can achieve this with the covariates keyword.

thank you!

Staggered adoption

Hi!
Thanks for this useful package!
I have some difficulties estimating tau_hat and se for the staggered adoption case.
I read Athey, S. and Imbens, G. W., Design-based analysis in difference-in-differences settings with staggered adoption, Journal of Econometrics, 2021.
The authors suggest that researchers can compute a weighted average estimator over subsamples for each adoption period, but I didn't figure out how to implement that in R. (I am a total beginner in R and also new to GitHub.)
Do you have any suggestions? Thanks!

Retrieving estimated counterfactual outcome

Hi,

The plots generated by synthdid_plot() show treatment effect estimates, averaged out across all post-treatment periods. These estimates equal the difference between the observed outcome in the treated unit and its estimated counterfactual outcome. Is there a way to easily retrieve this counterfactual?

Many thanks!

Leda
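A minimal sketch, using only the objects from the README example and the relationship stated above (the estimate is the difference between the observed post-treatment outcome and its estimated counterfactual), so the average counterfactual can be backed out directly:

# treated block of the panel: rows after N0, columns after T0
N1 <- nrow(setup$Y) - setup$N0
T1 <- ncol(setup$Y) - setup$T0
treated.post.mean <- mean(setup$Y[setup$N0 + (1:N1), setup$T0 + (1:T1)])
# average estimated counterfactual outcome over the post-treatment period
counterfactual.mean <- treated.post.mean - as.numeric(tau.hat)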

Large estimates

After including controls, the level of my estimate jumps to somewhere in the realm of 6e+20. The outcome I am looking at is the variance of GDP, which is large, but not of that order of magnitude. Before controls it is a more reasonable 1e+06. Any help?

Removing arrow

Hello everyone,

I am working on plotting synthetic DIDs in R, but I find that the arrow is not really useful for my graph.

Does anyone know of an option to remove the arrow? At this point, after much research, it seems impossible to me, but I am sure someone has an idea.

Thanks a lot,

Flo
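A minimal sketch of one possible workaround, assuming the arrow is drawn by a segment layer in the ggplot object returned by synthdid_plot() (inspect p$layers to confirm which layer draws it in your version):

library(ggplot2)
p <- synthdid_plot(tau.hat)
# drop any layer whose geom inherits from GeomSegment (where arrows are drawn)
is.segment <- vapply(p$layers, function(l) inherits(l$geom, "GeomSegment"), logical(1))
p$layers <- p$layers[!is.segment]
p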

Summary(model) runs pretty slow

Hello, the summary function runs quite slowly if I do:

model <- synthdid_estimate(setup$Y, setup$N0, setup$T0)
summary(model)

panel.matrices error: no variation in treatment status

Hi,

I got the There is no variation in treatment status error when converting the data to matrices. But there is a treatment status change from 0 to 1 for some individuals, and also a control group with all 0 values in the treated column. I'm not sure why I'm still getting this error.

# Convert the data into a matrix
setup = panel.matrices(modified_data, unit = 'id', time = 'week_number', outcome = 'revenue', treatment = 'treated')

Any help is appreciated. Thx in advance.

permutation method for SDID

Hello, SDID community. On page 29 of the original SDID paper, the authors propose Algorithm 4 (Placebo Variance Estimation) to calculate a p-value when there are few treated units.

For Abadie's synthetic control method, a permutation method for calculating p-values was proposed: "A permutation distribution can be obtained by iteratively reassigning the treatment to the units in the donor pool and estimating 'placebo effects' in each iteration. Then, the permutation distribution is constructed by pooling the effect estimated for the treated unit together with placebo effects estimated for the units in the donor pool. The effect of the treatment on the unit affected by the intervention is deemed significant when its magnitude is extreme relative to the permutation distribution."

In my research, I calculated the p-value using both placebo variance estimation and a modified permutation method. The SDID method allows a gap between the SDID prediction and the true value, denoted as alpha in equation (1.2) on page 4. I take the SDID prediction and subtract this gap; then I can perform a permutation test as in Abadie's approach. See graphs 2 and 3, where I first plot All Employees, Arts, Entertainment, and Recreation in Louisiana together with its SDID prediction; in graph 3, I subtract the gap alpha from the SDID prediction and plot again.

I use SDID to estimate the impact of Hurricane Katrina on the labor market by sector. In the first attachment, LA_RE represents monthly employment in Real Estate; column 2 is the estimate and column 3 ("Group") is the end period. I set the treatment time to 2005-08-01. Using Algorithm 4 (Placebo Variance Estimation) I obtain the SE in column 4; if I divide column 2 by the SE, then aside from the first row it always produces a high t-value in a Gaussian framework. I then use Abadie's permutation method to find the p-value in column 10, and it looks good too.

I used this method successfully to detect impacts on employment in many sectors, but in one sector a problem arises. For All Employees, Arts, Entertainment, and Recreation in Louisiana, you can see from graphs 2 and 3 that there is a sharp decline, and from attachment 4 that, despite the standard error being low relative to the estimate in most rows, the p-value calculated by the permutation method is very high. The reason is that SDID can't find a very good fit: "SDID prediction minus gap" has a huge pre-treatment mean squared prediction error, ranging from 3 to 5, as shown in column 6 of attachment 4. In fact, using this method I found that for this variable SDID produced one of the highest pre-treatment MSEs. See attachment 5.

Here are my questions:
1. We can't just divide the estimate by its standard error to obtain a t-value, right? Apparently we do not assume a Gaussian distribution.
2. Do you think this modified permutation test for SDID is valid?
3. When SDID can't produce a low pre-treatment MSE for some variable, what can we do?
4. It seems I always produce an SDID prediction that is below the observed value; how can I change the default?
5. Using the permutation test, by reassigning treatment status to every comparison unit, I can get an estimate and standard error for every unit. How do I then compare the standard errors across units?

[attached: five screenshots of the estimate tables, the employment series for Louisiana, and the SDID predictions referenced above]

NH AER

Extracting data behind plot

Dear,

The command plot(tau.hat) shows a great summary of the estimate in a plot. I would like to access the data behind the plot so I can export it as a data frame, but I cannot seem to find a way to do that. Is there a way to get the results depicted in the plot as a data frame, with, per year:

  • the average outcome of the treatment group,
  • the average outcome of the synthetic control group,
  • the weight assigned to each year?

The reason I want to extract the data from the plot into a data frame is that I have to export the data to another environment where I make the plot.

Kind regards
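A minimal sketch of one way to get at this, assuming tau.hat from the README example: synthdid_plot() returns a ggplot object, so the plotted series can be recovered from it with standard ggplot2 tools, and the estimator's weights are stored as an attribute of the estimate.

library(ggplot2)
p <- synthdid_plot(tau.hat)
plot.data <- ggplot_build(p)$data   # list of data frames, one per plot layer
head(plot.data[[1]])
# unit and time weights behind the plot
str(attr(tau.hat, 'weights'))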

Error in synthdid_placebo_plot()

Hi, thanks for creating and maintaining this package!

I keep getting an error in synthdid_placebo_plot(). I am using the newest version from GitHub. What does "argument 5 is empty" mean? Is there an error in the code, or am I doing something wrong in using synthdid_placebo_plot()?

library(synthdid)

# Estimate the effect of California Proposition 99 on cigarette consumption
data('california_prop99')
setup = panel.matrices(california_prop99)
tau.hat = synthdid_estimate(setup$Y, setup$N0, setup$T0)
se = sqrt(vcov(tau.hat, method='placebo'))
sprintf('point estimate: %1.2f', tau.hat)
#> [1] "point estimate: -15.60"
sprintf('95%% CI (%1.2f, %1.2f)', tau.hat - 1.96 * se, tau.hat + 1.96 * se)
#> [1] "95% CI (-34.89, 3.68)"
plot(tau.hat)

# synthdid_placebo_plot() issues an error
synthdid_placebo_plot(tau.hat)
#> Error in list(Y = setup$Y[, 1:setup$T0], N0 = setup$N0, T0 = placebo.T0, : argument 5 is empty

Examining Treatment Heterogeneity

If I suspect the treatment effect may be heterogeneous (e.g., the effect may differ between men and women), I'm not sure how I'd examine that here. In standard DiD, I've typically seen a triple interaction used to examine heterogeneity, but I don't know how to do something similar using SDID.

How to calculate se using bootstrap method

When I run se = sqrt(vcov(tau.hat, method='bootstrap')) I get NA; the only method that seems to work is 'placebo'.

I also tried the following:
vcov.synthdid_estimate(tau.hat,method = "bootstrap", replications = 200)

Build is Broken?

On Mac OS X version 10.15.6, R version 4.0.2 I run into the following error when trying to install according to the provided instructions:

Error : (converted from warning) /private/var/folders/34/4z3y84xd4t30hk393dnhg8wr0000gn/T/Rtmpw36Iaq/R.INSTALLaf1169d96c9a/synthdid/man/synthdid_units_plot.Rd:20: unknown macro '\item'
ERROR: installing Rd objects failed for package ‘synthdid’

Inconsistent plot when adding covariates

Hi,

I have added covariates as an N X T X C array when using the synthdid_estimate(), sc_estimate() and did_estimate() functions. The rows and columns of this 3-D array are ordered in the same way as those of matrix setup$Y. Nevertheless, when I use synthdid_plot() to plot the results of all three methods (DID, SC, SDID), the treated outcome is not the same in each plot (see attached image). Why does this difference exist? Shouldn't all three plots display exactly the same treated outcome as this is an observed rather than an estimated outcome?

[attached image: synthdid_plot output for DID, SC, and SDID, with differing treated-outcome curves]

Thanks a lot!

The coefficient for the regularization term

Hi, thanks for the interesting paper!

In line 43 of solver.R, eta is defined as N0 * Re(zeta^2). If I understand correctly, this corresponds to the coefficient of the regularization term in equation (4) of the paper, which says eta = T0 * zeta^2.

Am I missing something, or is it a typo? If possible, could you also provide some intuition for the choice of this coefficient? (I guess it is based on the asymptotic results reported in equation (24).)
Thanks!

Getting The treatment status should be in 0 or 1 error

I'm trying use the panel.matrices function like so:
setup <- panel.matrices(as.data.frame(parl),
                        unit = which(names(parl) == 'cntry'),
                        time = which(names(parl) == 'time'),
                        outcome = which(names(parl) == 'trnt'),
                        treated = which(names(parl) == 'treated'))

When I run this, it gives:

Error in panel.matrices(as.data.frame(parl), unit = which(names(parl) == :
  The treatment status should be in 0 or 1.

I have checked my data and the values of treated are either 0 or 1. My data is below:

parl.csv

Question about weights

Hi,

I tried the california_prop99 example and it worked as in the paper.
However, I am a little confused about the weights.
In the results, synthdid returned control weights and period weights, but not for all control units and not for all time periods.
I wonder what the weights for the other control units and time periods are. Are they just kept as they originally are in the dataset?

For example, for the period weights I got:

> print(summary(tau.hat)$periods)
     estimate 1
1988      0.427
1986      0.366
1987      0.206

So for the other years before treatment (1970-1985), are the weights 1?

Thank you!
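A minimal sketch for inspecting the full weight vectors, assuming the estimate stores its weights in the 'weights' attribute with lambda (time weights) and omega (unit weights) components, as suggested by the attr(tau_hat, "weights") usage earlier in these issues:

wts <- attr(tau.hat, 'weights')
round(wts$lambda, 3)   # time weights over the T0 pre-treatment periods
round(wts$omega, 3)    # unit weights over the N0 control units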

error in synthdid_placebo

Hi, there may be a typo in the synthdid_placebo function here:
On line 155, the following argument list has a loose comma at the end that may be causing issues:
list(Y=setup$Y[, 1:setup$T0], N0=setup$N0, T0=placebo.T0, X=setup$X[, 1:setup$T0,], <<<missing argument here?>>> )

error:Ignoring unknown aesthetics: frame

plot(tau.hat)
#> Warning: Ignoring unknown aesthetics: frame   (repeated 6 times)
#> Warning: guides(<scale> = FALSE) is deprecated. Please use guides(<scale> = "none") instead.

How can I solve this problem?

Raymond

panel.matrices throws error if panel argument is a tidyverse tibble

Just a short notice: the panel.matrices function seems to throw an error if the passed panel object is a tibble (the tidyverse equivalent of a data frame). Here is a simple example with the resulting error message:

data('california_prop99')
library(dplyr)
setup = panel.matrices(as_tibble(california_prop99))

Error in panel.matrices(as_tibble(california_prop99)) : 
  There is no variation in treatment status.

Since tidyverse is very popular and certain dplyr operations tend to convert data frames into tibbles, this error may be encountered by some users. A simple fix would probably be to start the panel.matrices function with a call panel = as.data.frame(panel). At least the following code works fine:

data('california_prop99')
library(dplyr)
setup = panel.matrices(as.data.frame(as_tibble(california_prop99)))

testing invariances

In the paper, we claim several types of invariance for the methods implemented here:

  • Invariance to column fixed effects (for all methods): Re-mapping Yit <- Yit + bt for arbitrary bt doesn't change anything (either weights or CI estimates).
  • Invariance to row fixed effects (for DID and SDID): Re-mapping Yit <- Yit + ai for arbitrary ai doesn't change anything (either weights or CI estimates).
  • Invariance to scaling (all methods): Re-mapping Yit <- c * Yit for some c > 0 doesn't change weights, and makes the treatment effect estimate be multiplied by c. (For numerical robustness, it'd be nice to check that this goes through with both very big and very small c, e.g., c = 10^{+/- 6}.)

It would be valuable to have tests that check that these invariances are enforced.

In some cases, specific choices of tuning parameters may interfere with invariance. In this case, however, we should make sure that all methods with defaults (i.e., without tuning parameters explicitly specified) respect the invariances.
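A minimal sketch of one such check (column fixed effects), reusing the california_prop99 example; the other invariances can be tested analogously. The tolerance is an arbitrary choice for illustration.

library(synthdid)
data('california_prop99')
setup <- panel.matrices(california_prop99)

b <- rnorm(ncol(setup$Y))                 # arbitrary per-period shifts bt
Y.shifted <- sweep(setup$Y, 2, b, `+`)    # Yit <- Yit + bt

tau       <- synthdid_estimate(setup$Y,    setup$N0, setup$T0)
tau.shift <- synthdid_estimate(Y.shifted,  setup$N0, setup$T0)
all.equal(as.numeric(tau), as.numeric(tau.shift), tolerance = 1e-6)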

devtools::load_all error

From an R session in the directory including the README:

devtools::load_all('.')
Loading synthdid
Error in file(filename, "r", encoding = encoding) :
cannot open the connection
In addition: Warning message:
In file(filename, "r", encoding = encoding) :
cannot open file 'setup.R': No such file or directory

Is there a missing file?

API for exposed units.

The current API requires the user to pass in data in a specific order: the first N0 units are unexposed (controls), and the remaining N - N0 units are exposed (treated after T0). This requires the user to do extra pre-processing.

We should do the following instead: add a boolean (or 0/1) argument exposed to all methods that the user can use to denote which units are exposed. They could use exposed = 1:n > N0 to emulate the current behavior, but could also pass rows in another order if that's more convenient.

Computation time

I use the synthdid library by calling R code from a Jupyter notebook (using rpy2).
I recently updated the R version in Anaconda, and it turns out that computations are much slower. In the end the results are close, but more units are selected in the control group.
Do you have an idea of:

  1. why computations are now slower?
  2. how I could speed up computation (perhaps by tuning parameters in the synthdid_estimate function)?

Many thanks!

Code internal: deprecated use of if (class(estimates) == "synthdid_estimate")

Several lines of code use if (class(estimates) == "synthdid_estimate"), whereas it is recommended to use if (inherits(estimates, "synthdid_estimate")) instead (this even shows up in an R CMD check run). In my case, this creates an error when writing a wrapper class around an sDiD output.

Is there any chance you could simply change if (class(estimates) == "synthdid_estimate") to if (inherits(estimates, "synthdid_estimate")) in synthdid/R/plot.R and synthdid/R/summary.R?

I am happy to submit a pull request otherwise, though three pull requests have been pending approval for a few months.

Thanks!

Question about group specific control group

Hello all,

I want to assign specific control groups (C_a, C_b) to different treatment groups (T_a, T_b).
For instance, the donor pool of T_a would be only C_a.
I think I have two options, and for both I need some help.

One is running SDID at the group level and averaging: I think the ATT would be a weighted average of tau_a and tau_b, but I don't know whether and how I can get the standard error properly this way.

The second would be to modify the SDID code to give weights only to certain units. I have no idea where to start. It would be great if someone could help me out :)

Thank you so much,
JK

DID with a single control unit

Your DID model uses all available control units. Nevertheless, for the policy change I'm studying, it seems more intuitive to compare my treated unit to a single control unit (both share very similar characteristics that stand out from the others).
Unfortunately, when I use the function did_estimate() with a subset of my data containing only my treated unit and the single control unit I have chosen, I get the error: Error in apply(Y[1:N0, 1:T0], 1, diff) : dim(X) must have a positive length
Is there a way to solve this issue? I am not using covariates.
I am aware that I will not be able to use the placebo approach to estimate the standard error of my DID estimate, but I would still like to show a DID plot together with the SC and SDID plots (as you do in your paper) and the average DID treatment effect, even if its standard error is missing. Thanks a lot!

panel.matrices complains when treatment is stored as double instead of integer?

If the original data stores the treatment variable as double instead of integer, panel.matrices returns multiple warnings: Warning in FUN(newX[, i], ...): coercing argument of type 'double' to logical.

The solution is simply to cast it as integer, but maybe this could be done internally, or an explicit message could be given?

Thanks!

library(synthdid)

data('california_prop99')
setup = panel.matrices(california_prop99)

california_prop99$treated <- as.double(california_prop99$treated)
setup = panel.matrices(california_prop99)
#> Warning in FUN(newX[, i], ...): coercing argument of type 'double' to logical

One more invariance test

Could we please also add the following invariance test for SDID: if we add a constant c to all exposed units with t > T_0 (i.e., to all treated cells), then treatment effect estimates go up by c. This invariance should hold for all methods considered (DID, SC, SDID).
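A minimal check sketch, reusing setup from the README example; the shift value and tolerance are arbitrary choices for illustration.

c.shift <- 5
Y.shift <- setup$Y
rows <- (setup$N0 + 1):nrow(Y.shift)   # exposed units
cols <- (setup$T0 + 1):ncol(Y.shift)   # post-treatment periods
Y.shift[rows, cols] <- Y.shift[rows, cols] + c.shift

tau       <- synthdid_estimate(setup$Y,  setup$N0, setup$T0)
tau.shift <- synthdid_estimate(Y.shift,  setup$N0, setup$T0)
all.equal(as.numeric(tau.shift), as.numeric(tau) + c.shift, tolerance = 1e-6)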

Synthdid estimate with covariates

Hi,
I would like to compute synthdid estimates while including covariates in the regression.
Could you please give some details about the shape of the X argument in synthdid_estimate? Specifically, should the units be ordered, and if so, how?
Moreover, I noticed while browsing the code that you intend to allow panel.matrices to support covariates, which would be very convenient. Do you have an idea of the timeframe?
Finally, could you give more details about the way covariates are handled in the synthdid algorithm, in particular concerning the construction of the unit weights?
Thanks a lot
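A minimal sketch of the covariate array layout used elsewhere in these issues: an N x T x C array whose first two dimensions are aligned (same row and column order) with setup$Y. The covariate name x1 below is hypothetical.

# one covariate (C = 1), aligned with the rows/columns of setup$Y
X <- array(NA_real_, dim = c(nrow(setup$Y), ncol(setup$Y), 1),
           dimnames = list(rownames(setup$Y), colnames(setup$Y), 'x1'))
# fill X[, , 1] with covariate values in the same unit/time order as setup$Y,
# then pass the array to the estimator:
# tau.hat <- synthdid_estimate(setup$Y, setup$N0, setup$T0, X = X)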
