[Minutes] Fall Meeting # 8 - Cross-Lagged Panel Models - 10/25/16


Post by LeanneElliott on Tue Oct 25, 2016 10:33 pm

Aidan gave a great overview of cross-lag models today. Rather than a full narrative of the presentation (since the slides have a lot of the necessary detail), I have a list of some of the more complicated topics that we discussed in more depth than just what's on the slides.

- Autoregressive effects: These tell you about the relative stability of a construct over time by regressing Y at time 2 on Y at time 1. A coefficient (or correlation, if you are just looking at the bivariate relationship) of 1 tells you that individuals perfectly maintain their relative position with reference to the rest of the group. A coefficient of less than 1 tells you that EITHER individuals change OR there is measurement error, which can deflate the estimated associations. With observed variables, you can't tell how much of the unexplained variation is true change and how much is measurement error. Autoregressive effects are also different from simple difference scores, where you subtract Y at time 1 from Y at time 2 (a difference score is equivalent to fixing the autoregressive pathway at 1). Instead, autoregressive effects model residualized change, or change controlling for initial status. If you want to account for a potential correlation between change and initial standing (such as regression to the mean, where people with more extreme scores tend to have less extreme scores later), you need the autoregressive effect, not just the change score! (A small simulated example of this difference appears after the list.)

- Measurement error: As mentioned above, residualized change with observed variables tells you about change in the construct but also captures measurement error. If Y is observed with .8 reliability, individuals get shuffled around even when the underlying construct shows no change, because the error randomly (though slightly) moves individuals' scores (one of the sketches after the list simulates exactly this). One way to get around this problem is to use latent variables, but there are a few things to keep in mind when doing so. First, you need to make sure your factors are invariant (i.e., the loadings are the same over time). It is also a good idea to let the error terms for the same indicator at multiple time points correlate. If one indicator has a unique component that is shared over time, allowing these residuals to correlate captures the shared aspect of that item (e.g., eating habits could be an indicator of depression but might be stable for reasons other than depression). Without allowing for this covariance, stability that is not part of the construct of interest would end up in the autoregressive pathway (beta21 in the slides), because there is no other way in the model for the observed variables to relate to one another.

- Interpreting diagrams: In the SEM diagrams we discussed, the residual of n2 (after being regressed on n1) is the residualized change of interest. If this is significant, this means people are shuffling around in their relative positioning. Beta21, on the other hand, is the stability coefficient. One interesting consideration here is that you can have both significant autoregressive pathways and residual variance (and often will). The interpretation with respect to theory is somewhat ambiguous here though. Aidan gave an example of findings in the literature that the autoregressive coefficient of personality in adulthood is around .8, and some people argue that this means personality is stable (e.g., "look at this large, significant coefficient") while others use the same results to argue for change in personality (e.g., "it's not a perfect autocorrelation, so there is some change!").

- Cross-lag associations: You can add predictors of the time 2 outcome other than just prior measures! This is the basic idea behind cross-lag models. When doing so, this X variable is predicting Y at time 2, net of Y at time 1. In other words, X is predicting the residuals in Y at time 2 after accounting for Y at time 1. It is important to note, though, that the autoregressive pathway is also estimated with this in mind; Y at time 1 is predicting Y at time 2 net of X.

- Full cross-lag panel models: You can have two constructs that each have autoregressive pathways, plus cross-lags from each construct at time 1 to the other construct at time 2. In addition to all the concerns described above, in these models you should also always include the residual correlations of constructs measured at the same time point (so after accounting for X at time 1 and Y at time 1, the residuals of X and Y at time 2 can correlate with one another). This captures the shared change (how change in one construct relates to change in the other) and is necessary to ensure that you aren't pushing this covariance into the autoregressive or cross-lag paths and thus inflating those estimates. You can think of these covariances as "shared innovation." If some external factor increases both X and Y from time 1 to time 2 (say you are looking at stress and depression and someone experiences the sudden death of a family member, which would increase both), this covariance can absorb these "shocks to the system," as well as less extreme examples. (One of the sketches after the list writes out this full set of paths.)

- Granger causality: What we're doing with cross-lag models is NOT looking at change predicting change (unless you are interpreting the residual correlations described in the previous bullet) but rather how the level of one variable at time 1 predicts change in another from time 1 to time 2.

- Adding more waves of data: With more than two time points, you can test for more than just the autoregressive and cross-lag pathways. For example, you can test for stationarity (i.e., is the same process unfolding over time, or are the pathways from time 1 to time 2 the same magnitude as those from time 2 to time 3?) and equilibrium (i.e., is the same amount of variance explained at each time point, or are the residual variances the same over time?); a rough version of these checks appears in the sketches after the list. In order to test for either of these, though, you need equal spacing of your time points. In other words, if the lag between time 2 and time 3 is twice as long as the lag between time 1 and time 2, you wouldn't expect the same autoregressive pathway or the same residualized change at time 2 and time 3. Further, you can actually test model fit at the structural level! When you only have two time points, you've modeled all the possible relations between the latent factors (e.g., X1 is related to X2, Y1 is related to Y2, X1 is related to Y2, Y1 is related to X2, Y1 is related to X1, and Y2 is related to X2). You will still get model fit information, but the df come from the measurement model and really only tell you how well the latent factors are defined. One thing to keep in mind with extra time points, though, is that the pathways from time 2 to time 3 are STILL NOT change predicting change. It's almost like the model resets; from time 1 to time 2 you estimate the residualized change and residual covariances, but then from time 2 to time 3 you are looking at ALL of the variance in the time 2 measures, not just the residuals. This mostly has to do with the specifics of SEM (covariances involve only the residuals, while regression paths pull in the whole variance of the variable), but it is important to note for interpretation regardless!
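
As a rough illustration of the difference between a change score and residualized change, here is a minimal Python sketch (numpy + statsmodels, with made-up effect sizes; not from the slides) that does with plain OLS what the SEM does with the autoregressive path:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500

# Simulate two waves with a true autoregressive effect of .6 (invented value)
y1 = rng.normal(0, 1, n)
y2 = 0.6 * y1 + rng.normal(0, 0.8, n)

# Difference score: implicitly fixes the autoregressive pathway at 1
diff = y2 - y1

# Residualized change: regress Y2 on Y1 and keep the residuals
ar_fit = sm.OLS(y2, sm.add_constant(y1)).fit()
print("autoregressive coefficient:", ar_fit.params[1])   # close to .6, not 1
resid_change = ar_fit.resid

# Regression to the mean: difference scores are negatively related to initial
# standing, while residualized change is uncorrelated with it by construction
print("corr(diff, y1):        ", np.corrcoef(diff, y1)[0, 1])
print("corr(resid_change, y1):", np.corrcoef(resid_change, y1)[0, 1])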
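
A quick simulation of the measurement error point (pure numpy, invented numbers): a construct that does not change at all, observed twice with reliability .8, produces an observed stability of about .8 rather than 1, purely because the errors shuffle people around.

import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# A construct that is perfectly stable between waves
true_score = rng.normal(0, 1, n)

# Observe it twice with reliability .8: var(true) / (var(true) + var(error)) = 1 / 1.25
error_sd = np.sqrt(0.25)
y1 = true_score + rng.normal(0, error_sd, n)
y2 = true_score + rng.normal(0, error_sd, n)

# The observed test-retest correlation (the standardized autoregressive
# coefficient in the bivariate case) comes out near .8 despite zero true change
print("observed stability:", np.corrcoef(y1, y2)[0, 1])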
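
Here is the full cross-lagged panel logic written out equation by equation (an illustrative sketch with made-up variable names and effects, estimated piecewise with OLS for transparency; SEM software would estimate the whole system at once and treat the time 2 residual covariance as a model parameter):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000

# Two constructs, two waves (think stress and depression), invented effects,
# plus a shared "shock" that pushes both up between waves
x1 = rng.normal(0, 1, n)
y1 = 0.5 * x1 + rng.normal(0, 1, n)
shock = rng.normal(0, 1, n)
x2 = 0.6 * x1 + 0.2 * y1 + 0.4 * shock + rng.normal(0, 1, n)
y2 = 0.7 * y1 + 0.1 * x1 + 0.4 * shock + rng.normal(0, 1, n)

# Each time 2 variable gets an autoregressive path and a cross-lag path
X = sm.add_constant(np.column_stack([x1, y1]))
fit_x2 = sm.OLS(x2, X).fit()   # x2 ~ x1 (autoregressive) + y1 (cross-lag)
fit_y2 = sm.OLS(y2, X).fit()   # y2 ~ y1 (autoregressive) + x1 (cross-lag)
print("x2 paths [const, x1, y1]:", fit_x2.params)
print("y2 paths [const, x1, y1]:", fit_y2.params)

# The time 2 residuals still covary because of the shared shock; in the SEM
# this is the residual correlation you should always include
print("residual correlation:", np.corrcoef(fit_x2.resid, fit_y2.resid)[0, 1])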
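
And a rough version of the stationarity/equilibrium checks for three equally spaced waves (descriptive only; the formal test imposes equality constraints on the paths and residual variances in the SEM and compares model fit):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000

# Three equally spaced waves generated by the same process (stationary by construction)
y1 = rng.normal(0, 1, n)
y2 = 0.6 * y1 + rng.normal(0, 0.8, n)
y3 = 0.6 * y2 + rng.normal(0, 0.8, n)

fit_12 = sm.OLS(y2, sm.add_constant(y1)).fit()
fit_23 = sm.OLS(y3, sm.add_constant(y2)).fit()

# Stationarity: are the lag-1 coefficients about the same across lags?
print("b(1->2):", fit_12.params[1], "  b(2->3):", fit_23.params[1])
# Equilibrium: are the residual variances about the same over time?
print("resid var(1->2):", fit_12.resid.var(), "  resid var(2->3):", fit_23.resid.var())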

Slides

Post by LeanneElliott on Thu Oct 27, 2016 2:02 pm

Here are Aidan's slides from this Tuesday as well!
Attachments
Cross-Lagged SEM2.pptx (302 Kb)

McArdle (2009) Paper

Post by JamieAmemiya on Fri Oct 28, 2016 2:34 pm

Here is the McArdle (2009) paper that has a nice review of longitudinal analyses that can be modeled within an SEM framework.
Attachments
McArdle - 2009 - Latent Variable Modeling of Differences and Changes with Longitudinal Data.pdf (593 Kb)

means/intercepts in the autoregressive model

Post by CatieWalsh on Mon Nov 14, 2016 4:10 pm

Hey guys, 
I'm wondering if anyone can help me with interpretation of the means/intercepts in the autoregressive model. It seems reflective of absolute change, but does not give the same estimates as the models I have done for absolute change. I have both latent variable autoregressive models and observed variable autoregressive models. 
Thanks!
Catie

Re: [Minutes] Fall Meeting # 8 - Cross-Lagged Panel Models - 10/25/16

Post by JamieAmemiya on Wed Nov 16, 2016 4:44 pm

Hi Catie!

Let me know if this answers your questions:

You'll get an estimate of the "mean" when there is nothing predicting the variable. The mean value will be identical to the observed mean in the descriptive statistics if you have no missing data. 

Once you have predictors, though, you now get an estimate of the "intercept." As in OLS regression, this is the average value of your variable for people who have zeroes on all of the predictors of the variable.
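
A tiny simulated example of the distinction (just a sketch in Python with made-up numbers; the same logic carries over to the SEM path model): with nothing predicting the variable you recover its sample mean, and once a predictor is added the constant becomes an intercept, the expected value of the outcome when the predictor is zero.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000

y1 = rng.normal(5, 1, n)                   # time 1 measure
y2 = 2.0 + 0.6 * y1 + rng.normal(0, 1, n)  # time 2 measure

# No predictors: the estimated "mean" is just the sample mean of y2
print("sample mean of y2:", y2.mean())     # about 5.0

# With y1 as a predictor the constant is an "intercept": the expected y2
# for someone whose y1 is zero, which is not the mean of y2
fit = sm.OLS(y2, sm.add_constant(y1)).fit()
print("intercept:", fit.params[0])         # about 2.0
print("slope:    ", fit.params[1])         # about 0.6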

Hope this helps!!
