# Fall Meeting #6 - 10/11/16 - Factor Analysis Data Presentation

This week, Tara Hofkens presented some analyses she's been running and some questions she had about CFAs. As some basic background, she was interested in fitting a three-factor model of student-centered instruction among a large sample of teachers and students. Her slides are posted here and have more information, but she had three general questions: how to interpret modification indices, when to include covariates, and how to think about multiple levels of factors. We didn't get a chance to discuss covariates, but some notes from the discussion of the first and third issues are included below.

Modification indices can be requested in most software programs and indicate pathways or parameters in your model that you have fixed (i.e., constrained to be a certain value, often 0) that you might want to consider freely estimating. In Mplus, the output gives you a modification index value for each of these, which tells you how much improvement you would expect to see in the chi-square model fit if that pathway were freed.

It's important to note that these are based on chi-square: much like the chi-square model fit test can be over-powered with a large sample size, these modification indices will be over-powered too. What this means in an applied sense is that the expected change in chi-square might be large and the pathway that becomes free could be significant in the revised model, but its practical significance could be low. In Tara's example with 1,200+ cases, she requested to see modification indices higher than 3.84. This was not a random cutoff - the critical value for a chi-square distribution with one degree of freedom at alpha = .05 is 3.84, so if freeing up one pathway (hence a change of 1 degree of freedom) would change the model chi-square by more than 3.84 (meaning the difference would exceed the critical value when df = 1), freeing up that pathway would significantly improve model fit. One of Tara's modification indices was right above this cutoff (3.9), but the expected parameter change, which Mplus also provides, was only .08. So even though allowing this correlation to be estimated rather than fixed to 0 would significantly improve model fit, the correlation is so low that it probably doesn't hold much meaning.
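As a quick sanity check on that cutoff, the tail probability of a chi-square distribution with 1 df can be computed with nothing but the Python standard library (for df = 1, a chi-square variable is just a squared standard normal, so P(X > x) = erfc(sqrt(x/2))). A minimal sketch:

```python
import math

def chi2_sf_df1(x):
    """Upper-tail probability of a chi-square variable with 1 df.

    For df = 1, X = Z^2 with Z standard normal, so
    P(X > x) = P(|Z| > sqrt(x)) = erfc(sqrt(x / 2)).
    """
    return math.erfc(math.sqrt(x / 2))

# The conventional cutoff: MI = 3.84 sits right at p = .05
print(round(chi2_sf_df1(3.84), 3))  # 0.05

# A borderline index like 3.9 is only just under .05 -
# "significant," but barely
print(round(chi2_sf_df1(3.9), 3))
```

This is just the p-value side of the story; as noted above, with 1,200+ cases even a trivially small expected parameter change can clear this bar.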

This brings up a bigger issue about over-fitting and how to handle modification indices. We went through a lot of examples of how this works, but basically, you have some variation in your variables that is true or meaningful and some that is just random noise. What you're trying to do in SEM is model the meaningful variance but not necessarily the error. If you keep adding more parameters to the model, eventually you'll be able to perfectly predict the data, but you don't actually want to be able to do this! You don't want to model the idiosyncrasies in your sample because these are not theoretically relevant and thus not generalizable.
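To make the over-fitting point concrete, it helps to count what there is to fit: a CFA is estimated from the p(p+1)/2 unique variances and covariances among p items, and every parameter you free spends one degree of freedom. A toy count (the item and parameter numbers here are made up for illustration):

```python
# Hypothetical CFA with 12 observed items (numbers are illustrative)
p = 12
moments = p * (p + 1) // 2        # unique variances + covariances = 78

# One-factor model: 12 loadings + 12 residual variances
# (factor variance fixed to 1 for identification)
free_params = 12 + 12
df = moments - free_params        # df left over to test the model
print(moments, free_params, df)   # 78 24 54

# Every modification you accept spends one more df; at df = 0 the
# model is saturated and reproduces the sample covariance matrix
# exactly - "perfect fit" with nothing left to test.
```

So perfect fit is always reachable by spending down the degrees of freedom, which is exactly why it isn't the goal.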

So how should we handle modification indices? It seems like there are three types of things that can happen when you get these results (and maybe some room in the middle as well):

1. You see a modification index and have an “Oh, duh!” moment because it’s a pathway that you just forgot to include - definitely include these!
2. The modification index is for a pathway that you wouldn’t expect but that makes theoretical sense (maybe it’s two items that are worded similarly, or two measures reported by the same person) - whether to include these depends on how strong your confidence in that theory is, but note it when you write up the analyses.
3. Your program wants you to add parameters that make no sense, no matter how hard you try to think up an explanation - if you can’t justify adding the pathway, don’t!

More generally, we talked about when to consider modification indices. They can be useful to check for those “oh duh” moments if you specify something wrong, but in general, once you have good model fit, you don’t need to continue maximizing. This goes back to the point above - you could get perfect fit if you just kept adding parameters, but your model would be essentially meaningless and not generalize at all. Once you know the fit is decent and you can trust the point estimates, unless there’s something obvious and theoretically important missing, leave it at that!

The other question we addressed was how to think about having multiple levels of latent factors. Given that most of us are in Psych, it might help to explain this with the example in Tara’s slides rather than her actual project (sorry, School of Ed folks!). In the example on slide 18, there are two ways of modeling a complex factor structure where the researchers were interested in modeling quality of life, which had several dimensions, including cognition, vitality, mental health, and disease worry. Each of these dimensions was observed through several indicators. The two ways of modeling shown are a bifactor model and a second-order model.

In the bifactor model, each indicator is predicted by two factors: the general factor and a specific factor. One way to think about this is that all the variance in the item related to the general factor (e.g., quality of life) is pulled out of the item, and what is left that relates to the specific factor (e.g., mental health) goes to that factor. In reality, this variance is being partialled simultaneously, but I think this helps (at least for me!) in thinking about what each factor represents. Specifically, the specific factors (on the left) capture the variance unique to each dimension, not shared with the general factor, so the “Mental Health” circle represents latent variability in the mental health items net of the latent variability in quality of life across all the items. This interpretation is a little tricky to think through, though.

In the model on the right, the second-order model, the latent specific factors (or dimensions of quality of life) themselves become indicators of the general factor (quality of life). Here the general factor is slightly different, as it now represents what all the factors have in common, not all the items. The specific factors are endogenous now (i.e., outcomes), so their variance in the model is residual variance, or what is left in mental health after accounting for the shared variance in quality of life (similar to the specific factors in the bifactor model).
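Since the group works in Mplus, a sketch of how the two structures differ in MODEL syntax may help. All of the variable names here (y1-y12, the dimension factors, qol) and the item-to-dimension assignments are made up for illustration, not taken from Tara's slides:

```text
! Bifactor model (sketch with hypothetical variable names):
! every item loads on the general factor AND on one specific
! factor, and all factors are kept uncorrelated
MODEL:
  qol BY y1-y12;                  ! general quality-of-life factor
  cog BY y1-y3;                   ! specific factors
  vit BY y4-y6;
  mh  BY y7-y9;
  dw  BY y10-y12;
  qol WITH cog@0 vit@0 mh@0 dw@0; ! general orthogonal to specifics
  cog WITH vit@0 mh@0 dw@0;       ! specifics orthogonal to each other
  vit WITH mh@0 dw@0;
  mh  WITH dw@0;
```

```text
! Second-order model (same hypothetical names): items load only on
! their dimension, and the dimensions load on the general factor
MODEL:
  cog BY y1-y3;
  vit BY y4-y6;
  mh  BY y7-y9;
  dw  BY y10-y12;
  qol BY cog vit mh dw;           ! second-order factor
```

The syntactic difference mirrors the conceptual one: in the first sketch the items are indicators of qol directly; in the second, only the dimension factors are.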

A few things to note. If you wanted to look at the specific factors without partialling out variance from the general factor, a third option would be to model the specific factors and allow them to correlate with one another (but they remain exogenous, unlike in the second-order model!). One concern with bifactor models is that you will often have a LOT of indicators on a single factor. This sometimes causes problems in typical CFAs (hence the discussion a few weeks ago of not feeling like you need to retain every indicator if you have a ton), but it is less of a concern here since you are only trying to account for part of the variance in each item - at least it’s not a problem theoretically. Whether it works empirically is another issue! Comparing the bifactor and second-order models, the bifactor model will always fit at least as well, because you just have more parameters! Conceptually this makes sense too. In the second-order model, each indicator’s relationship to the general factor runs entirely through its specific factor, so the relative amount of general-factor variance the indicators on a factor can carry is constrained to be proportional to their loadings. As an example, if one indicator loaded particularly strongly on mental health (maybe something like obsessive or neurotic behaviors? Disclaimer, not a clinician!), a lot of that variability would be picked up by the latent mental health factor and then passed on to the quality of life factor, even though things like obsessive behaviors may be less related to quality of life than other mental health indicators like depressive symptoms or stress. With a bifactor model, the indicators load on the two factors individually, so you could account for the fact that this item loads strongly on mental health but less so on quality of life. This might not be the best example, but I’m struggling to think of a better one here...
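For completeness, the third option mentioned above (correlated specific factors, no general factor) is the simplest of the three to write down. Using the same hypothetical names as before, and noting that Mplus correlates exogenous latent factors freely by default:

```text
! Correlated-factors model (hypothetical names): no general factor;
! the dimensions stay exogenous and their correlations are estimated
MODEL:
  cog BY y1-y3;
  vit BY y4-y6;
  mh  BY y7-y9;
  dw  BY y10-y12;
  ! factor covariances are free by default in Mplus; to be explicit:
  ! cog WITH vit mh dw;  (and so on for the remaining pairs)
```

Here nothing is partialled out of the dimensions, which is exactly what makes this the right choice when the dimensions themselves, not a general factor, are the constructs of interest.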

Any questions, comments, better examples, clarifications, or corrections are appreciated!
