Authored forum posts
May 24, 2019 at 10:57 am, in reply to: "Model is overspecified" despite enough observed statistics #896
admin (Administrator)
Hello Timo,
oh, I see! Yes, that’s why I thought nothing had happened – the second ML solution is the same as the first one, probably differing only so slightly that rounding leads to the same displayed values. Great, I’m glad that I can work with these results now.
Thank you so much for your help!
Cheers,
Elisabeth

May 24, 2019 at 10:53 am, in reply to: "Model is overspecified" despite enough observed statistics #895
admin (Administrator)
Hi Elisabeth,
When switching between ML solutions, the title line should change (saying something like „Maximum Likelihood Estimate (best)“, with „best“ being replaced for the other solutions). You can also choose the solution manually via „Estimation -> Select Estimate“; if only one ML solution is shown there (after giving the model some time to find potential alternatives), or if the shown ones have the same parameter values, all is fine and you can work with the results.
> What would it mean for my sample if Onyx tells me it’s overspecified only based on the sample?
Potentially nothing: since the test for overspecification is purely numerical, it may simply be wrong. If you get different solutions with the same fit value, you may want to investigate what the differences are (e.g., if the loadings of one factor seem arbitrary and that factor in fact has almost no variance, you can conclude that the indicators have no reliable common factor).
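(If you export the estimates from two solutions as plain numbers, a quick comparison outside Onyx could look like the following minimal sketch; the parameter values are made up for illustration:)

```python
import numpy as np

# hypothetical estimates copied from two ML solutions' parameter lists
solution_1 = np.array([0.82, 0.64, 0.71, 0.30, 0.52, 0.41])
solution_2 = np.array([0.82, 0.64, 0.71, 0.30, 0.52, 0.41])

# if the two solutions agree up to rounding, the overspecification warning can be ignored
print(np.allclose(solution_1, solution_2, atol=1e-4))  # True
```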
Cheers,
Timo
May 24, 2019 at 10:44 am, in reply to: "Model is overspecified" despite enough observed statistics #894
admin (Administrator)
Hello Timo,
thank you for answering so quickly!
Clicking ALT+1, ALT+2 etc. doesn’t really change the model; in fact, it doesn’t change any parameters, which makes me think that it’s not working. Also, when I select „Show best LS estimate“ nothing changes. Is there a way to click through the estimates manually?

I fixed the variance of the latent variables instead of one loading, and z-transformed all observed variables, too.
What would it mean for my sample if Onyx tells me it’s overspecified only based on the sample?

Cheers,
Elisabeth

May 24, 2019 at 10:11 am, in reply to: "Model is overspecified" despite enough observed statistics #893
admin (Administrator)
Hi Rayne,
now this is a great-looking model 🙂!
I’ve run it with simulated data, which worked without overspecification, so there doesn’t seem to be anything conceptually wrong. Your data may create an empirical overspecification, or it could be that your data runs into a situation so close to overspecification that the numerical test mistakes it as such. As long as you get only one solution, or the solutions are virtually identical (you can switch between solutions by pressing ALT+1, ALT+2, and so on; be careful to compare only ML solutions, as you will also be shown LS (= Least Squares) solutions, which will necessarily be different), you’re good.
If not, there are two tricks to avoid empirical overspecification situations: The first is to normalize the data (which seems okay here since you are not interested in means). For this, just right-click on an observed variable (or select multiple and do the steps on one of them to reduce the work) and choose „Apply z-transform“ in the context menu. This may solve your problem already, and it may also make effects more visible.
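(For readers who prepare their data outside Onyx: the same transform can be done beforehand, for example in Python. This is just a minimal sketch, not part of Onyx; the file and column names are hypothetical:)

```python
import pandas as pd

# hypothetical file and indicator names
df = pd.read_csv("ssq_data.csv")
indicators = ["y1", "y2", "y3", "y4"]

# same effect as Onyx's "Apply z-transform": each column gets mean 0 and standard deviation 1
df[indicators] = (df[indicators] - df[indicators].mean()) / df[indicators].std(ddof=1)
```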
The second trick is to do the analogous thing on the latents by fixing all factor variances to one instead of fixing one of the loadings to one.
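(The two identification conventions, fixing the factor variance versus fixing one loading, are just two ways of pinning down the same scale, so they imply the same model fit. A small numpy sketch with made-up values illustrates the equivalence:)

```python
import numpy as np

lam = np.array([[0.8], [0.6], [0.7]])    # hypothetical loadings of a single factor
psi = np.array([[1.0]])                  # factor variance fixed to 1 (the "second trick")
theta = np.diag([0.30, 0.50, 0.40])      # hypothetical residual variances of the indicators

sigma_var_fixed = lam @ psi @ lam.T + theta

# equivalent scaling: first loading fixed to 1, factor variance free
c = lam[0, 0]
lam_ref = lam / c
psi_ref = psi * c**2
sigma_loading_fixed = lam_ref @ psi_ref @ lam_ref.T + theta

# both conventions imply the same model covariance matrix; only the scale
# is moved between the loadings and the factor variance
print(np.allclose(sigma_var_fixed, sigma_loading_fixed))   # True
```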
Let me know if this worked! If not, and if you can send me an anonymized version of your data set, I can play around with it.
BTW, 150 participants is usually plenty, and fairly impressive for a Bachelor thesis!
Cheers,
Timo
May 23, 2019 at 11:12 am, in reply to: "Model is overspecified" despite enough observed statistics #892
admin (Administrator)
Hello again,
I ran into the „Model is overspecified“ problem again once I added a few paths to allow covariance between certain manifest variables. Notably, the problem only occurs once I connect my data with the model! My sample size is too small to draw definite conclusions (n = 150), but as this is my bachelor thesis it shouldn’t be too big of a problem; it’s more exploratory. Could this be causing the problem?
https://www.dropbox.com/s/14rxfvuhok0u8cw/SSQ_SEM.xml?dl=0 This is the .xml code of my model.
Cheers,
Rayne

May 21, 2019 at 4:14 pm, in reply to: "Model is overspecified" despite enough observed statistics #891
admin (Administrator)
Hello Timo,
thank you so much for your response! I played around with the model and I seem to have fixed the problem. In case someone who stumbles across this forum has the same problem, here is what I did:
1) I made sure that every measurement model / every factor had one loading fixed to 1.
2) I made sure that every latent variable has a residual (this is what caused this particular problem).
3) If I made a measurement model formative (the arrows pointing from the manifest variables to the latent variable, not vice versa), I made sure to delete the residuals on the manifest variables.

Thank you again for your work, and I hope this can help someone who’s also new to SEM!
Cheers,
Rayne

May 16, 2019 at 1:08 pm, in reply to: "Model is overspecified" despite enough observed statistics #889
admin (Administrator)
Hi Rayne,
welcome to the Onyx community, it’s great to have you!
The overspecification test works numerically, so with complex models (and yours seems to be one, in view of 435 observed statistics 🙂), it does happen that the warning is really nothing more than a warning and the model is absolutely fine. However, it can also happen that the model is empirically overspecified. Could you maybe send me the .xml file of the saved model? Then I could simulate some data and check whether I see where an overspecification may be buried.

Cheers,
Timo
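(For readers counting along: the "435 observed statistics" is the counting side of identification, which you can check by hand. The sketch below uses hypothetical numbers; 435 would correspond to 29 observed variables if only variances and covariances are counted, and the number of free parameters is made up.)

```python
p = 29                                 # hypothetical number of observed variables (29 * 30 / 2 = 435)
free_params = 70                       # hypothetical number of freely estimated parameters

observed_stats = p * (p + 1) // 2      # unique variances and covariances in the data
print(observed_stats)                  # 435

# the counting rule (free parameters <= observed statistics) is only a necessary condition;
# Onyx's overspecification warning comes from a numerical check on the fitted model,
# so it can fire even when this condition is comfortably satisfied
print(free_params <= observed_stats)   # True
```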
admin (Administrator)
Hi Phettakua,
Do you have a Windows machine? And have you tried downloading the onyx.jar and then double-clicking it? If you didn’t find the downloaded file, your browser may offer a list of downloaded files from which you can run it. Or did you start the file and get an error message?
Cheers,
Timo
admin (Administrator)
I have installed Java already but still cannot run it. What should I do? Thank you!
admin (Administrator)
Hi Phettakua,
You need Java to run Onyx, which you can download at https://java.com/de/download/ if you don’t have it installed (many systems have it by default). Once Java is installed, you can just download the onyx.jar file and start it with Java (on Windows systems, double-clicking a .jar should automatically start it with Java). There is no installation necessary.
Cheers,
Timo
admin (Administrator)
Hi Wigner,
welcome to the community!
Onyx handles missing data by using Full Information Maximum Likelihood (FIML). When the missingness in your data is independent of the actual values, or independent after controlling for the available measures (in the literature, for odd reasons, these two cases are called MCAR = Missing Completely at Random and MAR = Missing at Random, respectively), then the FIML estimate is unbiased, and all you lose is the power from the missing values. Imputation methods (with some exceptions for multiple imputation) will also show no bias for MCAR, but are biased in MAR cases.

If you are interested, you can get the point estimates for likelihood-based imputation from Onyx by clicking the model and selecting „Estimation“ -> „Obtain Latent / Missing Scores“; this will create a new dataset which contains your original data set, but with all missing values imputed by the maximum-likelihood best guess for that value (and, additionally, all latent variables will be included with their respective scores for all participants).
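(To make the "uses only the available measures" point concrete: this is not Onyx’s code, just a minimal Python sketch of how a casewise full-information likelihood skips missing entries instead of dropping or imputing whole rows. The toy data, mean vector, and covariance matrix are made up:)

```python
import numpy as np

def fiml_loglik(data, mu, sigma):
    """Casewise (full information) ML log-likelihood: each row contributes
    only through the variables it actually observed."""
    ll = 0.0
    for row in data:
        obs = ~np.isnan(row)                      # which variables this case observed
        k = int(obs.sum())
        if k == 0:
            continue                              # a fully missing row contributes nothing
        mu_o = mu[obs]                            # submean for the observed variables
        sig_o = sigma[np.ix_(obs, obs)]           # subcovariance for the observed variables
        diff = row[obs] - mu_o
        sign, logdet = np.linalg.slogdet(sig_o)
        ll += -0.5 * (k * np.log(2 * np.pi) + logdet
                      + diff @ np.linalg.solve(sig_o, diff))
    return ll

# toy example: two variables, the second value missing for the second person
data = np.array([[1.0, 2.0],
                 [0.5, np.nan]])
mu = np.array([0.0, 0.0])
sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
print(fiml_loglik(data, mu, sigma))
```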
Hope that helps, cheers,
Timo
September 23, 2018 at 2:33 pm, in reply to: Second-Order Multiple-Group Latent Curve Modeling #846
admin (Administrator)
Hi Kappers,
great that you’re using Onyx!
In the model figure you sent, there is just one variable (the eta) which is measured by two indicators. However, if I read your text correctly, you have two independent measures that don’t combine into a common factor score, is that correct? Or do you want to measure „attitudes“ as a common factor?
If not, I would first set up the model as it is given in your representation, with both variables separately, and then use the other variable (at pretest) to predict the change via a regression path. For example, you could set up the model for explicit attitudes, then add the implicit attitudes at pretest as a predictor variable and connect it to eta_2, the latent change, to see how strongly implicit attitudes predict the change in explicit attitudes. For the other direction, you could then set up the model with implicit and explicit attitudes exchanged. This is just a very quick suggestion from what I understood; I hope it helps a little.
Cheers,
Timo
September 17, 2018 at 9:21 am, in reply to: Second-Order Multiple-Group Latent Curve Modeling #845
admin (Administrator)
Please click on this post to open a graphic representation of the SO-MG-LCM.

Figure 1. Second-order latent curve models with parallel indicators (i.e., residual variances of observed indicators are equal within the same latent variable: ε1 within η1 and ε2 within η2). All the intercepts of the observed indicators (Y) and endogenous latent variables (η) are fixed to 0 (not reported in the figure). In Model A, the residual variances of η1 and η2 (ζ1 and ζ2, respectively) are freely estimated, whereas in Model B they are fixed to 0. ξ1, intercept; ξ2, slope; κ1, mean of intercept; κ2, mean of slope; ϕ1, variance of intercept; ϕ2, variance of slope; ϕ12, covariance between intercept and slope; η1, latent variable at T1; η2, latent variable at T2; Y, observed indicator of η; ε, residual variance/covariance of observed indicators.
admin (Administrator)
1. The values on the paths are usually unstandardized values. To show standardized values, please right-click on the path of interest and choose „show standardized estimates“. They will be displayed next to your unstandardized estimates on the path.
2. This really depends. I would need to have more information.
3. I am not sure what you mean. Could you share your model or a screenshot?
Thanks for using Onyx,
Andreas