From this table we can see that most items have some correlation with each other, ranging from \(r=-0.382\) for Items 3 and 7 to \(r=0.514\) for Items 6 and 7. Because of these relatively high correlations among items, this is a good candidate for factor analysis. Recall that the goal of factor analysis is to model the interrelationships among items with fewer (latent) variables. These interrelationships can be broken up into multiple components.
Since the goal of factor analysis is to model the interrelationships among items, we focus primarily on the variance and covariance rather than the mean. Factor analysis assumes that variance can be partitioned into two types: common variance and unique variance.
The figure below shows how these concepts are related:
As a data analyst, the goal of a factor analysis is to reduce the number of variables needed to explain and interpret the results. This can be accomplished in two steps:
Factor extraction involves making a choice about the type of model as well the number of factors to extract. Factor rotation comes after the factors are extracted, with the goal of achieving simple structure in order to improve interpretability.
There are two approaches to factor extraction, which stem from different approaches to variance partitioning: a) principal components analysis and b) common factor analysis.
Unlike factor analysis, principal components analysis (PCA) assumes that there is no unique variance: the total variance is equal to the common variance. Recall that variance can be partitioned into common and unique variance. If there is no unique variance, then common variance takes up all of the total variance (see figure below). Additionally, if the total variance is 1, then the common variance is equal to the communality.
The goal of a PCA is to replicate the correlation matrix using a set of components that are fewer in number than, and linear combinations of, the original set of items. Although the following analysis defeats the purpose of doing a PCA, we will begin by extracting as many components as possible as a teaching exercise, so that we can decide on the optimal number of components to extract later.
First go to Analyze – Dimension Reduction – Factor. Move all the observed variables into the Variables: box to be analyzed.
Under Extraction – Method, pick Principal components and make sure to Analyze the Correlation matrix. We also request the Unrotated factor solution and the Scree plot. Under Extract, choose Fixed number of factors, and under Factors to extract enter 8. We also bumped the Maximum Iterations for Convergence up to 100.
The equivalent SPSS syntax is shown below:
Before we get into the SPSS output, let’s understand a few things about eigenvalues and eigenvectors.
Eigenvalues represent the total amount of variance that can be explained by a given principal component. In theory they can be positive or negative, but in practice they represent explained variance, which is always positive.
Eigenvalues are also the sum of squared component loadings across all items for each component, which represent the amount of variance in each item that can be explained by the principal component.
Eigenvectors represent a weight for each eigenvalue. The eigenvector times the square root of the eigenvalue gives the component loadings, which can be interpreted as the correlation of each item with the principal component. For this particular PCA of the SAQ-8, the eigenvector element associated with Item 1 on the first component is \(0.377\), and the eigenvalue of the first component is \(3.057\). We can calculate the loading of Item 1 on the first component as
$$(0.377)\sqrt{3.057}= 0.659.$$
In this case, we can say that the correlation of the first item with the first component is \(0.659\). Let’s now move on to the component matrix.
The component loadings can be interpreted as the correlation of each item with the component. Each item has a loading corresponding to each of the 8 components. For example, Item 1 is correlated \(0.659\) with the first component, \(0.136\) with the second component, \(-0.398\) with the third, and so on.
The square of each loading represents the proportion of variance (think of it as an \(R^2\) statistic) explained by a particular component. For Item 1, \((0.659)^2=0.434\), or \(43.4\%\), of its variance is explained by the first component. Subsequently, \((0.136)^2 = 0.018\), or \(1.8\%\), of the variance in Item 1 is explained by the second component. The total variance explained by both components is thus \(43.4\%+1.8\%=45.2\%\). If you keep adding the squared loadings cumulatively down the components, you find that the total sums to 1, or 100%. This is also known as the communality, and in a PCA the communality for each item is equal to the total variance.
Component Matrix (columns are Components 1–8)

| Item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|------|-------|-------|--------|--------|--------|--------|--------|--------|
| 1 | 0.659 | 0.136 | -0.398 | 0.160 | -0.064 | 0.568 | -0.177 | 0.068 |
| 2 | -0.300 | 0.866 | -0.025 | 0.092 | -0.290 | -0.170 | -0.193 | -0.001 |
| 3 | -0.653 | 0.409 | 0.081 | 0.064 | 0.410 | 0.254 | 0.378 | 0.142 |
| 4 | 0.720 | 0.119 | -0.192 | 0.064 | -0.288 | -0.089 | 0.563 | -0.137 |
| 5 | 0.650 | 0.096 | -0.215 | 0.460 | 0.443 | -0.326 | -0.092 | -0.010 |
| 6 | 0.572 | 0.185 | 0.675 | 0.031 | 0.107 | 0.176 | -0.058 | -0.369 |
| 7 | 0.718 | 0.044 | 0.453 | -0.006 | -0.090 | -0.051 | 0.025 | 0.516 |
| 8 | 0.568 | 0.267 | -0.221 | -0.694 | 0.258 | -0.084 | -0.043 | -0.012 |

Extraction Method: Principal Component Analysis.
a. 8 components extracted.
Summing the squared component loadings across the components (columns) gives you the communality estimates for each item, and summing each squared loading down the items (rows) gives you the eigenvalue for each component. For example, to obtain the first eigenvalue we calculate:
$$(0.659)^2 + (-0.300)^2 + (-0.653)^2 + (0.720)^2 + (0.650)^2 + (0.572)^2 + (0.718)^2 + (0.568)^2 = 3.057$$
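Both directions of summing can be checked in a few lines; this sketch uses the first column and first row of the Component Matrix, plus the eigenvector element \(0.377\) discussed earlier:

```python
import math

# First column (Component 1 loadings) and first row (Item 1 loadings)
# of the Component Matrix
component1_loadings = [0.659, -0.300, -0.653, 0.720, 0.650, 0.572, 0.718, 0.568]
item1_loadings = [0.659, 0.136, -0.398, 0.160, -0.064, 0.568, -0.177, 0.068]

# Summing squared loadings down the items gives the first eigenvalue
eigenvalue_1 = sum(l**2 for l in component1_loadings)   # ~3.057

# Summing squared loadings across the components gives Item 1's
# communality, which is 1 in a full 8-component PCA
communality_1 = sum(l**2 for l in item1_loadings)       # ~1.0

# A loading is the eigenvector element times the square root of the eigenvalue
loading_1_1 = 0.377 * math.sqrt(eigenvalue_1)           # ~0.659
```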
You will get eight eigenvalues for eight components, which leads us to the next table.
Total Variance Explained in the 8-component PCA
Recall that the eigenvalue represents the total amount of variance that can be explained by a given principal component. Starting from the first component, each subsequent component is obtained from partialling out the previous component. Therefore the first component explains the most variance, and the last component explains the least. The Total Variance Explained table gives the total variance explained by each component. For example, the eigenvalue for Component 1 is \(3.057\), which is \(3.057/8 = 38.21\%\) of the total variance. Because we extracted the same number of components as the number of items, the Initial Eigenvalues column is the same as the Extraction Sums of Squared Loadings column.
Total Variance Explained (columns 2–4: Initial Eigenvalues; columns 5–7: Extraction Sums of Squared Loadings)

| Component | Total | % of Variance | Cumulative % | Total | % of Variance | Cumulative % |
|-----------|-------|---------------|--------------|-------|---------------|--------------|
| 1 | 3.057 | 38.206 | 38.206 | 3.057 | 38.206 | 38.206 |
| 2 | 1.067 | 13.336 | 51.543 | 1.067 | 13.336 | 51.543 |
| 3 | 0.958 | 11.980 | 63.523 | 0.958 | 11.980 | 63.523 |
| 4 | 0.736 | 9.205 | 72.728 | 0.736 | 9.205 | 72.728 |
| 5 | 0.622 | 7.770 | 80.498 | 0.622 | 7.770 | 80.498 |
| 6 | 0.571 | 7.135 | 87.632 | 0.571 | 7.135 | 87.632 |
| 7 | 0.543 | 6.788 | 94.420 | 0.543 | 6.788 | 94.420 |
| 8 | 0.446 | 5.580 | 100.000 | 0.446 | 5.580 | 100.000 |

Extraction Method: Principal Component Analysis.
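As a quick check, the % of Variance and Cumulative % columns can be reproduced directly from the eigenvalues (tiny discrepancies against the table come from the eigenvalues being printed to only three decimals):

```python
# Initial eigenvalues from the Total Variance Explained table
eigenvalues = [3.057, 1.067, 0.958, 0.736, 0.622, 0.571, 0.543, 0.446]
n_items = 8

# Each component's share of the total variance, which equals the item count
pct_variance = [ev / n_items * 100 for ev in eigenvalues]

# Running total down the components
cumulative_pct = []
running = 0.0
for p in pct_variance:
    running += p
    cumulative_pct.append(running)
```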
Since the goal of running a PCA is to reduce our set of variables, it would be useful to have a criterion for selecting the optimal number of components, which is of course smaller than the total number of items. One criterion is to choose components that have eigenvalues greater than 1. In the Total Variance Explained table, we see that the first two components have an eigenvalue greater than 1. This can be confirmed by the Scree Plot, which plots the eigenvalue (total variance explained) against the component number. Recall that we checked the Scree Plot option under Extraction – Display, so the scree plot should be produced automatically.
The first component will always have the highest total variance and the last component will always have the least, but where do we see the largest drop? If you look at Component 2, you will see an “elbow” joint. This marks the point where it is perhaps not too beneficial to continue further component extraction. There are some conflicting definitions of how to interpret the scree plot, but some say to take the number of components to the left of the “elbow”. Following this criterion we would pick only one component. A more subjective interpretation of the scree plot suggests that any number of components between 1 and 4 would be plausible, and further corroborative evidence would be helpful.
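The “largest drop” idea can be made concrete with a rough numeric proxy (our own illustration, not an SPSS output): compute the drop between each pair of adjacent eigenvalues and see where the biggest one falls.

```python
eigenvalues = [3.057, 1.067, 0.958, 0.736, 0.622, 0.571, 0.543, 0.446]

# Drop between each adjacent pair of eigenvalues
drops = [eigenvalues[i] - eigenvalues[i + 1] for i in range(len(eigenvalues) - 1)]

# The elbow sits right after the component with the largest drop (1-indexed),
# so "components to the left of the elbow" retains that many components
n_retained = drops.index(max(drops)) + 1
```

Here the largest drop (1.990) is between Components 1 and 2, so this proxy retains a single component, consistent with the stricter reading of the scree plot.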
Some criteria say that the total variance explained by all components should be between 70% and 80%, which in this case would mean about four to five components. The authors of the book note that this may be untenable for social science research, where extracted factors usually explain only 50% to 60% of the variance. Picking the number of components is a bit of an art and requires input from the whole research team. Let’s suppose we talked to the principal investigator and she believes that the two-component solution makes sense for the study, so we will proceed with the analysis.
Running the two-component PCA is just as easy as running the eight-component solution. The only difference is that under Fixed number of factors – Factors to extract you enter 2.
We will focus on the differences in the output between the eight- and two-component solutions. Under Total Variance Explained, we see that the Initial Eigenvalues no longer equal the Extraction Sums of Squared Loadings. The main difference is that there are now only two rows under Extraction Sums of Squared Loadings, and the cumulative percent of variance explained goes up to \(51.54\%\).
Total Variance Explained (columns 2–4: Initial Eigenvalues; columns 5–7: Extraction Sums of Squared Loadings)

| Component | Total | % of Variance | Cumulative % | Total | % of Variance | Cumulative % |
|-----------|-------|---------------|--------------|-------|---------------|--------------|
| 1 | 3.057 | 38.206 | 38.206 | 3.057 | 38.206 | 38.206 |
| 2 | 1.067 | 13.336 | 51.543 | 1.067 | 13.336 | 51.543 |
| 3 | 0.958 | 11.980 | 63.523 | | | |
| 4 | 0.736 | 9.205 | 72.728 | | | |
| 5 | 0.622 | 7.770 | 80.498 | | | |
| 6 | 0.571 | 7.135 | 87.632 | | | |
| 7 | 0.543 | 6.788 | 94.420 | | | |
| 8 | 0.446 | 5.580 | 100.000 | | | |

Extraction Method: Principal Component Analysis.
Similarly, you will see that the Component Matrix has the same loadings as the eight-component solution, but instead of eight columns it now has two.
Component Matrix

| Item | Component 1 | Component 2 |
|------|-------------|-------------|
| 1 | 0.659 | 0.136 |
| 2 | -0.300 | 0.866 |
| 3 | -0.653 | 0.409 |
| 4 | 0.720 | 0.119 |
| 5 | 0.650 | 0.096 |
| 6 | 0.572 | 0.185 |
| 7 | 0.718 | 0.044 |
| 8 | 0.568 | 0.267 |

Extraction Method: Principal Component Analysis.
a. 2 components extracted.
Again, we interpret Item 1 as having a correlation of 0.659 with Component 1. Glancing at the solution, we see that Item 4 has the highest correlation with Component 1 and Item 2 the lowest. Similarly, Item 2 has the highest correlation with Component 2 and Item 7 the lowest.
True or False
Answers: 1. T, 2. F (sum of squared loadings), 3. T
The communality is the sum of the squared component loadings up to the number of components you extract. In the SPSS output you will see a table of communalities.
Communalities

| Item | Initial | Extraction |
|------|---------|------------|
| 1 | 1.000 | 0.453 |
| 2 | 1.000 | 0.840 |
| 3 | 1.000 | 0.594 |
| 4 | 1.000 | 0.532 |
| 5 | 1.000 | 0.431 |
| 6 | 1.000 | 0.361 |
| 7 | 1.000 | 0.517 |
| 8 | 1.000 | 0.394 |

Extraction Method: Principal Component Analysis.
Since PCA is an iterative estimation process, it starts with 1 as an initial estimate of the communality (since this is the total variance across all 8 components), and then proceeds with the analysis until a final communality is extracted. Notice that the Extraction column is smaller than the Initial column because we only extracted two components. As an exercise, let’s manually calculate the first communality from the Component Matrix. The first ordered pair is \((0.659,0.136)\), representing the correlations of the first item with Component 1 and Component 2. Recall that squaring the loadings and summing across the components (columns) gives us the communality:
$$h^2_1 = (0.659)^2 + (0.136)^2 = 0.453$$
Going back to the Communalities table, if you sum down all 8 items (rows) of the Extraction column, you get \(4.123\). If you go back to the Total Variance Explained table and sum the first two eigenvalues, you also get \(3.057+1.067=4.124\) (the small difference is rounding). Is that surprising? Basically, it says that summing the communalities across all items is the same as summing the eigenvalues across all components.
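Both calculations are easy to reproduce, with values copied from the Component Matrix and Communalities tables of the two-component solution:

```python
# Item 1's loadings on the two retained components
h2_1 = 0.659**2 + 0.136**2                         # communality of Item 1, ~0.453

# Extraction communalities for all 8 items
communalities = [0.453, 0.840, 0.594, 0.532, 0.431, 0.361, 0.517, 0.394]

# Eigenvalues of the two retained components
eigenvalues_retained = [3.057, 1.067]

total_from_items = sum(communalities)              # summing down the items
total_from_components = sum(eigenvalues_retained)  # summing across components
```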
1. In a PCA, when would the communality for the Initial column be equal to the Extraction column?
Answer : When you run an 8-component PCA.
1. F, the eigenvalue is the total communality across all items for a single component, 2. T, 3. T, 4. F (you can only sum communalities across items, and sum eigenvalues across components, but if you do that they are equal).
The partitioning of variance differentiates a principal components analysis from what we call common factor analysis. Both methods try to reduce the dimensionality of the dataset down to fewer unobserved variables, but whereas PCA assumes that common variance takes up all of the total variance, common factor analysis assumes that total variance can be partitioned into common and unique variance. It is usually more reasonable to assume that you have not measured your set of items perfectly. The unobserved or latent variable that makes up common variance is called a factor, hence the name factor analysis. The other main difference between PCA and factor analysis lies in the goal of your analysis. If your goal is simply to reduce your variable list down into a linear combination of smaller components, then PCA is the way to go. However, if you believe there is some latent construct that defines the interrelationship among items, then factor analysis may be more appropriate. In this case, we assume that there is a construct called SPSS Anxiety that explains why you see a correlation among all the items on the SAQ-8. We acknowledge, however, that SPSS Anxiety cannot explain all the shared variance among items in the SAQ, so we model the unique variance as well. Based on the results of the PCA, we will start with a two-factor extraction.
To run a factor analysis, use the same steps as running a PCA (Analyze – Dimension Reduction – Factor) except under Method choose Principal axis factoring. Note that we continue to set Maximum Iterations for Convergence at 100 and we will see why later.
Pasting the syntax into the SPSS Syntax Editor we get:
Note the main difference is that under /EXTRACTION we list PAF for Principal Axis Factoring instead of PC for Principal Components. We will get three tables of output: Communalities, Total Variance Explained, and Factor Matrix. Let’s go over each of these and compare them to the PCA output.
Communalities

| Item | Initial | Extraction |
|------|---------|------------|
| 1 | 0.293 | 0.437 |
| 2 | 0.106 | 0.052 |
| 3 | 0.298 | 0.319 |
| 4 | 0.344 | 0.460 |
| 5 | 0.263 | 0.344 |
| 6 | 0.277 | 0.309 |
| 7 | 0.393 | 0.851 |
| 8 | 0.192 | 0.236 |

Extraction Method: Principal Axis Factoring.
The most striking difference between this Communalities table and the one from the PCA is that the initial communalities are no longer 1. Recall that for a PCA we assume the total variance is completely taken up by the common variance or communality, and therefore we pick 1 as our best initial guess. Instead of guessing 1 as the initial communality, principal axis factoring uses the squared multiple correlation coefficient \(R^2\). To see this in action for Item 1, run a linear regression where Item 1 is the dependent variable and Items 2–8 are independent variables. Go to Analyze – Regression – Linear and enter q01 under Dependent and q02 to q08 under Independent(s).
Pasting the syntax into the Syntax Editor gives us:
The output we obtain from this analysis is
Model Summary

| Model | R | R Square | Adjusted R Square | Std. Error of the Estimate |
|-------|-------|----------|-------------------|----------------------------|
| 1 | 0.541 | 0.293 | 0.291 | 0.697 |
Note that 0.293 matches the initial communality estimate for Item 1. We could run eight more linear regressions to get all eight communality estimates, but SPSS already does that for us. Like PCA, factor analysis also uses an iterative estimation process to obtain the final estimates under the Extraction column. Finally, summing all the rows of the Extraction column, we get 3.01. This represents the total common variance shared among all items for a two-factor solution.
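Under the hood, all eight initial communalities can be obtained at once because the squared multiple correlation of each item with the rest can be read off the inverse of the correlation matrix, via \(SMC_i = 1 - 1/(R^{-1})_{ii}\). Here is a sketch with a small made-up correlation matrix (illustrative values, not the SAQ-8 data):

```python
import numpy as np

# Illustrative 3-variable correlation matrix (made-up, not the SAQ-8)
R = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.4],
              [0.3, 0.4, 1.0]])

# SMC of each variable with the remaining variables, all at once
smc = 1 - 1 / np.diag(np.linalg.inv(R))

# Cross-check for variable 1 by explicit regression on variables 2 and 3:
# R^2 = r' R22^{-1} r, where r holds variable 1's correlations with the rest
r = R[0, 1:]
R22 = R[1:, 1:]
r2_from_regression = r @ np.linalg.inv(R22) @ r
```

The two computations agree exactly, which is why running a separate regression for each item is unnecessary.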
The next table we will look at is Total Variance Explained. Comparing this to the table from the PCA, we notice that the Initial Eigenvalues are exactly the same and include 8 rows, one for each “factor”. In fact, SPSS simply borrows this information from the PCA analysis for use in the factor analysis, so the factors in the Initial Eigenvalues column are actually components. The main difference now is in the Extraction Sums of Squared Loadings. We notice that each corresponding row in the Extraction column is lower than in the Initial column. This is expected because we assume that total variance can be partitioned into common and unique variance, which means the common variance explained will be lower. Factor 1 explains 31.38% of the variance, whereas Factor 2 explains 6.24%. Just as in PCA, the more factors you extract, the less variance is explained by each successive factor.
Total Variance Explained (columns 2–4: Initial Eigenvalues; columns 5–7: Extraction Sums of Squared Loadings)

| Factor | Total | % of Variance | Cumulative % | Total | % of Variance | Cumulative % |
|--------|-------|---------------|--------------|-------|---------------|--------------|
| 1 | 3.057 | 38.206 | 38.206 | 2.511 | 31.382 | 31.382 |
| 2 | 1.067 | 13.336 | 51.543 | 0.499 | 6.238 | 37.621 |
| 3 | 0.958 | 11.980 | 63.523 | | | |
| 4 | 0.736 | 9.205 | 72.728 | | | |
| 5 | 0.622 | 7.770 | 80.498 | | | |
| 6 | 0.571 | 7.135 | 87.632 | | | |
| 7 | 0.543 | 6.788 | 94.420 | | | |
| 8 | 0.446 | 5.580 | 100.000 | | | |

Extraction Method: Principal Axis Factoring.
A subtle note that may be easily overlooked is that when SPSS plots the scree plot or applies the eigenvalues-greater-than-1 criterion (Analyze – Dimension Reduction – Factor – Extraction), it bases these on the Initial solution, not the Extraction solution. This is important because this criterion assumes no unique variance, as in PCA, which means that this is the total variance explained, not accounting for specific or measurement error. Note that in the Extraction Sums of Squared Loadings column the second factor has an eigenvalue that is less than 1, but it is still retained because the Initial value is 1.067. If you want to apply this criterion to the common variance explained, you would need to modify the criterion yourself.
Answers: 1. When there is no unique variance (PCA assumes this whereas common factor analysis does not, so this is in theory and not in practice), 2. F, it uses the initial PCA solution and the eigenvalues assume no unique variance.
Factor Matrix

| Item | Factor 1 | Factor 2 |
|------|----------|----------|
| 1 | 0.588 | -0.303 |
| 2 | -0.227 | 0.020 |
| 3 | -0.557 | 0.094 |
| 4 | 0.652 | -0.189 |
| 5 | 0.560 | -0.174 |
| 6 | 0.498 | 0.247 |
| 7 | 0.771 | 0.506 |
| 8 | 0.470 | -0.124 |

Extraction Method: Principal Axis Factoring.
a. 2 factors extracted. 79 iterations required.
First note the annotation that 79 iterations were required. If we had simply used the default of 25 iterations in SPSS, we would not have obtained an optimal solution. This is why, in practice, it is always good to increase the maximum number of iterations. Now let’s get into the table itself. The elements of the Factor Matrix table are called loadings and represent the correlation of each item with the corresponding factor. Just as in PCA, squaring each loading and summing down the items (rows) gives the total variance explained by each factor. Note that these sums are no longer called eigenvalues as in PCA. Let’s calculate this for Factor 1:
$$(0.588)^2 + (-0.227)^2 + (-0.557)^2 + (0.652)^2 + (0.560)^2 + (0.498)^2 + (0.771)^2 + (0.470)^2 = 2.51$$
This number matches the first row under the Extraction column of the Total Variance Explained table. We can repeat this for Factor 2 and get matching results for the second row. Additionally, we can get the communality estimates by summing the squared loadings across the factors (columns) for each item. For example, for Item 1:
$$(0.588)^2 + (-0.303)^2 = 0.437$$
Note that these results match the value of the Communalities table for Item 1 under the Extraction column. This means that the sum of squared loadings across factors represents the communality estimates for each item.
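Both checks generalize to the whole Factor Matrix; a short sketch using the loadings from the table above:

```python
# Two-factor PAF loadings from the Factor Matrix table
factor1 = [0.588, -0.227, -0.557, 0.652, 0.560, 0.498, 0.771, 0.470]
factor2 = [-0.303, 0.020, 0.094, -0.189, -0.174, 0.247, 0.506, -0.124]

# Sum of squared loadings down the items: variance explained by each factor
ssl_factor1 = sum(l**2 for l in factor1)   # ~2.511
ssl_factor2 = sum(l**2 for l in factor2)   # ~0.499

# Sum of squared loadings across the factors: communality of each item
communalities = [a**2 + b**2 for a, b in zip(factor1, factor2)]
```

Note that summing the communalities over items necessarily equals summing the sums of squared loadings over factors: both just add up every squared element of the matrix.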
To see the relationships among the three tables, let’s start from the Factor Matrix (or Component Matrix in PCA). We will use the term factor to represent components in PCA as well. These elements represent the correlation of each item with each factor. Now, square each element to obtain the squared loadings, i.e., the proportion of variance explained by each factor for each item. Summing the squared loadings across factors gives the proportion of variance explained by all factors in the model. This is known as common variance or communality, hence the result is the Communalities table. Going back to the Factor Matrix, if you square the loadings and sum down the items you get the Sums of Squared Loadings (in PAF) or eigenvalues (in PCA) for each factor. These become elements of the Total Variance Explained table. Summing down the factors under the Extraction column, we get \(2.511 + 0.499 = 3.01\), the total (common) variance explained. In words, this is the total (common) variance explained by the two-factor solution for all eight items. Equivalently, since the Communalities table represents the total common variance explained by both factors for each item, summing down the items in the Communalities table also gives you the total (common) variance explained, in this case
$$0.437 + 0.052 + 0.319 + 0.460 + 0.344 + 0.309 + 0.851 + 0.236 = 3.01$$
which is the same result we obtained from the Total Variance Explained table. Here is a table that may help clarify what we’ve talked about:
In summary:
True or False (the following assumes a two-factor Principal Axis Factor solution with 8 items)
Answers: 1. T, 2. F, the sum of the squared elements across both factors, 3. T, 4. T, 5. F, sum all eigenvalues from the Extraction column of the Total Variance Explained table, 6. F, the total Sums of Squared Loadings represents only the total common variance excluding unique variance, 7. F, eigenvalues are only applicable for PCA.
Since this is a non-technical introduction to factor analysis, we won’t go into detail about the differences between Principal Axis Factoring (PAF) and Maximum Likelihood (ML). The main concept to know is that ML is also a common factor model that uses the \(R^2\) to obtain initial estimates of the communalities, but it uses a different iterative process to obtain the extraction solution. To run a factor analysis using maximum likelihood estimation, under Analyze – Dimension Reduction – Factor – Extraction – Method choose Maximum Likelihood.
Although the initial communalities are the same between PAF and ML, the final extraction loadings will be different, which means you will have different Communalities, Total Variance Explained, and Factor Matrix tables (although the Initial columns will overlap). The other main difference is that you will obtain a Goodness-of-fit Test table, which gives an absolute test of model fit. Non-significant values suggest a good-fitting model. Here the p-value is less than 0.05, so we reject the two-factor model.
Goodness-of-fit Test

| Chi-Square | df | Sig. |
|------------|----|------|
| 198.617 | 13 | 0.000 |
In practice, you would obtain chi-square values for multiple factor analysis runs, which we tabulate below from 1 to 8 factors. The table shows the number of factors extracted (or attempted to extract) as well as the chi-square, degrees of freedom, p-value, and iterations needed to converge. Note that as you increase the number of factors, the chi-square value and degrees of freedom decrease but the iterations needed and p-value increase. Practically, you want to make sure the number of iterations you specify exceeds the iterations needed. Additionally, NS means no solution and N/A means not applicable. In SPSS, no solution is obtained when you run 5 to 7 factors because the degrees of freedom are negative (which cannot happen). The eight-factor solution is not even applicable in SPSS because it will spew out a warning that “You cannot request as many factors as variables with any extraction method except PC. The number of factors will be reduced by one.” This means that if you try to extract an eight-factor solution for the SAQ-8, it will default back to the seven-factor solution. Now that we understand the table, let’s see if we can find the threshold at which the absolute fit indicates a good-fitting model. It looks like the p-value becomes non-significant at a three-factor solution. Note that this differs from the eigenvalues-greater-than-1 criterion, which chose 2 factors, and from the percent of variance explained criterion, by which you would choose 4–5 factors. We talk to the Principal Investigator, and at this point we still prefer the two-factor solution. Note that there is no “right” answer in picking the best factor model, only what makes sense for your theory. We will talk about interpreting the factor loadings when we talk about factor rotation, to further guide us in choosing the correct number of factors.
| Number of Factors | Chi-square | Df | p-value | Iterations needed |
|-------------------|------------|----|---------|-------------------|
| 1 | 553.08 | 20 | < 0.05 | 4 |
| 2 | 198.62 | 13 | < 0.05 | 39 |
| 3 | 13.81 | 7 | 0.055 | 57 |
| 4 | 1.386 | 2 | 0.5 | 168 |
| 5 | NS | -2 | NS | NS |
| 6 | NS | -5 | NS | NS |
| 7 | NS | -7 | NS | NS |
| 8 | N/A | N/A | N/A | N/A |
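The Df column follows the standard degrees-of-freedom count for a maximum likelihood factor model with \(p\) items and \(m\) factors, \(df = [(p-m)^2 - (p+m)]/2\), which is why the solution breaks down once \(df\) goes negative at five factors:

```python
def ml_factor_df(p: int, m: int) -> int:
    """Degrees of freedom of the chi-square fit test for m factors on p items."""
    return ((p - m)**2 - (p + m)) // 2

# SAQ-8: p = 8 items, m = 1..7 factors
dfs = [ml_factor_df(8, m) for m in range(1, 8)]
```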
Answers: 1. T, 2. F, the two use the same starting communalities but a different estimation process to obtain extraction loadings, 3. F, only Maximum Likelihood gives you chi-square values, 4. F, you can extract as many components as items in PCA, but SPSS will only extract up to the total number of items minus 1, 5. F, greater than 0.05, 6. T, we are taking away degrees of freedom but extracting more factors.
As we mentioned before, the main difference between common factor analysis and principal components is that factor analysis assumes total variance can be partitioned into common and unique variance, whereas principal components assumes common variance takes up all of total variance (i.e., no unique variance). For both methods, when you assume total variance is 1, the common variance becomes the communality. The communality is unique to each item, so if you have 8 items, you will obtain 8 communalities; and it represents the common variance explained by the factors or components. However in the case of principal components, the communality is the total variance of each item, and summing all 8 communalities gives you the total variance across all items. In contrast, common factor analysis assumes that the communality is a portion of the total variance, so that summing up the communalities represents the total common variance and not the total variance. In summary, for PCA, total common variance is equal to total variance explained , which in turn is equal to the total variance, but in common factor analysis, total common variance is equal to total variance explained but does not equal total variance.
The following applies to the SAQ-8 when theoretically extracting 8 components or factors for 8 items:
Answers: 1. T, 2. F, the total variance for each item, 3. T, 4. F, communality is unique to each item (shared across components or factors), 5. T, 6. T.
After deciding on the number of factors to extract and which analysis model to use, the next step is to interpret the factor loadings. Factor rotations help us interpret factor loadings. There are two general types of rotations: orthogonal and oblique.
The goal of factor rotation is to improve the interpretability of the factor solution by reaching simple structure.
Without rotation, the first factor is the most general factor onto which most items load and explains the largest amount of variance. This may not be desired in all cases. Suppose you wanted to know how well a set of items load on each factor; simple structure helps us to achieve this.
The definition of simple structure is that in a factor loading matrix:
For every pair of factors (columns),
The following table is an example of simple structure with three factors:
| Item | Factor 1 | Factor 2 | Factor 3 |
|------|----------|----------|----------|
| 1 | 0.8 | 0 | 0 |
| 2 | 0.8 | 0 | 0 |
| 3 | 0.8 | 0 | 0 |
| 4 | 0 | 0.8 | 0 |
| 5 | 0 | 0.8 | 0 |
| 6 | 0 | 0.8 | 0 |
| 7 | 0 | 0 | 0.8 |
| 8 | 0 | 0 | 0.8 |
Let’s go down the checklist of criteria to see why it satisfies simple structure:
An easier set of criteria from Pedhazur and Schmelkin (1991) states that
For the following factor matrix, explain why it does not conform to simple structure using both the conventional and Pedhazur test.
| Item | Factor 1 | Factor 2 | Factor 3 |
|------|----------|----------|----------|
| 1 | 0.8 | 0 | 0.8 |
| 2 | 0.8 | 0 | 0.8 |
| 3 | 0.8 | 0 | 0 |
| 4 | 0.8 | 0 | 0 |
| 5 | 0 | 0.8 | 0.8 |
| 6 | 0 | 0.8 | 0.8 |
| 7 | 0 | 0.8 | 0.8 |
| 8 | 0 | 0.8 | 0 |
Solution: Using the conventional test, although Criteria 1 and 2 are satisfied (each row has at least one zero, and each column has at least three zeros), Criterion 3 fails because for Factors 2 and 3, only 3/8 rows have a zero on one factor and a non-zero loading on the other. Additionally, for Factors 2 and 3, only Items 5 through 7 have non-zero loadings on both, i.e., 3/8 rows have non-zero coefficients (failing Criteria 4 and 5 simultaneously). Using the Pedhazur method, Items 1, 2, 5, 6, and 7 have high loadings on two factors (failing the first criterion), and Factor 3 has high loadings on a majority, 5/8, of the items (failing the second criterion).
We know that the goal of factor rotation is to rotate the factor matrix so that it approaches simple structure, in order to improve interpretability. Orthogonal rotation assumes that the factors are not correlated. The benefit of doing an orthogonal rotation is that the loadings are simple correlations of items with factors, and the standardized solution can estimate the unique contribution of each factor. The most common type of orthogonal rotation is Varimax rotation. We will walk through how to do this in SPSS.
The steps to running a two-factor Principal Axis Factoring are the same as before (Analyze – Dimension Reduction – Factor – Extraction), except that under Rotation – Method we check Varimax. Make sure under Display to check Rotated Solution and Loading plot(s), and under Maximum Iterations for Convergence enter 100.
Pasting the syntax into the SPSS editor you obtain:
Let’s first talk about which tables are the same as or different from running a PAF with no rotation. First, we know that the unrotated factor matrix (Factor Matrix table) should be the same. Additionally, since the common variance explained by both factors should be the same, the Communalities table should be the same. The main difference is that we ran a rotation, so we should get the rotated solution (Rotated Factor Matrix) as well as the transformation used to obtain the rotation (Factor Transformation Matrix). Finally, although the total variance explained by all factors stays the same, the total variance explained by each factor will be different.
Rotated Factor Matrix

| Item | Factor 1 | Factor 2 |
|------|----------|----------|
| 1 | 0.646 | 0.139 |
| 2 | -0.188 | -0.129 |
| 3 | -0.490 | -0.281 |
| 4 | 0.624 | 0.268 |
| 5 | 0.544 | 0.221 |
| 6 | 0.229 | 0.507 |
| 7 | 0.275 | 0.881 |
| 8 | 0.442 | 0.202 |

Extraction Method: Principal Axis Factoring. Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 3 iterations.
The Rotated Factor Matrix table tells us what the factor loadings look like after rotation (in this case, Varimax). Kaiser normalization is a method to obtain stability of solutions across samples: before rotation, each item’s loadings are rescaled so that equal weight is given to all items when performing the rotation, and after rotation the loadings are rescaled back to their proper size. The only drawback is that if the communality is low for a particular item, Kaiser normalization will weight that item equally with high-communality items. As such, Kaiser normalization is preferred when communalities are high across all items. You can turn off Kaiser normalization by specifying
Here is what the Varimax rotated loadings look like without Kaiser normalization. Compared to the rotated factor matrix with Kaiser normalization, the patterns look similar if you flip Factors 1 and 2; the flip itself may be an artifact of the rescaling. Another possible reason for the remaining differences may be the low communalities for Item 2 (0.052) and Item 8 (0.236), since Kaiser normalization weights these items equally with the other, high-communality items.
Rotated Factor Matrix
| Item | Factor 1 | Factor 2 |
| 1 | 0.207 | 0.628 |
| 2 | -0.148 | -0.173 |
| 3 | -0.331 | -0.458 |
| 4 | 0.332 | 0.592 |
| 5 | 0.277 | 0.517 |
| 6 | 0.528 | 0.174 |
| 7 | 0.905 | 0.180 |
| 8 | 0.248 | 0.418 |
Extraction Method: Principal Axis Factoring. Rotation Method: Varimax without Kaiser Normalization.
a. Rotation converged in 3 iterations.
In the table above, absolute loadings higher than 0.4 are highlighted in blue for Factor 1 and in red for Factor 2. We can see that Items 6 and 7 load highly onto Factor 1 and Items 1, 3, 4, 5, and 8 load highly onto Factor 2. Item 2 does not seem to load highly on any factor. Looking more closely at Item 6 “My friends are better at statistics than me” and Item 7 “Computers are useful only for playing games”, we don’t see a clear construct that defines the two. Item 2, “I don’t understand statistics”, may be too general an item and isn’t captured by SPSS Anxiety. It’s debatable at this point whether to retain a two-factor or one-factor solution; at the very minimum, we should see whether Item 2 is a candidate for deletion.
The Factor Transformation Matrix tells us how the Factor Matrix was rotated. In SPSS, you will see a matrix with two rows and two columns because we have two factors.
Factor Transformation Matrix
| Factor | 1 | 2 |
| 1 | 0.773 | 0.635 |
| 2 | -0.635 | 0.773 |
Extraction Method: Principal Axis Factoring. Rotation Method: Varimax with Kaiser Normalization.
How do we interpret this matrix? We can see it as the way to move from the Factor Matrix to the Rotated Factor Matrix. From the Factor Matrix we know that the loading of Item 1 on Factor 1 is \(0.588\) and the loading of Item 1 on Factor 2 is \(-0.303\), which gives us the pair \((0.588,-0.303)\); in the Rotated Factor Matrix the new pair is \((0.646,0.139)\). How do we obtain this transformed pair of values? We do what’s called matrix multiplication: take one column of the Factor Transformation Matrix, view it as another ordered pair, multiply the matching elements of the two pairs, and add. To get the first element, we multiply the ordered pair in the Factor Matrix \((0.588,-0.303)\) with the ordered pair \((0.773,-0.635)\) in the first column of the Factor Transformation Matrix.
$$(0.588)(0.773)+(-0.303)(-0.635)=0.455+0.192=0.647.$$
To get the second element, we multiply the ordered pair in the Factor Matrix \((0.588,-0.303)\) with the ordered pair \((0.635,0.773)\) from the second column of the Factor Transformation Matrix:
$$(0.588)(0.635)+(-0.303)(0.773)=0.373-0.234=0.139.$$
Voila! We have obtained the new transformed pair with some rounding error. The figure below summarizes the steps we used to perform the transformation.
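If you prefer to check this arithmetic outside of SPSS, the same transformation can be sketched in a few lines of Python with NumPy (the loadings and transformation matrix are copied from the output above; this is an illustration, not part of the SPSS workflow):

```python
import numpy as np

# Unrotated loadings of Item 1 from the Factor Matrix (Factor 1, Factor 2)
unrotated = np.array([0.588, -0.303])

# Factor Transformation Matrix reported by SPSS
T = np.array([[0.773, 0.635],
              [-0.635, 0.773]])

# Row vector times transformation matrix gives the rotated loadings
rotated = unrotated @ T
print(rotated.round(3))  # approximately [0.647, 0.139]

# The angle of rotation is the inverse cosine of the diagonal element
angle = np.degrees(np.arccos(T[0, 0]))
print(round(angle, 1))  # 39.4
```

Multiplying the whole unrotated loading matrix by T in this way reproduces the entire Rotated Factor Matrix at once.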
The Factor Transformation Matrix can also tell us the angle of rotation if we take the inverse cosine of the diagonal element. In this case, the angle of rotation is \(\cos^{-1}(0.773) = 39.4^{\circ}\). In the factor loading plot, you can see what that angle of rotation looks like, starting from \(0^{\circ}\) and rotating counterclockwise by \(39.4^{\circ}\). Notice here that the newly rotated x- and y-axes are still at \(90^{\circ}\) angles from one another, hence the name orthogonal (in a non-orthogonal or oblique rotation, the new axes are no longer \(90^{\circ}\) apart). The points do not move in relation to the axes but rotate with them.
The Total Variance Explained table contains the same columns as the PAF solution with no rotation, but adds another set of columns called “Rotation Sums of Squared Loadings”. This makes sense because if our rotated Factor Matrix is different, the square of the loadings should be different, and hence the Sum of Squared loadings will be different for each factor. However, if you sum the Sums of Squared Loadings across all factors for the Rotation solution,
$$ 1.701 + 1.309 = 3.01$$
and for the unrotated solution,
$$ 2.511 + 0.499 = 3.01,$$
you will see that the two sums are the same. This is because rotation does not change the total common variance. Looking at the Rotation Sums of Squared Loadings for Factor 1, it still has the largest total variance, but now that shared variance is split more evenly.
Total Variance Explained
| Factor | Rotation Sums of Squared Loadings: Total | % of Variance | Cumulative % |
| 1 | 1.701 | 21.258 | 21.258 |
| 2 | 1.309 | 16.363 | 37.621 |
Extraction Method: Principal Axis Factoring.
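This invariance is easy to verify numerically. As a sketch outside of SPSS (Python with NumPy; the loadings are copied from the Rotated Factor Matrix above), squaring the rotated loadings and summing down the items reproduces the Rotation Sums of Squared Loadings, and their total matches the unrotated total:

```python
import numpy as np

# Varimax-rotated loadings (Rotated Factor Matrix with Kaiser normalization)
rotated = np.array([
    [0.646,  0.139],
    [-0.188, -0.129],
    [-0.490, -0.281],
    [0.624,  0.268],
    [0.544,  0.221],
    [0.229,  0.507],
    [0.275,  0.881],
    [0.442,  0.202],
])

# Sum of squared loadings, per factor (summing down the items)
ss_loadings = (rotated ** 2).sum(axis=0)
print(ss_loadings)  # close to [1.701, 1.309], up to rounding in the table

# The total matches the unrotated solution's 2.511 + 0.499 = 3.01
print(ss_loadings.sum())
```

The small discrepancies in the third decimal come from the table loadings being rounded to three places.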
Varimax rotation is the most popular orthogonal rotation, but only one among several. Varimax maximizes the variance of the squared loadings within each factor, which amplifies the differences between high and low loadings on a particular factor: higher loadings are made higher while lower loadings are made lower. This makes Varimax rotation good for achieving simple structure but not as good for detecting an overall factor, because it splits the variance of major factors among lesser ones. Quartimax may be a better choice for detecting an overall factor; it maximizes the variance of the squared loadings across the factors for each item, so that each item loads most strongly onto a single factor.
Here is the output of the Total Variance Explained table juxtaposed side-by-side for Varimax versus Quartimax rotation.
Total Variance Explained
| Factor | Quartimax Total | Varimax Total |
| 1 | 2.381 | 1.701 |
| 2 | 0.629 | 1.309 |
Extraction Method: Principal Axis Factoring.
You will see that whereas Varimax distributes the variance relatively evenly across both factors, Quartimax consolidates more of the variance into the first factor.
Equamax is a hybrid of Varimax and Quartimax, but because of this may behave erratically and according to Pett et al. (2003), is not generally recommended.
In oblique rotation, the factors are no longer orthogonal to each other (the x and y axes are no longer at \(90^{\circ}\) to each other). Like orthogonal rotation, the goal is to rotate the reference axes about the origin to achieve a simpler and more meaningful factor solution compared to the unrotated solution. With an oblique rotation, you will see three unique tables in the SPSS output:
Suppose the Principal Investigator hypothesizes that the two factors are correlated, and wishes to test this assumption. Let’s proceed with one of the most common types of oblique rotations in SPSS, Direct Oblimin.
The steps to running a Direct Oblimin are the same as before (Analyze – Dimension Reduction – Factor – Extraction), except that under Rotation – Method we check Direct Oblimin. The other parameter we have to put in is delta, which defaults to zero. Technically, when delta = 0 this is known as Direct Quartimin. Larger positive values for delta increase the correlation among factors; in general, however, you don’t want the correlations to be too high or else there is no reason to split your factors up. In fact, SPSS caps the delta value at 0.8 (the cap for negative values is -9999). Negative delta values push the solution toward orthogonality. For the purposes of this analysis, we will leave delta = 0 and do a Direct Quartimin analysis.
All the questions below pertain to Direct Oblimin in SPSS.
Answers: 1. T, 2. F, larger delta values, 3. F, delta leads to higher factor correlations, in general you don’t want factors to be too highly correlated
The factor pattern matrix represents partial standardized regression coefficients of each item on a particular factor. For example, \(0.740\) is the effect of Factor 1 on Item 1 controlling for Factor 2, and \(-0.137\) is the effect of Factor 2 on Item 1 controlling for Factor 1. Just as in orthogonal rotation, the square of a loading represents the contribution of the factor to the variance of the item, but now excluding the overlap between correlated factors. Factor 1 uniquely contributes \((0.740)^2=0.548=54.8\%\) of the variance in Item 1 (controlling for Factor 2), and Factor 2 uniquely contributes \((-0.137)^2=0.019=1.9\%\) of the variance in Item 1 (controlling for Factor 1).
Pattern Matrix
| Item | Factor 1 | Factor 2 |
| 1 | 0.740 | -0.137 |
| 2 | -0.180 | -0.067 |
| 3 | -0.490 | -0.108 |
| 4 | 0.660 | 0.029 |
| 5 | 0.580 | 0.011 |
| 6 | 0.077 | 0.504 |
| 7 | -0.017 | 0.933 |
| 8 | 0.462 | 0.036 |
Extraction Method: Principal Axis Factoring. Rotation Method: Oblimin with Kaiser Normalization.
a. Rotation converged in 5 iterations.
The factor structure matrix represents the simple zero-order correlations of the items with each factor (it’s as if you ran a simple regression of the outcome on a single factor). For example, \(0.653\) is the simple correlation of Factor 1 with Item 1 and \(0.333\) is the simple correlation of Factor 2 with Item 1. The more correlated the factors, the greater the difference between the pattern and structure matrices and the more difficult it is to interpret the factor loadings. From this we can see that Items 1, 3, 4, 5, and 8 load highly onto Factor 1 and Items 6 and 7 load highly onto Factor 2. Item 2 doesn’t seem to load well on either factor.
Additionally, we can look at the variance explained by each factor not controlling for the other factor. For example, Factor 1 contributes \((0.653)^2=0.426=42.6\%\) of the variance in Item 1, and Factor 2 contributes \((0.333)^2=0.111=11.1\%\). Notice that the contribution of Factor 2 is higher (\(11.1\%\) vs. \(1.9\%\)) because in the Pattern Matrix we controlled for the effect of Factor 1, whereas in the Structure Matrix we did not.
Structure Matrix
| Item | Factor 1 | Factor 2 |
| 1 | 0.653 | 0.333 |
| 2 | -0.222 | -0.181 |
| 3 | -0.559 | -0.420 |
| 4 | 0.678 | 0.449 |
| 5 | 0.587 | 0.380 |
| 6 | 0.398 | 0.553 |
| 7 | 0.577 | 0.923 |
| 8 | 0.485 | 0.330 |
Extraction Method: Principal Axis Factoring. Rotation Method: Oblimin with Kaiser Normalization.
Recall that the more correlated the factors, the more difference between pattern and structure matrix and the more difficult to interpret the factor loadings. In our case, Factor 1 and Factor 2 are pretty highly correlated, which is why there is such a big difference between the factor pattern and factor structure matrices.
Factor Correlation Matrix
| Factor | 1 | 2 |
| 1 | 1.000 | 0.636 |
| 2 | 0.636 | 1.000 |
Extraction Method: Principal Axis Factoring. Rotation Method: Oblimin with Kaiser Normalization.
The difference between an orthogonal and an oblique rotation is that the factors in an oblique rotation are correlated. This means that in addition to the angle of axis rotation \(\theta\), we have to account for the angle of correlation \(\phi\). The angle of axis rotation is defined as the angle between the rotated and unrotated axes (the blue and black axes in the plot). From the Factor Correlation Matrix, we know that the correlation is \(0.636\), so the angle of correlation is \(\cos^{-1}(0.636) = 50.5^{\circ}\), which is the angle between the two rotated axes (the blue x- and y-axes). The sum of the rotations \(\theta\) and \(\phi\) is the total angle of rotation. We are not given the angle of axis rotation, so we only know that the total angle of rotation is \(\theta + \phi = \theta + 50.5^{\circ}\).
The structure matrix is in fact a derivative of the pattern matrix: if you multiply the pattern matrix by the factor correlation matrix, you get back the factor structure matrix. Let’s take the example of the ordered pair \((0.740,-0.137)\) from the Pattern Matrix, which represents the partial standardized regression coefficients of Item 1 on Factors 1 and 2 respectively. Performing matrix multiplication with the first column of the Factor Correlation Matrix, we get
$$ (0.740)(1) + (-0.137)(0.636) = 0.740 - 0.087 = 0.653.$$
Similarly, we multiply the ordered pair with the second column of the Factor Correlation Matrix to get:
$$ (0.740)(0.636) + (-0.137)(1) = 0.471 - 0.137 = 0.334. $$
Looking at the first row of the Structure Matrix we get \((0.653, 0.333)\), which matches our calculation up to rounding! This neat fact can be depicted with the following figure:
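The same multiplication can be carried out for the whole matrix at once. A sketch outside of SPSS (Python with NumPy; the Pattern and Factor Correlation matrices are copied from the tables above):

```python
import numpy as np

# Full Pattern Matrix from the Direct Quartimin solution
pattern = np.array([
    [0.740, -0.137],
    [-0.180, -0.067],
    [-0.490, -0.108],
    [0.660,  0.029],
    [0.580,  0.011],
    [0.077,  0.504],
    [-0.017, 0.933],
    [0.462,  0.036],
])

# Factor Correlation Matrix
phi = np.array([[1.000, 0.636],
                [0.636, 1.000]])

# Structure Matrix = Pattern Matrix times Factor Correlation Matrix
structure = pattern @ phi
print(structure.round(3))
# first row is approximately (0.653, 0.334); the table shows 0.333
# because the reported loadings are themselves rounded
```

Every row reproduces the corresponding row of the Structure Matrix to within rounding error.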
As a quick aside, suppose that the factors are orthogonal, so that the factor correlation matrix has 1s on the diagonal and zeros on the off-diagonal. A quick calculation with the ordered pair \((0.740,-0.137)\) gives
$$ (0.740)(1) + (-0.137)(0) = 0.740$$
and similarly,
$$ (0.740)(0) + (-0.137)(1) = -0.137$$
and you get back the same ordered pair. This is called multiplying by the identity matrix (think of it as multiplying a number by 1: \(2 \times 1 = 2\)).
Answers: 1. Decrease the delta value so that the correlation between factors approaches zero. 2. T, the factors will become more nearly orthogonal and hence the pattern and structure matrices will be closer.
The column Extraction Sums of Squared Loadings is the same as in the unrotated solution, but we have an additional column known as Rotation Sums of Squared Loadings. SPSS itself notes that “when factors are correlated, sums of squared loadings cannot be added to obtain total variance”. You will note that, compared to the Extraction Sums of Squared Loadings, the Rotation Sums of Squared Loadings is only slightly lower for Factor 1 but much higher for Factor 2. This is because, unlike in orthogonal rotation, these are no longer the unique contributions of Factor 1 and Factor 2. How does SPSS obtain the Rotation Sums of Squared Loadings? It squares the Structure Matrix and sums down the items.
Total Variance Explained
| Factor | Extraction SS Loadings: Total | % of Variance | Cumulative % | Rotation SS Loadings: Total(a) |
| 1 | 2.511 | 31.382 | 31.382 | 2.318 |
| 2 | 0.499 | 6.238 | 37.621 | 1.931 |
Extraction Method: Principal Axis Factoring.
a. When factors are correlated, sums of squared loadings cannot be added to obtain a total variance.
As a demonstration, let’s obtain the sum of squared loadings from the Structure Matrix for Factor 1:
$$ (0.653)^2 + (-0.222)^2 + (-0.559)^2 + (0.678)^2 + (0.587)^2 + (0.398)^2 + (0.577)^2 + (0.485)^2 = 2.318.$$
Note that \(2.318\) matches the Rotation Sums of Squared Loadings for the first factor (up to rounding). This means that the Rotation Sums of Squared Loadings represent the non-unique contribution of each factor to the total common variance; summing these squared loadings across all factors can therefore give an estimate that is greater than the total common variance.
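A quick numerical sketch (Python with NumPy, with the loadings copied from the Structure Matrix above) reproduces both Rotation Sums of Squared Loadings at once:

```python
import numpy as np

# Structure Matrix loadings from the table above
structure = np.array([
    [0.653,  0.333],
    [-0.222, -0.181],
    [-0.559, -0.420],
    [0.678,  0.449],
    [0.587,  0.380],
    [0.398,  0.553],
    [0.577,  0.923],
    [0.485,  0.330],
])

# Square each loading and sum down the items, per factor
rotation_ss = (structure ** 2).sum(axis=0)
print(rotation_ss.round(3))
# approximately [2.319, 1.933]; SPSS reports 2.318 and 1.931
# because it works from unrounded loadings

# Because the factors are correlated, this total double-counts shared
# variance, so it exceeds the total common variance of 3.01
print(rotation_ss.sum())
```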
Finally, let’s conclude by interpreting the factor loadings more carefully. Let’s compare the Pattern Matrix and Structure Matrix tables side-by-side, highlighting absolute loadings higher than 0.4 in blue for Factor 1 and in red for Factor 2. The absolute loadings in the Pattern Matrix are in general higher for Factor 1 and lower for Factor 2 compared to the Structure Matrix, which makes sense because the Pattern Matrix partials out the effect of the other factor. Looking at the Pattern Matrix, Items 1, 3, 4, 5, and 8 load highly on Factor 1, and Items 6 and 7 load highly on Factor 2. Looking at the Structure Matrix, Items 1, 3, 4, 5, 7, and 8 load highly onto Factor 1 and Items 3, 4, 6, and 7 load highly onto Factor 2. Item 2 doesn’t seem to load on either factor. The results of the two matrices are somewhat inconsistent, which can be explained by the fact that Items 3, 4, and 7 load onto both factors fairly evenly in the Structure Matrix but not in the Pattern Matrix. For this particular analysis it seems to make more sense to interpret the Pattern Matrix, because it’s clear that Factor 1 contributes uniquely to most items in the SAQ-8 while Factor 2 contributes common variance to only two items (Items 6 and 7). There is an argument here that Item 2 can be eliminated from our survey and that the factors can be consolidated into a single SPSS Anxiety factor. We talk to the Principal Investigator and agree that it’s feasible to accept SPSS Anxiety as the single factor explaining the common variance in all the items, but we choose to remove Item 2, so that the SAQ-8 becomes the SAQ-7.
Pattern Matrix and Structure Matrix
| Item | Pattern: Factor 1 | Pattern: Factor 2 | Structure: Factor 1 | Structure: Factor 2 |
| 1 | 0.740 | -0.137 | 0.653 | 0.333 |
| 2 | -0.180 | -0.067 | -0.222 | -0.181 |
| 3 | -0.490 | -0.108 | -0.559 | -0.420 |
| 4 | 0.660 | 0.029 | 0.678 | 0.449 |
| 5 | 0.580 | 0.011 | 0.587 | 0.380 |
| 6 | 0.077 | 0.504 | 0.398 | 0.553 |
| 7 | -0.017 | 0.933 | 0.577 | 0.923 |
| 8 | 0.462 | 0.036 | 0.485 | 0.330 |
Answers: 1. T, 2. F, they represent the non-unique contribution (which means the total sum of squares can be greater than the total communality), 3. F, the Structure Matrix is obtained by multiplying the Pattern Matrix by the Factor Correlation Matrix, 4. T, it’s like multiplying a number by 1, you get the same number back, 5. F, this is true only for orthogonal rotations; the SPSS Communalities table in rotated factor solutions is based on the unrotated solution, not the rotated solution.
As a special note, did we really achieve simple structure? Rotation helps us move toward simple structure, but if the interrelationships among the items themselves do not conform to simple structure, all we can do is modify our model. In this case we chose to remove Item 2 from the model.
Promax rotation begins with a Varimax (orthogonal) rotation and then raises the loadings to the power kappa, which strongly shrinks the small loadings. Promax also tends to converge quickly: in our example Promax took 3 iterations while Direct Quartimin (Direct Oblimin with delta = 0) took 5 iterations.
Answers: 1. T.
Suppose the Principal Investigator is happy with the final factor analysis, the two-factor Direct Quartimin solution. She hypothesizes that SPSS Anxiety and Attribution Bias predict student scores in an introductory statistics course, and would like to use the factor scores as predictors in this new regression analysis. Since a factor is by nature unobserved, we first need to predict or generate plausible factor scores. SPSS offers three methods of factor score generation: Regression, Bartlett, and Anderson-Rubin.
In order to generate factor scores, run the same factor analysis model but click on Factor Scores (Analyze – Dimension Reduction – Factor – Factor Scores). Then check Save as variables, pick the Method and optionally check Display factor score coefficient matrix.
The code pasted in the SPSS Syntax Editor looks like this:
Here we picked the Regression approach after fitting our two-factor Direct Quartimin solution. After generating the factor scores, SPSS will add two extra variables to the end of your variable list, which you can view via Data View. The figure below shows what this looks like for the first 5 participants; SPSS calls the new variables FAC1_1 and FAC2_1 for the first and second factors. These are now ready to be entered into another analysis as predictors.
For those who want to understand how the scores are generated, we can refer to the Factor Score Coefficient Matrix. These are essentially the regression weights that SPSS uses to generate the scores. We know that the ordered pair of scores for the first participant is \((-0.880, -0.113)\), and that the 8 raw item scores for the first participant are \(2, 1, 4, 2, 2, 2, 3, 1\). However, what SPSS actually uses is the standardized scores, which can easily be obtained via Analyze – Descriptive Statistics – Descriptives – Save standardized values as variables. The standardized scores are \(-0.452, -0.733, 1.32, -0.829, -0.749, -0.2025, 0.069, -1.42\). Using the Factor Score Coefficient Matrix, we multiply the participant’s standardized scores by each column of the coefficient matrix. For the first factor:
$$ \begin{eqnarray} &(0.284)(-0.452) + (-0.048)(-0.733) + (-0.171)(1.32) + (0.274)(-0.829) \\ &+ (0.197)(-0.749) + (0.048)(-0.2025) + (0.174)(0.069) + (0.133)(-1.42) \\ &= -0.880, \end{eqnarray} $$
which matches FAC1_1 for the first participant. You can continue this same procedure for the second factor to obtain FAC2_1.
Factor Score Coefficient Matrix
| Item | Factor 1 | Factor 2 |
| 1 | 0.284 | 0.005 |
| 2 | -0.048 | -0.019 |
| 3 | -0.171 | -0.045 |
| 4 | 0.274 | 0.045 |
| 5 | 0.197 | 0.036 |
| 6 | 0.048 | 0.095 |
| 7 | 0.174 | 0.814 |
| 8 | 0.133 | 0.028 |
Extraction Method: Principal Axis Factoring. Rotation Method: Oblimin with Kaiser Normalization. Factor Scores Method: Regression.
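The hand computation above can be sketched compactly outside of SPSS (Python with NumPy; the standardized scores and coefficients are copied from above):

```python
import numpy as np

# Standardized item scores for the first participant
z = np.array([-0.452, -0.733, 1.32, -0.829, -0.749, -0.2025, 0.069, -1.42])

# Factor Score Coefficient Matrix (Regression method), from the table above
coef = np.array([
    [0.284,  0.005],
    [-0.048, -0.019],
    [-0.171, -0.045],
    [0.274,  0.045],
    [0.197,  0.036],
    [0.048,  0.095],
    [0.174,  0.814],
    [0.133,  0.028],
])

# Each factor score is a weighted sum of the standardized items
scores = z @ coef
print(scores.round(3))  # first element is approximately -0.880 (FAC1_1)
```

The second element reproduces FAC2_1 to within rounding of the published coefficients.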
The second table is the Factor Score Covariance Matrix,
Factor Score Covariance Matrix
| Factor | 1 | 2 |
| 1 | 1.897 | 1.895 |
| 2 | 1.895 | 1.990 |
Extraction Method: Principal Axis Factoring. Rotation Method: Oblimin with Kaiser Normalization. Factor Scores Method: Regression.
This table can be interpreted as the covariance matrix of the factor scores; however, it equals the observed covariance of the saved scores only when the factors are orthogonal. For example, if we obtain the raw covariance matrix of the saved factor scores, we get
Correlations
| | FAC1_1 | FAC2_1 |
| FAC1_1 Covariance | 0.777 | 0.604 |
| FAC2_1 Covariance | 0.604 | 0.870 |
You will notice that these values are much lower. Let’s compare the same two tables but for Varimax rotation:
Factor Score Covariance Matrix
| Factor | 1 | 2 |
| 1 | 0.670 | 0.131 |
| 2 | 0.131 | 0.805 |
Extraction Method: Principal Axis Factoring. Rotation Method: Varimax with Kaiser Normalization. Factor Scores Method: Regression.
If you compare these elements to the Covariance table below, you will notice they are the same.
Correlations
| | FAC1_1 | FAC2_1 |
| FAC1_1 Covariance | 0.670 | 0.131 |
| FAC2_1 Covariance | 0.131 | 0.805 |
Note that with the Bartlett and Anderson-Rubin methods you will not obtain the Factor Score Covariance Matrix.
Each of the three methods has its pluses and minuses. The Regression method maximizes the correlation (and hence validity) between the factor scores and the underlying factor, but the scores can be somewhat biased; even with an orthogonal solution you can still obtain correlated factor scores. With Bartlett’s method, the factor scores correlate highly with their own factor and not with others, and they are unbiased estimates of the true factor scores (unbiased means that, with repeated sampling, the average of the estimated scores equals the average of the true factor scores). The Anderson-Rubin method scales the factor scores so that they are uncorrelated with other factors and with the other factor scores. Since Anderson-Rubin imposes a correlation of zero between factor scores, it is not the best option for oblique rotations; additionally, Anderson-Rubin scores are biased.
In summary, if you do an orthogonal rotation you can pick any of the three methods: use Bartlett if you want unbiased scores, use the Regression method if you want to maximize validity, and use Anderson-Rubin if you want the factor scores themselves to be uncorrelated with other factor scores. If you do an oblique rotation, it’s preferable to stick with the Regression method; do not use Anderson-Rubin for oblique rotations.
Answers: 1. T, 2. T, 3. T
Factor Analysis – Steps, Methods and Examples
Definition:
Factor analysis is a statistical technique that is used to identify the underlying structure of a relatively large set of variables and to explain these variables in terms of a smaller number of common underlying factors. It helps to investigate the latent relationships between observed variables.
Here are the general steps involved in conducting a factor analysis:
1. Define the Research Objective:
Clearly specify the purpose of the factor analysis. Determine what you aim to achieve or understand through the analysis.
2. Data Collection:
Gather the data on the variables of interest. These variables should be measurable and related to the research objective. Ensure that you have a sufficient sample size for reliable results.
3. Assess Data Suitability:
Examine the suitability of the data for factor analysis. Check for the following aspects:
4. Determine the Factor Analysis Technique:
There are different types of factor analysis techniques available, such as exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). Choose the appropriate technique based on your research objective and the nature of the data.
5. Perform Factor Analysis:
a. Exploratory Factor Analysis (EFA):
b. Confirmatory Factor Analysis (CFA):
6. Interpret and Validate the Factors:
Once you have identified the factors, interpret them based on the factor loadings, theoretical understanding, and research objectives. Validate the factors by examining their relationships with external criteria or by conducting further analyses if necessary.
Types of Factor Analysis are as follows:
EFA is used to explore the underlying structure of a set of observed variables without any preconceived assumptions about the number or nature of the factors. It aims to discover the number of factors and how the observed variables are related to those factors. EFA does not impose any restrictions on the factor structure and allows for cross-loadings of variables on multiple factors.
CFA is used to test a pre-specified factor structure based on theoretical or conceptual assumptions. It aims to confirm whether the observed variables measure the latent factors as intended. CFA tests the fit of a hypothesized model and assesses how well the observed variables are associated with the expected factors. It is often used for validating measurement instruments or evaluating theoretical models.
PCA is a dimensionality reduction technique that can be considered a form of factor analysis, although it has some differences. PCA aims to explain the maximum amount of variance in the observed variables using a smaller number of uncorrelated components. Unlike traditional factor analysis, PCA does not assume that the observed variables are caused by underlying factors but focuses solely on accounting for variance.
Common factor analysis assumes that the observed variables are influenced by common factors and by unique factors (specific to each variable). It attempts to estimate the common factor structure by extracting the shared variance among the variables while also accounting for the unique variance of each variable.
Hierarchical factor analysis involves multiple levels of factors. It explores both higher-order and lower-order factors, aiming to capture the complex relationships among variables. Higher-order factors are based on the relationships among lower-order factors, which are in turn based on the relationships among observed variables.
Factor Analysis is a statistical method used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors.
Here are some of the essential formulas and calculations used in factor analysis:
Correlation Matrix:
The first step in factor analysis is to create a correlation matrix, which calculates the correlation coefficients between pairs of variables.
Correlation coefficient (Pearson’s r) between variables X and Y is calculated as:
r(X,Y) = Σ[(xi – x̄)(yi – ȳ)] / [(n – 1) σx σy]
where: xi, yi are the data points, x̄, ȳ are the means of X and Y respectively, σx, σy are the standard deviations of X and Y respectively, n is the number of data points.
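This formula can be sketched directly in plain Python (the data points below are made-up illustrations, not from any dataset in this page):

```python
import math

def pearson_r(x, y):
    """Pearson correlation: sum of cross-products of deviations,
    divided by (n - 1) times the two sample standard deviations."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cross = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = math.sqrt(sum((xi - mx) ** 2 for xi in x) / (n - 1))
    sy = math.sqrt(sum((yi - my) ** 2 for yi in y) / (n - 1))
    return cross / ((n - 1) * sx * sy)

# Perfectly linear data gives a correlation of approximately 1.0
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```

Computing this coefficient for every pair of variables fills in the correlation matrix used as the starting point of the analysis.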
Extraction of Factors:
The extraction of factors from the correlation matrix is typically done by methods such as Principal Component Analysis (PCA) or other similar methods.
The formula used in PCA to calculate the principal components (factors) involves finding the eigenvalues and eigenvectors of the correlation matrix.
Let’s denote the correlation matrix as R. If λ is an eigenvalue of R, and v is the corresponding eigenvector, they satisfy the equation: Rv = λv
Factor Loadings:
Factor loadings are the correlations between the original variables and the factors. They can be calculated as the eigenvectors normalized by the square roots of their corresponding eigenvalues.
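A small sketch of these two steps (Python with NumPy; the 2×2 correlation matrix is a made-up toy example, not from any dataset in this page):

```python
import numpy as np

# A toy correlation matrix R (hypothetical values)
R = np.array([[1.0, 0.6],
              [0.6, 1.0]])

# Eigendecomposition: R v = lambda v
eigvals, eigvecs = np.linalg.eigh(R)

# eigh returns ascending eigenvalues; sort from largest to smallest
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
print(eigvals.round(3))  # [1.6 0.4]

# Loadings: eigenvectors scaled by the square roots of their eigenvalues
loadings = eigvecs * np.sqrt(eigvals)

# Communality of each variable: sum of squared loadings across factors.
# With all components retained it equals 1 (the variable's total variance),
# and the specific variance, 1 - communality, is 0.
communality = (loadings ** 2).sum(axis=1)
print(communality.round(6))  # [1. 1.]
```

Retaining only the components with large eigenvalues gives communalities below 1, and the shortfall is the specific variance described below.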
Communality and Specific Variance:
Communality of a variable is the proportion of variance in that variable explained by the factors. It can be calculated as the sum of squared factor loadings for that variable across all factors.
The specific variance of a variable is the proportion of variance in that variable not explained by the factors, and it’s calculated as 1 – Communality.
Factor Rotation: Factor rotation, such as Varimax or Promax, is used to make the output more interpretable. It doesn’t change the underlying relationships but affects the loadings of the variables on the factors.
For example, in Varimax rotation the objective is to maximize the variance of the squared loadings of each factor (column) across the variables (rows) of the factor matrix, which drives loadings toward high and low extremes and makes each factor easier to interpret.
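To make the criterion concrete, here is a sketch of the raw varimax criterion (Python with NumPy; both loading matrices are made-up illustrations). A matrix already in simple structure scores higher than the same matrix rotated into a diffuse form:

```python
import numpy as np

def varimax_criterion(loadings):
    """Raw varimax criterion: sum over factors (columns) of the
    variance of the squared loadings. Varimax rotation searches for
    the orthogonal rotation that maximizes this quantity."""
    sq = loadings ** 2
    return sq.var(axis=0).sum()

# Hypothetical simple structure: each item loads on one factor only
simple = np.array([[0.8, 0.0],
                   [0.7, 0.0],
                   [0.0, 0.8],
                   [0.0, 0.7]])

# The same loadings rotated 45 degrees: identical communalities,
# but the variance is smeared across both factors
angle = np.pi / 4
rot = np.array([[np.cos(angle), np.sin(angle)],
                [-np.sin(angle), np.cos(angle)]])
diffuse = simple @ rot

print(varimax_criterion(simple) > varimax_criterion(diffuse))  # True
```

A varimax algorithm would start from the diffuse matrix and rotate until the criterion stops increasing, recovering something close to the simple-structure form.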
Here are some real-time examples of factor analysis:
Here’s an example of how factor analysis might be used in research:
Let’s say a psychologist is interested in the factors that contribute to overall wellbeing. They conduct a survey with 1000 participants, asking them to respond to 50 different questions relating to various aspects of their lives, including social relationships, physical health, mental health, job satisfaction, financial security, personal growth, and leisure activities.
Given the broad scope of these questions, the psychologist decides to use factor analysis to identify underlying factors that could explain the correlations among responses.
After conducting the factor analysis, the psychologist finds that the responses can be grouped into five factors:
By reducing the 50 individual questions to five underlying factors, the psychologist can more effectively analyze the data and draw conclusions about the major aspects of life that contribute to overall wellbeing.
In this way, factor analysis helps researchers understand complex relationships among many variables by grouping them into a smaller number of factors, simplifying the data analysis process, and facilitating the identification of patterns or structures within the data.
Here are some circumstances in which you might want to use factor analysis:
Factor Analysis has a wide range of applications across various fields. Here are some of them:
Advantages of Factor Analysis are as follows:
Disadvantages of Factor Analysis are as follows:
Researcher, Academic Writer, Web developer
Pm source apportionment using factor analysis-multiple regression (fa-mr) model: ... antimony. manganese. chromium, cadmium. source: as cited in sharma (1994) ... – powerpoint ppt presentation.
PowerShow.com is a leading presentation sharing website. It has millions of presentations already uploaded and available with 1,000s more being uploaded by its users every day. Whatever your area of interest, here you’ll be able to find and view presentations you’ll love and possibly download. And, best of all, it is completely free and easy to use.
You might even have a presentation you’d like to share with others. If so, just upload it to PowerShow.com. We’ll convert it to an HTML5 slideshow that includes all the media types you’ve already added: audio, video, music, pictures, animations and transition effects. Then you can share it with your target audience as well as PowerShow.com’s millions of monthly visitors. And, again, it’s all free.
The construction of ore pass systems in underground mines is a high-risk activity, especially in incompetent rock mass. This study aims to investigate the optimal method for ore pass construction in incompetent rock masses. We evaluated the conventional and raise boring (RB) methods based on safety, efficiency, excavation control, and ground support for ore pass construction. We also performed a stability analysis using the analytical Q-raise (Q_R) method and kinematic analysis for ore pass construction with a raise borer before and after grout injection of the rock mass. As a case study, an ore pass (diameter, 3 m; depth, 100 m) within an incompetent rock mass was considered to gain further insight. The rock mass was characterized according to the Q (Barton), rock quality designation (RQD), rock mass rating (RMR), and geological strength index (GSI) classification methods. The grout intensity number (GIN) method of grout injection was used. The safety factor (<1.075) obtained before injection was lower than the acceptance criterion in all sections of the rock mass. However, grout injection before raise borer excavation resulted in a rock mass safety factor greater than 1.5. Using RB without pre-grouting in the case study indicated that the maximum unsupported diameter (MUSD) of the ore pass was less than the required 3 m. In contrast, the MUSD of the rock mass post-grouting was equal to or larger than 3 m at all depths.
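The Barton Q classification mentioned above combines six joint and stress parameters into a single rock mass quality rating, Q = (RQD/Jn)·(Jr/Ja)·(Jw/SRF). The sketch below uses invented parameter values, not the case-study data; it simply illustrates how improving the joint alteration, water, and stress terms (as grouting is intended to do) raises Q.

```python
# Barton's Q-system rating: Q = (RQD / Jn) * (Jr / Ja) * (Jw / SRF).
# All parameter values below are illustrative placeholders, not measurements
# from the case study.

def barton_q(rqd, jn, jr, ja, jw, srf):
    """Rock mass quality Q from the six Barton (1974) parameters."""
    return (rqd / jn) * (jr / ja) * (jw / srf)

# Hypothetical poor-quality rock mass before grouting vs. after grouting,
# where injection mainly improves joint alteration, water, and stress terms.
q_before = barton_q(rqd=40, jn=12, jr=1.5, ja=4, jw=0.66, srf=5)
q_after  = barton_q(rqd=40, jn=12, jr=1.5, ja=2, jw=1.0, srf=2.5)
print(q_before, q_after)
```

With these illustrative inputs Q rises from well below 1 ("very poor") to about 1 ("poor"), the kind of shift that changes the supportable unsupported span in empirical raise-bore stability charts.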
Data availability
The datasets analyzed during the current study are available from the corresponding author upon reasonable request.
We would like to thank Skava Consulting SA for providing the data and resources needed to carry out this research work.
Authors and affiliations
Department of Mining Engineering, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna # 4686, Santiago, Chile
Cluber Rojas & Eduardo Cordova
Department of Mining and Geological Engineering, The University of Arizona, Tucson, AZ, 85721, USA
Angelina Anani, Edward Wellman & Sefiu O. Adewuyi
Department of Orthopaedics, University of Illinois at Chicago, 835 S Wolcott Ave, Chicago, IL, 60612, USA
Wedam Nyaaba
Correspondence to Angelina Anani.
Competing interests
The authors declare no competing interests.
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Rojas, C., Anani, A., Cordova, E. et al. Analysis of Raise Boring with Grouting as an Optimal Method for Ore Pass Construction in Incompetent Rock Mass—A Case Study. Mining, Metallurgy & Exploration (2024). https://doi.org/10.1007/s42461-024-01023-0
Received: 01 August 2023
Accepted: 03 June 2024
Published: 21 June 2024
DOI: https://doi.org/10.1007/s42461-024-01023-0
BMC Urology, volume 24, Article number: 131 (2024)
The incidence of inguinal hernia after radical resection of prostate cancer is high, so this article examines the incidence and risk factors of inguinal hernia after radical resection of prostate cancer.
This case-control study was conducted at The First People's Hospital of Huzhou. Clinical data of 251 patients who underwent radical resection of prostate cancer at this hospital from March 2019 to May 2021 were retrospectively analyzed. According to whether an inguinal hernia occurred, the subjects were divided into a study group and a control group, and the clinical data of each group were statistically analyzed. Multivariate logistic regression analysis was performed to identify independent factors predicting the occurrence of inguinal hernia. Kaplan-Meier survival curves were drawn according to the occurrence and timing of inguinal hernia.
The overall incidence of inguinal hernia after prostate cancer surgery was 14.7% (37/251), and the mean time to occurrence was 8.58 ± 4.12 months. The average time to inguinal hernia was 7.61 ± 4.05 months in patients who received lymph node dissection and 9.16 ± 4.15 months in patients who did not, with no significant difference between them (P > 0.05). The incidence of inguinal hernia did not differ significantly by age, BMI, hypertension, diabetes, PSA, previous abdominal operations, or operative approach (P > 0.05), but did differ significantly by surgical method and pelvic lymph node dissection (P < 0.05). The incidence of inguinal hernia in the pelvic lymph node dissection group, 24.3% (14/57), was significantly higher than that in the non-dissection group, 11.8% (23/194). Logistic regression analysis showed that pelvic lymph node dissection was a risk factor for inguinal hernia after prostate cancer surgery (OR = 0.413, 95% CI: 0.196-0.869, P = 0.02). The Kaplan-Meier survival curve showed that the rate of inguinal hernia in the group receiving pelvic lymph node dissection was significantly higher than that in the control group (P < 0.05).
Pelvic lymph node dissection is a risk factor for inguinal hernia after radical resection of prostate cancer.
Prostate cancer is a common malignant tumor in urology that arises in the prostate epithelial tissue. There are on average 190,000 new cases of prostate cancer and about 80,000 deaths worldwide each year [1, 2]. In recent years, the incidence of prostate cancer has increased year by year, seriously affecting the health and quality of life of patients [3]. Worldwide, the incidence of prostate cancer is second only to that of lung cancer, and its death rate ranks 7th among male cancer causes [4]. Radical resection of prostate cancer (RP) is the main treatment for prostate cancer. The surgical methods are generally divided into open radical resection of prostate cancer (RRP) and minimally invasive radical resection, the latter including laparoscopic radical resection of prostate cancer (LRP) and robot-assisted laparoscopic radical resection of prostate cancer (RALP) [5, 6, 7].
Inguinal hernia (IH) is a relatively common clinical condition caused by increased abdominal pressure and thinning of the abdominal wall, with bulging of abdominal organs through the defect. Inguinal hernias include direct hernias, indirect (oblique) hernias, and femoral hernias [8]. At onset, a lump protruding outward from the inguinal region can be seen. If the intestines cannot return to the abdominal cavity in time, intestinal necrosis, intestinal obstruction, intestinal perforation, and other complications may follow, which in severe cases may endanger the patient's life [9, 10].
With the extensive adoption of radical resection of prostate cancer across hospitals, the problem of postoperative inguinal hernia has gradually attracted the attention of urologists. The previously reported incidence of IH after radical prostate cancer surgery was approximately 13.7% [11]. A study by Nagatani S et al. showed that the incidence of inguinal hernia after radical prostate cancer surgery was 7-21%, with most cases occurring within 2 years after surgery [12]. A study by Stranne J et al. showed that the cumulative risk of IH within 48 months was 12.2% in the open radical resection group and 5.8% in the non-surgical group [13]. Most cases of IH require surgery due to pain, discomfort, or incarceration, and IH is considered a late complication of radical resection of prostate cancer. Adhesions after radical resection of prostate cancer also increase the difficulty of hernia repair. Therefore, urologists need to be concerned not only about the risk of urinary incontinence and erectile dysfunction after radical resection of prostate cancer, but also about the occurrence of IH.
Over the last 10 years, many scholars around the world have studied the risk factors of inguinal hernia after radical prostate cancer surgery. Most studies consider anastomotic stenosis, a previous history of inguinal hernia, and a patent processus vaginalis to be risk factors, but there is no consensus on the risk associated with lymph node dissection. For example, Niitsu H et al. argued that pelvic lymph node dissection during radical prostate cancer surgery might damage the myopectineal orifice, thereby increasing the risk of inguinal hernia [14]. In contrast, Johan Stranne's study suggested that a previous inguinal hernia and advanced age increased the risk of inguinal hernia after radical prostate cancer surgery, while pelvic lymph node dissection was not a significant risk factor [15]. There is also no consistent conclusion on the influence of BMI, age, and surgical method.
Therefore, to further investigate the risk factors of inguinal hernia after radical prostate cancer surgery, especially the correlation between pelvic lymph node dissection and inguinal hernia, this study retrospectively analyzed the clinical data of 251 patients who underwent radical resection of prostate cancer in our hospital from March 2019 to May 2021 and examined the risk factors of postoperative inguinal hernia. The findings are reported below.
The objective of this study was to explore the incidence and risk factors of inguinal hernia after radical resection of prostate cancer, providing a reference for further research and guiding clinicians in choosing the appropriate surgical method according to the patient's condition.
At each outpatient PSA review, every 3 months, patients were also examined by B-ultrasound to check for the occurrence of inguinal hernia. The subjects were divided into the inguinal hernia group (study group) and the non-inguinal hernia group (control group). If an inguinal hernia was diagnosed, follow-up was completed and the type and time of the hernia were recorded; otherwise, follow-up lasted 2 years. The relevant clinical parameters of each group (age, BMI, hypertension, diabetes mellitus, PSA value, previous abdominal operations, operation method, operative approach, and pelvic lymph node dissection) were statistically analyzed, the correlation between these parameters and the occurrence of inguinal hernia was assessed, and risk factors for inguinal hernia were identified by logistic regression analysis. According to the occurrence and timing of inguinal hernia, Kaplan-Meier survival curves were drawn to compare the differences between the two groups.
This study was approved by the Ethics Committee of our hospital (approval number 2018137). All patients signed informed consent forms. The protocol was registered with the Chinese Clinical Trial Registry. The study was planned to begin in mid-March 2019 and to end by May 2021.
Patients who received radical surgery for prostate cancer in Huzhou First People’s Hospital from March 2019 to May 2021; PSA was reviewed every 3 months after surgery, and check the inguinal area for protruding masses. Complete the 2-year follow-up plan.
Exclusion criteria: patients with an inguinal hernia before the operation; patients with prior inguinal hernia surgery.
Statistical analysis was performed with SPSS 21.0. The data followed a normal distribution, and measurement data are presented as mean ± standard deviation (x̄ ± s). P < 0.05 was considered statistically significant.
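The univariate group comparisons summarized in Table 1 can be illustrated with a hand-rolled Pearson chi-square test for a 2×2 contingency table. This is a minimal sketch, not the study's SPSS output; the cell counts below are reconstructed from the fractions reported in the Results (14/57 and 23/194) and should be treated as illustrative.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]].

    Uses the closed-form n*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)).
    """
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Illustrative counts reconstructed from the reported fractions:
# rows = hernia yes / hernia no, columns = PLND yes / PLND no
chi2 = chi_square_2x2(14, 43, 23, 171)

# With df = 1, the 0.05 critical value is 3.841; chi2 above it
# corresponds to P < 0.05, matching the reported significance.
significant = chi2 > 3.841
```

The closed-form expression avoids building an expected-counts matrix and is equivalent to the usual Pearson statistic for a 2×2 table without continuity correction.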
From March 2019 to May 2021, 318 radical prostatectomies were performed at our hospital. During the follow-up period, 28 patients died of other diseases and 39 were lost to follow-up or had incomplete clinical data, leaving 251 patients in the final analysis. There were no significant differences between the two groups in age, BMI, hypertension, diabetes, PSA, previous abdominal operations, or operative approach (P > 0.05), whereas the groups differed significantly in operation method and pelvic lymph node dissection (P < 0.05). The rate of pelvic lymph node dissection in the inguinal hernia group, 24.3% (14/57), was significantly higher than that in the control group, 11.8% (23/194). See Table 1 for details.
Multivariate logistic regression analysis showed that pelvic lymph node dissection was a risk factor for inguinal hernia after prostate cancer surgery (OR = 0.413, 95% CI: 0.196–0.869, P = 0.02). Age, BMI, hypertension, diabetes, PSA value, previous abdominal operations, operation method, and operative approach were not significant risk factors for inguinal hernia (P > 0.05). See Table 2 for details.
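The reported odds ratio and its confidence interval can be back-calculated from the underlying logistic regression coefficient. A minimal sketch follows; note that the standard error used here is inferred from the reported CI (it is not stated in the paper), so it is an assumption for illustration only.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and Wald 95% CI from a logistic regression coefficient.

    OR = exp(beta); CI bounds = exp(beta -/+ z * se).
    """
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Coefficient back-calculated from the reported OR = 0.413
beta = math.log(0.413)   # approx. -0.884
se = 0.380               # assumed standard error, inferred from the CI width

or_, lo, hi = odds_ratio_ci(beta, se)
```

Running this reproduces the paper's interval to within rounding (roughly 0.196–0.870), which is a quick internal-consistency check one can apply to any reported OR and CI.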
Cases were grouped according to whether pelvic lymph node dissection had been performed; the incidence and timing of inguinal hernia in the two groups were recorded, and Kaplan-Meier curves were drawn. The overall incidence of inguinal hernia after radical prostatectomy was 14.7% (37/251): 26 cases of indirect hernia (70.2%, 26/37), 8 of direct hernia (21.6%, 8/37), and 3 of combined indirect and direct hernia (8.2%, 3/37). The mean time to occurrence was 8.58 ± 4.12 months: 7.61 ± 4.05 months in patients who underwent lymph node dissection and 9.16 ± 4.15 months in those who did not, with no significant difference between them (P > 0.05). The incidence of inguinal hernia in the pelvic lymph node dissection group was significantly higher than in the control group (P < 0.05). See Fig. 1 for details.
Fig. 1 Kaplan-Meier curves of inguinal hernia occurrence by pelvic lymph node dissection status (months)
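The Kaplan-Meier curves above can be sketched from first principles: at each event time, the survival probability is multiplied by the fraction of at-risk patients who remain hernia-free. The follow-up data below are hypothetical placeholders, not the study's records.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates.

    times:  time to hernia or censoring, in months
    events: 1 = hernia observed, 0 = censored (e.g. 24-month follow-up ended)
    Returns a list of (time, survival probability) at each event time.
    Censoring that coincides with an event time is counted after the event,
    per the usual convention.
    """
    data = sorted(zip(times, events))
    n_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e == 1)  # events at t
        c = sum(1 for tt, e in data if tt == t and e == 0)  # censored at t
        if d:
            surv *= (n_risk - d) / n_risk
            curve.append((t, surv))
        n_risk -= d + c
        while i < len(data) and data[i][0] == t:
            i += 1
    return curve

# Hypothetical follow-up data (months): hernias at 2, 3, 5; censoring at 3, 24, 24
curve = kaplan_meier([2, 3, 3, 5, 24, 24], [1, 1, 0, 1, 0, 0])
```

For a real analysis one would typically use a survival library and add a log-rank test to compare the dissection and control groups, but the estimator itself is just this product over event times.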
In recent years, the incidence of prostate cancer has increased year by year, seriously affecting patients' health and quality of life. The main complications after radical prostatectomy are urinary incontinence and sexual dysfunction, but inguinal hernia is also common [ 16 ]. Liu L et al. found that the open surgical technique and advanced patient age, especially over 80 years, are associated with a higher incidence of IH, and that appropriate intraoperative prophylaxis should be considered in high-risk patients [ 17 ]. In some regional studies, low BMI (approximately < 25 kg/m²) has been identified as a risk factor for IH, although the risk threshold has not been established [ 18 ]. However, a number of studies have found that low BMI does not increase the risk of postoperative IH [ 19 , 20 ]. At present, there is no consensus on the relative risk of IH after open versus laparoscopic radical prostatectomy. Alder R et al. reported that the incidence of IH after laparoscopic radical prostatectomy was relatively low [ 21 ], whereas Otaki T et al. found incidences of 7.3% after laparoscopic and 8.4% after open radical prostatectomy, with no statistical difference between them [ 20 ]. There is likewise no consensus on whether pelvic lymph node dissection is a risk factor for inguinal hernia [ 14 , 15 ]. In short, the specific mechanism of inguinal hernia after radical prostatectomy remains unclear.
This study retrospectively analyzed the clinical data of 251 patients treated at our hospital and found an overall incidence of inguinal hernia of 14.7% (37/251), consistent with most current reports. We also found that the mean time to occurrence of inguinal hernia after surgery was 8.58 ± 4.12 months, which provides some guidance for scheduling postoperative follow-up.
In this study, multivariate logistic analysis identified pelvic lymph node dissection as a risk factor for inguinal hernia after prostate cancer surgery (OR = 0.413, 95% CI: 0.196–0.869, P = 0.02). Age, BMI, hypertension, diabetes, PSA value, previous abdominal operations, operation method, and operative approach were not significantly associated with postoperative inguinal hernia (P > 0.05), although in univariate comparisons operation method and pelvic lymph node dissection did differ significantly between groups (P < 0.05). Therefore, the advantages and disadvantages of pelvic lymph node dissection should be weighed carefully in patients with low- to intermediate-risk prostate cancer in order to avoid inguinal hernia. The Kaplan-Meier curves showed that the rate of inguinal hernia in the pelvic lymph node dissection group was significantly higher than in the control group. Some studies suggest that pelvic lymph node dissection during radical prostatectomy causes postoperative scar contraction in the inguinal region, directing abdominal pressure outward and downward and thereby increasing the incidence of inguinal hernia. Lodding P et al. compared three groups: radical prostatectomy plus pelvic lymph node dissection, pelvic lymph node dissection alone, and no operation. The incidence of inguinal hernia was 13.6%, 7.6%, and 3.1%, respectively; the difference between the prostatectomy group and the no-operation group was statistically significant, whereas there was no significant difference between the prostatectomy group and the lymph node dissection group. This result implies that pelvic lymph node dissection is an important factor in the development of inguinal hernia [ 22 ].
Another study, by Sun M et al., compared the incidence of inguinal hernia after radical prostatectomy and after pelvic lymph node dissection alone: the risk of inguinal hernia was 6.8% and 7.8% higher at 5 and 10 years, respectively, in the radical prostatectomy group than in the pelvic lymph node dissection group [ 23 ]. Niitsu H et al. proposed that pelvic lymph node dissection during radical prostatectomy may damage the myopectineal orifice, the defect from which inguinal hernias originate [ 14 ].
Shimbo M et al. found, on preoperative and postoperative sagittal MRI, that prostatectomy and vesicourethral anastomosis moved the rectovesical excavation (RE) downward by about 2 to 3 cm [ 24 ]. They speculated that this displacement of the RE pulls on the peritoneum and vas deferens after urethrovesical anastomosis, which in turn pulls the opening of the internal ring and shifts it medially, leading to postoperative IH. Based on this theory, many surgeons have tried to prevent postoperative hernia by reducing the tension on the peritoneum and vas deferens at the internal ring and by ligating and dividing the processus vaginalis. Several other articles have reported that preserving the retropubic (Retzius) space helps prevent IH after radical prostatectomy. Chang KD et al. found that Retzius-sparing robot-assisted laparoscopic radical prostatectomy significantly reduced the incidence of postoperative IH compared with the standard procedure [ 25 ]. In addition, Matsubara et al. showed that, compared with standard open radical prostatectomy, transperineal radical prostatectomy, which preserves anatomical structures such as the Retzius space, had a lower incidence of postoperative IH [ 26 ]. Therefore, urological surgeons can take effective intraoperative measures to prevent postoperative inguinal hernia.
In this study, we identified pelvic lymph node dissection as a risk factor for inguinal hernia after prostate cancer surgery. Other factors, such as age, BMI, hypertension, diabetes mellitus, PSA value, history of abdominal surgery, operation method, and operative approach, were not significant in multivariate analysis, which is inconsistent with the results of Iwamoto H et al. [ 27 ]. They found that dilatation of the right internal inguinal ring and the manner of handling the medial peritoneal incision at the femoral ring were independent risk factors for IH after laparoscopic radical prostatectomy; why postoperative IH occurs more often on the right side remains unknown. Alder R et al. found that the incidence of IH after open radical prostatectomy was significantly higher than after laparoscopic radical prostatectomy [ 21 ], but our study showed no difference between the two approaches, possibly because few open cases were included.
In summary, the incidence of inguinal hernia after radical prostatectomy is relatively high, and its specific cause remains unclear. Our study shows that pelvic lymph node dissection is a risk factor for inguinal hernia.
This was a single-center study with a small sample size, so the generalizability of its conclusions may be limited. The 2-year follow-up may not have been long enough and may have underestimated the true incidence of inguinal hernia. In addition, as a retrospective study, the clinical parameters observed were not fully comprehensive, and the influence of other factors on IH may have been missed. Because our data were derived from clinical records, some variables could not be captured. These issues require further study.
Because of the confidentiality of patient data under the participants' informed consent, we cannot share our datasets in publicly available repositories. Data may be obtained from the corresponding author upon reasonable request.
Sekhoacha M, Riet K, Motloung P et al. Prostate Cancer Review: Genetics, diagnosis, Treatment options, and alternative approaches. Molecules 2022; 27.
Rawla P. Epidemiology of prostate Cancer. World J Oncol. 2019;10(2):63–89.
Vietri MT, D’Elia G, Caliendo G et al. Hereditary prostate Cancer: genes related, Target Therapy and Prevention. Int J Mol Sci 2021; 22.
Williams IS, McVey A, Perera S, et al. Modern paradigms for prostate cancer detection and management. Med J Aust. 2022;217:424–33.
Achard V, Panje CM, Engeler D, et al. Localized and locally advanced prostate cancer: treatment options. Oncology. 2021;99:413–21.
Davis M, Egan J, Marhamati S, et al. Retzius-sparing robot-assisted radical prostatectomy: past, present, and future. Urol Clin North Am. 2021;48:11–23.
Heidenreich A, Pfister D. Radical cytoreductive prostatectomy in men with prostate cancer and oligometastatic disease. Curr Opin Urol. 2020;30:90–7.
Miller HJ. Inguinal hernia: mastering the anatomy. Surg Clin North Am. 2018;98:607–21.
Gamborg S, Marcussen ML, Öberg S, Rosenberg J. Inguinal hernia repair but no Hernia Present: a Nationwide Cohort Study. Surg Technol Int. 2022;40:171–4.
Chien S, Cunningham D, Khan KS. Inguinal hernia repair: a systematic analysis of online patient information using the modified ensuring Quality Information for patients tool. Ann R Coll Surg Engl. 2022;104:242–8.
Perez AJ, Campbell S. Inguinal hernia repair in older persons. J Am Med Dir Assoc. 2022;23(4):563–7.
Nagatani S, Tsumura H, Kanehiro T, et al. Inguinal hernia associated with radical prostatectomy. Surg Today. 2021;51:792–7.
Stranne J, Johansson E, Nilsson A, et al. Inguinal hernia after radical prostatectomy for prostate cancer: results from a randomized setting and a nonrandomized setting. Eur Urol. 2010;58:719–26.
Niitsu H, Taomoto J, Mita K, et al. Inguinal hernia repair with the mesh plug method is safe after radical retropubic prostatectomy. Surg Today. 2014;44:897–901.
Stranne J, Hugosson J, Lodding P. Post-radical retropubic prostatectomy inguinal hernia: an analysis of risk factors with special reference to preoperative inguinal hernia morbidity and pelvic lymph node dissection. J Urol. 2006;176:2072–6.
Tolle J, Knipper S, Pose R, et al. Evaluation of risk factors for adverse functional outcomes after Radical Prostatectomy in patients with previous transurethral surgery of the prostate. Urol Int. 2021;105:408–13.
Liu L, Xu H, Qi F, et al. Incidence and risk factors of inguinal hernia occurred after radical prostatectomy-comparisons of different approaches. BMC Surg. 2020;20(1):218.
Nilsson H, Stranne J, Hugosson J, et al. Risk of hernia formation after radical prostatectomy: a comparison between open and robot-assisted laparoscopic radical prostatectomy within the prospectively controlled LAPPRO trial. Hernia. 2022;26:157–64.
Sim KC, Sung DJ, Han NY, et al. Preoperative CT findings of subclinical hernia can predict for postoperative inguinal hernia following robot-assisted laparoscopic radical prostatectomy. Abdom Radiol (NY). 2018;43:1231–6.
Otaki T, Hasegawa M, Yuzuriha S, et al. Clinical impact of psoas muscle volume on the development of inguinal hernia after robot-assisted radical prostatectomy. Surg Endosc. 2021;35:3320–8.
Alder R, Zetner D, Rosenberg J. Incidence of inguinal hernia after radical prostatectomy: a systematic review and meta-analysis. J Urol. 2020;203(2):265–74.
Lodding P, Bergdahl C, Nyberg M, et al. Inguinal hernia after radical retropubic prostatectomy for prostate cancer: a study of incidence and risk factors in comparison to no operation and lymphadenectomy. J Urol. 2001;166:964–7.
Sun M, Lughezzani G, Alasker A, et al. Comparative study of inguinal hernia repair after radical prostatectomy, prostate biopsy, transurethral resection of the prostate or pelvic lymph node dissection. J Urol. 2010;183:970–5.
Shimbo M, Endo F, Matsushita K, et al. Incidence, risk factors and a novel prevention technique for inguinal hernia after robot-assisted radical prostatectomy. Urol Int. 2017;98:54–60.
Chang KD, Abdel Raheem A, Santok GDR, et al. Anatomical Retzius-space preservation is associated with lower incidence of postoperative inguinal hernia development after robot-assisted radical prostatectomy. Hernia. 2017;21:555–61.
Matsubara A, Yoneda T, Nakamoto T, et al. Inguinal hernia after radical perineal prostatectomy: comparison with the retropubic approach. Urology. 2007;70:1152–6.
Iwamoto H, Morizane S, Hikita K, et al. Postoperative inguinal hernia after robotic-assisted radical prostatectomy for prostate cancer: evaluation of risk factors and recommendation of a convenient prophylactic procedure. Cent Eur J Urol. 2019;72(4):418–24.
Not applicable.
This work was supported by the following funding: the grant 2019GY23 from Huzhou Science and Technology Bureau Public welfare application research project of China.
Authors and affiliations.
Department of Urology, The First People’s Hospital of Huzhou, #158, Square Road, Huzhou, 313000, China
An-Ping Xiang, Yue-Fan Shen, Xu-Feng Shen & Si-Hai Shao
Department of Urology, Huzhou Key Laboratory of Precise Diagnosis and Treatment of Urinary Tumors, Huzhou, 313000, China
An-Ping Xiang
An-Ping Xiang designed the study and drafted and revised the manuscript; Yue-Fan Shen recorded the patients' cases; Xu-Feng Shen participated in the follow-up; An-Ping Xiang and Si-Hai Shao analyzed the data and drew the graphs.
Correspondence to Si-Hai Shao .
Ethics approval and consent to participate.
The study protocol was approved by the ethics committee of the First People’s Hospital of Huzhou (approval number, 2018137). We have obtained written informed consent from all study participants. All of the procedures were performed in accordance with the Declaration of Helsinki and relevant policies in China.
Competing interests.
The authors declare no competing interests.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Cite this article.
Xiang, AP., Shen, YF., Shen, XF. et al. Correlation between the incidence of inguinal hernia and risk factors after radical prostatic cancer surgery: a case control study. BMC Urol 24, 131 (2024). https://doi.org/10.1186/s12894-024-01493-w
Received : 24 September 2023
Accepted : 30 April 2024
Published : 22 June 2024
DOI : https://doi.org/10.1186/s12894-024-01493-w
ISSN: 1471-2490