Development, Validity, and Reliability of a Scale for Exam Preparation Strategies Among Students

AUTHORS

Hamid Balochi 1, *, Mehdi Lesani 2, Hossein Motaharinejad 3

1 MSc Student of Educational Research, Department of Educational Sciences, Faculty of Literature and Humanities, Shahid Bahonar University, Kerman, IR Iran

2 PhD in Educational Management, Associate Professor, Department of Educational Sciences, Faculty of Literature and Humanities, Shahid Bahonar University, Kerman, IR Iran

3 PhD in Educational Management, Assistant Professor, Department of Educational Sciences, Faculty of Literature and Humanities, Shahid Bahonar University, Kerman, IR Iran

How to Cite: Balochi H, Lesani M, Motaharinejad H. Development, Validity, and Reliability of a Scale for Exam Preparation Strategies Among Students. Strides Dev Med Educ. 2017;14(1):e59226. doi: 10.5812/sdme.59226.

ARTICLE INFORMATION

Strides in Development of Medical Education: 14 (1); e59226
Published Online: May 31, 2017
Article Type: Research Article
Received: November 8, 2016
Revised: January 17, 2017
Accepted: March 28, 2017
Abstract

Background and Objectives: The aim of the present study was to introduce a valid and reliable scale for the assessment of exam preparation strategies among students at Shahid Bahonar University of Kerman, Iran during the academic year 2015 - 2016.

Methods: In this descriptive exploratory research, a 25-item scale was developed based on a Likert scale in accordance with the literature. Face validity of the scale was confirmed based on the comments of educational sciences experts. Three reliability indices (composite reliability, construct reliability, and internal consistency) were calculated. In addition to confirmatory factor analysis, convergent and discriminant validities were determined.

Results: The results of exploratory factor analysis indicated 2 underlying constructs: 1) deep exam preparation strategies, including 12 items (factor loadings, 0.60 - 0.80; eigenvalue, 12.4); and 2) surface exam preparation strategies, including 13 items (factor loadings, 0.61 - 0.76; eigenvalue, 2.15). Cronbach’s alpha was 0.94 for the first construct and 0.92 for the second. In addition, the convergent validity coefficients ranged from 0.50 to 0.57, confirming the validity of the constructs. Moreover, the average variance extracted (AVE) of each construct was higher than the squared correlation between the constructs; therefore, the discriminant validity of the scale was confirmed.

Conclusions: The present scale for exam preparation strategies consists of 2 constructs (deep and surface approaches) and 25 items (deep approach, 12 items; surface approach, 13 items). According to the analyses, the reliability and validity of the scale were confirmed. Therefore, this scale can be applied by instructors and students to evaluate exam preparation strategies.

Keywords

Development; Validity; Reliability; Assessment Tool; Student; Exam Preparation Strategies

Copyright © 2017, Strides in Development of Medical Education. This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits copying and redistribution of the material for noncommercial purposes only, provided the original work is properly cited.

1. Background

Improving student performance is one of the main goals of educational institutions. The success of any educational program depends on a variety of factors, including exam preparation and study skills (1). Generally, students use different strategies to achieve different goals and to prepare themselves for exams in different situations (2). Reading and studying therefore require the acquisition of knowledge and skills, and deficiencies in these skills can create major problems for students. Overall, students who have thorough knowledge of these skills apply effective strategies in accordance with the study objectives and content (3).

In reading, which is a complex activity, no single method is applicable to all situations, and a combination of different techniques and methods should be applied for exam preparation (4). Accordingly, students should employ different strategies for different types of assignments (5). However, most students lack adequate knowledge of study skills; even talented and competent students may face academic problems due to inadequate learning skills (6).

Researchers have defined study skills as strategies for encoding, storing, recalling, and using information in a rational and effective manner (7, 8). Students who fail to achieve acceptable results in exams normally assume that they can succeed without effective studying (9). Accordingly, despite the great impact of intelligence, motivation, personal characteristics, and educational quality on academic success, learning strategies can also influence students’ exam preparation and learning efficiency (10). On the one hand, in educational settings, students’ inadequacies can produce negative consequences and influence their intellectual capabilities and mental health. On the other hand, by improving students’ learning abilities and skills, many deficiencies can be mitigated, and students’ motivation can be improved (11, 12).

Researchers have identified three general learning approaches: the deep, surface, and achieving approaches (13-15). Biggs and Moore (1993) conceptualized each of these approaches as a combination of motivation and strategy (9). The surface approach to learning focuses on achieving course requirements with the minimum amount of effort. In this approach, students show no interest or engagement in the subject matter; lacking intrinsic motivation, they rely on mere memorization, hence the name surface approach.

In the surface approach, students only memorize the materials, without demonstrating any desire to understand their actual meaning; in fact, only external incentives and success are important for the students using this approach (13, 16). Unlike the surface approach, the deep approach to learning is based on intrinsic motivations and personal interest. In this approach, students seek meaning in the subject matter and try to understand the logical relation between the content and its meaning (13, 16).

Generally, Biggs believes that the surface approach encourages students to learn with minimum engagement; indeed, students aim to meet the course requirements with minimum effort. In addition, surface learning emphasizes the reproduction of content rather than the search for meaning. It lacks analytical thinking, and students do not engage with tasks or assignments; therefore, the quality of their learning declines. By contrast, the deep approach centers on analytical understanding of the content; to succeed, students use active strategies, such as linking previously learned content with new material and engaging actively with the content (17).

In Iran, there are still no reliable standard tools for measuring study strategies among students; only a few studies have designed questionnaires on an ad hoc basis. In this regard, Fathabadi and Seif performed a study to investigate students’ approaches and study skills. They first examined surface and deep learning approaches and then introduced the strategies within these approaches. Finally, a 40-item scale was designed, consisting of 2 constructs (deep and surface approaches), to assess exam preparation strategies (18).

Dehghan and Soltan Gharaei developed a 15-item questionnaire consisting of the following 5 constructs, each containing 3 items: time management, concentration, note-taking, reading ability, and test-taking ability (1). Furthermore, Shakournia et al. designed a 28-item scale to determine students’ exam preparation strategies; 14 items were related to deep strategies and 14 to surface strategies (19).

Moreover, Ghanbari et al. developed a questionnaire on exam preparation strategies, which included four constructs (planning, assignment, repetition/review, and learning style/self-reflection), to identify students’ approaches to exam preparation (4). Furthermore, Yusefi Afrashte et al. designed a two-construct questionnaire on students’ exam preferences; one part of the questionnaire addressed students’ exam preferences, while the other focused on teachers’ attitudes (20).

Given the extensive attention in higher education research to learning components, especially students’ approaches to studying and learning and their selection of strategies for academic success, it is necessary to develop proper, standardized tools for identifying students’ preferences. Evidently, tools appropriate to the academic environment can be particularly useful.

With this background in mind, this research aimed to design and evaluate a suitable tool for the assessment of learning approaches in higher education; the study is therefore of both theoretical and practical significance. In light of the literature and theoretical principles, the following questions were explored:

- What are the underlying constructs of the scale of exam preparation strategies?

- Can the constructs be verified?

- What are the validity indices?

- What are the reliability indices?

2. Methods

This was a research and development study, aimed at designing and evaluating an educational product (21). The scale of exam preparation strategies was developed in the following stages:

1. By reviewing and analyzing the literature and viewpoints of experts in educational sciences, the primary scale was constructed with 28 items on a 5-point Likert scale (excellent, good, relatively good, weak, and very weak). Eight instructors of educational sciences studied the scale in terms of homogeneity and relevance of the items and confirmed its face validity.

To measure the reliability and validity of the scale, participants were randomly selected via cluster sampling from the population of undergraduate students at Shahid Bahonar University of Kerman (approximately 6,000), drawn from 3 faculties: literature and humanities, engineering, and mathematics and computer sciences. Using Cochran’s formula, the sample size was determined at 348, and the required data were gathered (a worked sketch of this calculation follows these steps). Of the 348 participants, 192 (55.2%) were male and 156 (44.8%) were female.

2. After collecting the required information, the correlations of the items were evaluated. Three items were eliminated, given their correlation coefficients of less than 0.3. For the remaining items, Cronbach’s alpha coefficients were desirable. Before the exploratory factor analysis, Bartlett’s test of sphericity was carried out, and a scree plot was drawn to determine the underlying constructs.

3. In terms of validity, exploratory factor analysis was performed to extract the underlying constructs, and confirmatory factor analysis was applied to confirm, correct, or reject them. Three indices (internal consistency, construct reliability, and composite reliability) were used to determine the reliability of the scale. In addition to confirmatory factor analysis, convergent and discriminant validities were measured.
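As a worked illustration of the sampling step above, Cochran’s formula with a finite-population correction can be computed as follows. This is a minimal sketch assuming the common defaults (z = 1.96, p = 0.5, e = 0.05), which the article does not report; with these defaults the formula yields roughly 361 rather than the 348 actually sampled, so the authors presumably used slightly different inputs or rounding.

```python
def cochran_sample_size(N, z=1.96, p=0.5, e=0.05):
    """Cochran's sample-size formula with a finite-population correction.

    N: population size; z: z-score for the confidence level;
    p: estimated proportion (0.5 is the most conservative choice);
    e: desired margin of error.
    """
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # infinite-population sample size
    return round(n0 / (1 + (n0 - 1) / N))    # finite-population correction

# Population of undergraduates reported in the article.
print(cochran_sample_size(6000))  # ~361 with these default inputs
```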

3. Results

After checking the accuracy of the data, the research questions were explored:

Question 1: What are the underlying constructs of the exam preparation scale?

Item analysis preceded the exploratory factor analysis, as reliability assessment precedes validity assessment. Therefore, evaluating the correlation of each item with the total scale and measuring Cronbach’s alpha coefficient (if an item is removed) is necessary; items with correlation coefficients < 0.3 are removed (22). In this study, the correlation coefficients of 3 items were below 0.3, so they were removed from the scale; Cronbach’s alpha was acceptable for the remaining items (Table 1).
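A minimal sketch of this screening step (corrected item-total correlations and alpha-if-item-deleted), assuming the responses are stored in a pandas DataFrame with one column per item; the function and column names are illustrative, not the authors’ code.

```python
import pandas as pd

def item_screening(df: pd.DataFrame, min_r: float = 0.3) -> pd.DataFrame:
    """Corrected item-total correlation and Cronbach's alpha if each item is deleted."""
    rows = []
    for col in df.columns:
        rest = df.drop(columns=col)
        total = rest.sum(axis=1)            # total score without the item itself
        r = df[col].corr(total)             # corrected item-total correlation
        k = rest.shape[1]
        alpha = k / (k - 1) * (1 - rest.var(ddof=1).sum() / total.var(ddof=1))
        rows.append({"item": col, "item_total_r": r,
                     "alpha_if_deleted": alpha, "keep": r >= min_r})
    return pd.DataFrame(rows)

# Hypothetical usage: 'responses' holds the 28 draft items scored 1-5.
# report = item_screening(responses)
# retained = report.loc[report["keep"], "item"].tolist()
```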

Table 1. Kaiser-Meyer-Olkin (KMO) Measure and Bartlett’s Test of Sphericity
KMO | Bartlett’s Test | Degrees of Freedom | Significance
0.953 | 6.03 | 300 | 0.001

The Kaiser-Meyer-Olkin (KMO) value was 0.953, and the significance level of Bartlett’s test of sphericity was less than 0.001. Based on both tests, factor analysis was justified.
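For reference, Bartlett’s test of sphericity can be computed directly from the inter-item correlation matrix; the sketch below implements the standard textbook formula and is not the authors’ code.

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data: np.ndarray):
    """Bartlett's test of sphericity: H0 says the correlation matrix is an identity matrix."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2                    # 25 items give df = 300, as in Table 1
    return chi2, df, stats.chi2.sf(chi2, df)
```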

3.1. The Underlying Constructs

Different criteria can be applied to determine the number of constructs in factor analysis, including the scree plot (Figure 1).

Figure 1. The Scree Plot of the 25-Item Exam Preparation Scale

The scree plot indicated two acceptable constructs; therefore, two underlying constructs were extracted for exam preparation strategies.
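A scree plot of this kind is simply the sorted eigenvalues of the inter-item correlation matrix; a minimal sketch, assuming the raw responses are available as a NumPy array (illustrative, not the authors’ code):

```python
import numpy as np
import matplotlib.pyplot as plt

def scree_plot(data: np.ndarray) -> np.ndarray:
    """Plot sorted eigenvalues of the correlation matrix; the 'elbow' suggests the factor count."""
    R = np.corrcoef(data, rowvar=False)
    eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]
    plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, marker="o")
    plt.axhline(1.0, linestyle="--")        # Kaiser criterion: retain eigenvalues > 1
    plt.xlabel("Component number")
    plt.ylabel("Eigenvalue")
    plt.show()
    return eigenvalues
```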

3.2. Factor Structure

Table 2 presents the extracted constructs and items from the exploratory factor analysis.

Table 2. The Final Extracted Items After Removing Flawed Items

Number | Item | Construct 1 | Construct 2 | Contribution
1 | I try to understand the content and meaning of materials for the exam. | 0.79 | - | 0.70
2 | I prepare myself before the exam. | 0.80 | - | 0.71
3 | I take notes while learning. | 0.75 | - | 0.69
4 | I try to learn the materials in a logical and understandable way. | 0.76 | - | 0.70
5 | To gain a better understanding, I also read other references and relevant sources. | 0.62 | - | 0.56
6 | I do not stop studying until I have fully understood the subject. | 0.71 | - | 0.63
7 | I prepare for the exams gradually and consistently throughout the term. | 0.72 | - | 0.57
8 | After studying, I try to form an understandable and comprehensive image of the subject. | 0.79 | - | 0.68
9 | I take notes on the subjects while preparing for the exam. | 0.72 | - | 0.55
10 | To learn better, I try to develop questions from the study subjects. | 0.70 | - | 0.54
11 | I try to complete assignments during the term. | 0.67 | - | 0.48
12 | In the final exams, I prefer exploratory questions. | 0.60 | - | 0.57
13 | I only study and highlight important subjects for the final exam. | - | 0.70 | 0.60
14 | I try to memorize the material for the exam. | - | 0.76 | 0.59
15 | I mostly study the night before the exam. | - | 0.64 | 0.62
16 | I select important subjects for memorization. | - | 0.74 | 0.63
17 | I avoid irrelevant subject matter or unnecessary descriptions. | - | 0.64 | 0.50
18 | I skip some subjects while preparing for the exam. | - | 0.68 | 0.56
19 | I only devote my time to subjects that are important for the exam. | - | 0.70 | 0.63
20 | While studying, I only concentrate on important subjects that are included in the exam. | - | 0.73 | 0.63
21 | I only study to get a passing grade. | - | 0.61 | 0.47
22 | I try to read the questions of previous exams set by the same teacher. | - | 0.66 | 0.58
23 | I prefer to organize the materials rather than memorize them. | - | 0.63 | 0.55
24 | I usually stay up the night before the exam. | - | 0.63 | 0.55
25 | I prefer multiple-choice questions. | - | 0.53 | 0.41
Eigenvalue | | 12.4 | 2.15 |
Variance percentage | | 42.6 | 28.61 |

All the items were significantly correlated with their underlying construct. Considering the sample size of the study, factor loadings above 0.40 were considered significant. Based on the findings, none of the items had a factor loading below 0.50; therefore, all items were correlated with their underlying latent construct. Overall, 13 items were related to one construct and 12 to the other.

In Table 2, the last column represents the contribution (communality) of each item. As the table shows, the items explained about 70% of the total variance. The last two rows present each construct’s eigenvalue and percentage of explained variance: the eigenvalue reflects the share of the total variance of all variables accounted for by a construct, and the variance percentage expresses that share as a percentage. Together, these indicators demonstrate the contribution of each construct to the scale.
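To make the relationship among these quantities concrete: an item’s contribution (communality) is the sum of its squared loadings across the extracted factors, and each factor’s explained variance (the eigenvalue row in Table 2) is the column sum of squared loadings. A generic sketch follows, with an illustrative loading matrix rather than the study’s full data; the 0.28 cross-loading for the first row is a hypothetical value chosen so that 0.79² + 0.28² ≈ the 0.70 contribution reported for item 1.

```python
import numpy as np

def factor_summary(loadings: np.ndarray):
    """loadings: (n_items, n_factors) matrix from the rotated solution."""
    communalities = (loadings ** 2).sum(axis=1)    # per-item contribution
    ss_loadings = (loadings ** 2).sum(axis=0)      # per-factor explained variance
    variance_pct = 100 * ss_loadings / loadings.shape[0]
    return communalities, ss_loadings, variance_pct

# Illustrative two-factor loadings for three items (not the study's data).
L = np.array([[0.79, 0.28], [0.10, 0.70], [0.05, 0.64]])
print(factor_summary(L))
```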

3.3. Construct Designation

The constructs were designated by identifying common meanings and content among the items of each construct and then homogenizing them. In addition, the latent content of the items was determined through the literature review. Items 1 - 12 were attributed to deep learning strategies, while items 13 - 25 were related to surface strategies. Finally, two underlying constructs, surface and deep strategies of exam preparation, were identified.

Question 2: Can we confirm the extracted structure?

LISREL was used to evaluate the developed model. Two types of analyses were carried out: specific and overall goodness-of-fit assessment. The specific assessment concerned the paths drawn from the latent constructs to their indicators, while the overall assessment used several goodness-of-fit indices.
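The authors fitted the model in LISREL; for readers without access to it, the same two-factor CFA could be specified in Python with the semopy package, sketched below under the assumption that the 25 item responses are columns i1 ... i25 of a DataFrame (hypothetical names, not the study’s file).

```python
import pandas as pd
import semopy

# Two-factor CFA mirroring the exploratory solution. Item column names
# (i1 ... i25) are hypothetical placeholders for the questionnaire items.
MODEL_DESC = """
deep    =~ i1 + i2 + i3 + i4 + i5 + i6 + i7 + i8 + i9 + i10 + i11 + i12
surface =~ i13 + i14 + i15 + i16 + i17 + i18 + i19 + i20 + i21 + i22 + i23 + i24 + i25
"""

def run_cfa(responses: pd.DataFrame):
    model = semopy.Model(MODEL_DESC)
    model.fit(responses)                 # maximum-likelihood-type objective by default
    estimates = model.inspect()          # loadings, error variances, test statistics
    fit = semopy.calc_stats(model)       # chi-square, RMSEA, CFI, GFI, ...
    return estimates, fit
```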

3.4. Confirmatory Factor Analysis

Table 3 presents the correlation between the latent constructs and the corresponding items.

Table 3. The Analysis of the Model and Items a

Construct | Item | Standard Factor Loading | T-Value | R² | Cronbach’s Alpha | Composite Reliability
Deep exam preparation | Item 1 | 0.83 | 18.80 | 0.69 | 0.94 | 0.96
| Item 2 | 0.83 | 18.95 | 0.70 | |
| Item 3 | 0.84 | 19.02 | 0.70 | |
| Item 4 | 0.84 | 19.03 | 0.71 | |
| Item 5 | 0.69 | 14.74 | 0.48 | |
| Item 6 | 0.77 | 16.82 | 0.59 | |
| Item 7 | 0.71 | 15.18 | 0.50 | |
| Item 8 | 0.77 | 16.92 | 0.61 | |
| Item 9 | 0.66 | 13.56 | 0.43 | |
| Item 10 | 0.65 | 13.36 | 0.43 | |
| Item 11 | 0.75 | 16.08 | 0.56 | |
| Item 12 | 0.67 | 14.66 | 0.45 | |
Surface exam preparation | Item 13 | 0.71 | 14.83 | 0.50 | 0.92 | 0.95
| Item 14 | 0.62 | 12.47 | 0.39 | |
| Item 15 | 0.79 | 17.23 | 0.62 | |
| Item 16 | 0.78 | 16.89 | 0.61 | |
| Item 17 | 0.66 | 13.45 | 0.43 | |
| Item 18 | 0.68 | 14.03 | 0.47 | |
| Item 19 | 0.76 | 16.19 | 0.57 | |
| Item 20 | 0.75 | 15.88 | 0.56 | |
| Item 21 | 0.67 | 13.74 | 0.45 | |
| Item 22 | 0.62 | 12.35 | 0.38 | |
| Item 23 | 0.75 | 16.04 | 0.57 | |
| Item 24 | 0.69 | 14.16 | 0.47 | |
| Item 25 | 0.57 | 11.87 | 0.33 | |

a χ², 514.75; df, 251; P < 0.001; χ²/df, 2.05; RMSEA, 0.055; GFI, 0.90; AGFI, 0.87; IFI, 0.99; NFI, 0.98; CFI, 0.99.

Evaluation of the correlation between each item and its underlying construct showed t-values greater than 2 for all items, indicating significant correlations and the applicability of the model at the level of specific indices. For confirmation, the overall goodness-of-fit indices were also measured, using the maximum likelihood estimation method.

To evaluate the overall goodness of fit, the chi-square test was used. However, this index is greatly influenced by sample size: with large samples, acceptable fit is generally indicated, whereas with small samples the model’s strengths and weaknesses cannot be assessed (23). Accordingly, the ratio of chi-square to degrees of freedom (χ²/df) was used to minimize the effect of sample size on the index (values < 3 are optimal) (24).

The root mean square error of approximation (RMSEA; acceptable model fit < 0.06) (25), the goodness-of-fit index (GFI), and the adjusted goodness-of-fit index (AGFI) represent the relative variances and covariances in the model; values close to 1 (> 0.9) indicate acceptable model fit (26). The comparative fit index (CFI), normed fit index (NFI), and incremental fit index (IFI) each have an acceptable range of > 0.9; values above 0.95 indicate good model fit (27).
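As a worked check of these thresholds against the values in Table 3: with χ² = 514.75, df = 251, and n = 348, the χ²/df ratio and the standard RMSEA formula reproduce the reported 2.05 and 0.055 (a verification sketch, not the authors’ computation).

```python
from math import sqrt

chi2, df, n = 514.75, 251, 348
ratio = chi2 / df                                   # 2.05 < 3, acceptable
rmsea = sqrt(max(chi2 - df, 0) / (df * (n - 1)))    # 0.055 < 0.06, acceptable
print(f"chi2/df = {ratio:.2f}, RMSEA = {rmsea:.3f}")
```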

To develop the desired model, several error covariances were allowed among the items, within the limits supported by the literature. Consistent with the exploratory factor analysis, the specific and overall analyses confirmed the results of the first stage, and the overall goodness-of-fit indices were favorable.

Question 3: What are the reliability indices?

Construct reliability is the degree to which a test measures what it claims to measure. Based on a hypothesis, the test developer makes inferences about a variable and predicts the relevance and applicability of test scores in different situations. If the analysis confirms these predictions, construct reliability is approved. Otherwise, three possibilities arise: 1) the test was flawed in design; 2) the hypothesis was inaccurate and needs revision; or 3) the test failed to measure the desired features (28).

Three different reliability measures (internal consistency, construct reliability, and composite reliability) were applied in this study. Cronbach’s alpha coefficient was calculated to evaluate the internal consistency of the scale; this measure is appropriate for assessing the internal consistency of the items (acceptable range > 0.7) (22). Based on the findings, Cronbach’s alpha was acceptable, confirming the internal consistency of the scale.
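For reference, the coefficient has a direct implementation from the item variances and the total-score variance; a minimal sketch of the standard formula:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return k / (k - 1) * (1 - item_vars / total_var)
```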

Additionally, construct reliability is confirmed when the factor loadings of the items are significant and the t-values are above 2; in the present study, the t-values were acceptable and significant for all items. Finally, composite reliability, which evaluates the adequacy of the items related to a latent construct, was measured; the acceptable value for composite reliability is 0.7 (22).

The last two columns of Table 3 represent the internal consistency and composite reliability of the scale, respectively. Based on the analysis of these reliability measures, the reliability of the questionnaire was confirmed.

Question 4: What are the validity indices?

In addition to confirmatory factor analysis, convergent and discriminant validities were measured. Convergent validity refers to the extent to which indicators describe a latent variable; it determines whether the items related to an underlying construct actually measure that construct. There are two major criteria for the analysis of convergent validity (a computational sketch follows this list):

1) The factor loadings of the items should be above 0.5, with 0.7 considered optimal (29). However, some studies have accepted lower factor loadings (0.35) (30, 31). In the present study, the threshold for factor loadings was set at 0.5.

2) The average variance extracted (AVE) of each construct should be higher than 0.5. AVE is the mean of the squared factor loadings of a construct’s items (31). Table 3 indicates the significance of the factor loadings, thereby confirming convergent validity.
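Both criteria follow directly from the standardized loadings in Table 3. The sketch below applies the standard formulas to the deep-construct loadings; because the published loadings are rounded to two decimals, the computed composite reliability (about 0.94) lands slightly below the 0.96 reported in Table 3, while the AVE (about 0.57) matches Table 4.

```python
import numpy as np

def composite_reliability(loadings):
    lam = np.asarray(loadings)
    num = lam.sum() ** 2
    return num / (num + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    lam = np.asarray(loadings)
    return (lam ** 2).mean()

# Standardized loadings of the deep construct (items 1-12, Table 3).
deep = [0.83, 0.83, 0.84, 0.84, 0.69, 0.77, 0.71, 0.77, 0.66, 0.65, 0.75, 0.67]
print(composite_reliability(deep))        # ~0.94 from the rounded loadings
print(average_variance_extracted(deep))   # ~0.57, matching Table 4
```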

Moreover, discriminant validity is the extent to which factors are distinct and uncorrelated. Lack of discriminant validity indicates that a variable belongs to two constructs (cross-loading). Discriminant validity is confirmed if the AVE is higher than the squared correlation of two latent variables (31). Table 4 presents the results of the discriminant validity analysis.

Table 4. The Squared Correlations Between the Constructs and the AVE of Each Construct (Diagonal Values)

Constructs | 1 | 2
1. Deep exam preparation approach | 0.57 | -
2. Surface exam preparation approach | 0.003 | 0.50

As presented in Table 4, the AVE of each construct (0.57 for the deep approach and 0.50 for the surface approach) exceeded the 0.5 threshold; together with the significant factor loadings and composite reliability values above 0.7, this confirms convergent validity. In addition, the AVE of each construct was higher than the squared correlation between the constructs (0.003), confirming their discriminant validity.
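The Fornell-Larcker comparison in Table 4 reduces to one inequality per pair of constructs; a tiny sketch with the reported values:

```python
# AVE values and squared inter-construct correlation reported in Table 4.
ave = {"deep": 0.57, "surface": 0.50}
squared_corr = 0.003

# Discriminant validity holds if each construct's AVE exceeds the squared correlation.
assert all(v > squared_corr for v in ave.values())
print("Discriminant validity confirmed")
```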

4. Discussion

The aim of the present study was to develop a scale for the assessment of exam preparation strategies and to determine its validity, reliability, and underlying constructs. For this purpose, a 28-item scale was developed in accordance with the literature and administered to 348 subjects. The reliability of the questionnaire was determined by Cronbach’s alpha, and the correlation of each item with the total scale was confirmed. Based on the exploratory factor analysis, deep and surface strategies were introduced as the main constructs of the scale; these constructs were then confirmed by confirmatory factor analysis. In total, 25 of the 28 items in the primary scale were related to the identified constructs.

The present findings are in line with studies by Fathabadi and Seif (18), Soltanalgharaei (1), Shakournia et al. (19), Yusefi Afrashte et al. (20), and other studies using the three approaches to learning (deep, surface, and achieving) (13, 14). McGregor and Elliot conceptualized these approaches as a combination of motivation and strategy (9). It should be noted that none of the Iranian studies discussed above comprehensively evaluated the reliability or validity of their scales for exam preparation strategies. In the present study, however, the designed scale was thoroughly examined and can therefore be applied effectively in future research.

4.1. Conclusions

In the present study, different methods and criteria were used to evaluate the reliability and validity of the scale. Three reliability indices were measured: internal consistency, construct reliability, and composite reliability. Based on the results, the scale was found to be reliable. Overall, the reliability of an instrument and its constituent elements is the first step in its validation, as an unreliable index cannot be depended on. Considering the favorable results, the reliability of the scale was confirmed. Moreover, different indices were applied for the evaluation of validity, including factor validity, discriminant validity, and convergent validity; both discriminant and convergent validity were found to be favorable. Therefore, it can be concluded that the constructs of the scale are both valid and reliable.

4.2. Suggestions

Based on the present findings, researchers, students, and experts can use the developed scale in projects, dissertations, and research studies in higher medical education. Moreover, the constructed scale is suitable for evaluating students’ strategies for exam preparation.

4.3. Limitations

1) The results of this study are limited to the academic year 2015 - 2016.

2) There are no recent or new studies about exam preparation strategies in Iran.

Acknowledgements

References

1. Soltanalgharaei KH. Relationship of study skills and exam preparation method in master students. Educ Strateg Med Sci. 2014;7(1):51-6.
2. Azizian M, Abedi M. An investigation of the changing pattern of reading errors among second- to fifth-grade primary school students. Stud Educ Psychol. 2007;8(1):101-14.
3. Hasan AR, Zahra K. Students' familiarity with reading methods: A literature review. Ketab-e Mah-e Kolliyat. 2009;12(11):70-3.
4. Ghanbari S, Ardalan MR, Karimi I. Effect of the challenges of student learning evaluation on the deliberate practice study approach. Educ Strategy Med Sci. 2015;8(2):105-13.
5. Mehdinezhad V, Esmaeeli R. Students' approaches to learning: superficial, strategic and deep. Educ Strategy Med Sci. 2015;8(2):83-9.
6. Gettinger M, Seibert JK. Contributions of study skills to academic competence. School Psychol Rev. 2002;31(3):350.
7. Chen ML. Influence of grade level on perceptual learning style preferences and language learning strategies of Taiwanese English as a foreign language learners. Learn Individ Differ. 2009;19(2):304-8. [DOI]
8. Heller ML, Cassady JC. Predicting community college and university student success. J College Student Retent Res Theory Pract. 2016;18(4):431-56. [DOI]
9. Tsai CY, Li YY, Cheng YY. The relationships among adult affective factors, engagement in science, and scientific competencies. Adult Educ Q. 2016;67(1):30-47. [DOI]
10. Thibodeaux J, Deutsch A, Kitsantas A, Winsler A. First-year college students' time use. J Adv Acad. 2016;28(1):5-27. [DOI]
11. Yip MCW. Differences in learning and study strategies between high and low achieving university students: A Hong Kong study. Educ Psychol. 2007;27(5):597-606. [DOI]
12. Abd KMS, Seyf AA, Karimi Y, Biabangard E. Making and normalization of the academic motivation scale in male high school students in Mashhad and the effect of instruction of study skills on motivation. Stud Educ Psychol. 2008;18(1):5-20.
13. Bergey BW, Deacon SH, Parrila RK. Metacognitive reading and study strategies and academic achievement of university students with and without a history of reading difficulties. J Learn Disabil. 2017;50(1):81-94. [DOI][PubMed]
14. Chevalier TM, Parrila R, Ritchie KC, Deacon SH. The role of metacognitive reading strategies, metacognitive study and learning strategies, and behavioral study and learning strategies in predicting academic success in students with and without a history of reading difficulties. J Learn Disabil. 2017;50(1):34-48. [DOI][PubMed]
15. Tuan NM. Learning approaches in relation with demographic factors. VNU J Sci Educ Res. 2015;31(2):27-39.
16. Biggs J. Individual differences in study processes and the quality of learning outcomes. High Educ. 1979;8(4):381-94. [DOI]
17. Biggs J. Aligning teaching and assessing to course objectives. In: Teaching and learning in higher education: New trends and innovations. 2003.
18. Fathabadi J, Saif A. Investigating the effects of type of assessment (essay and multiple-choice) on students' approaches to studying and exam preparation strategies in students with high and low academic achievement. J Educ Psychol. 2008;14(4):21-46.
19. Shakurnia A, Ghaforian Borojerdnia M, Elhampour H. Approaches to study and learning of students in Ahvaz Jundishapur University of Medical Sciences. Jundishapur Sci Med J. 2013:1021.
20. Yosefiafrashteh M, Siami L, Rezaie A. Investigating the relationship between classroom assessment practices and students' assessment preferences with their learning approaches. Educ Measure. 2014;5(17):125-48.
21. Sarmad Z, Hejazi E, Bazargan A. Research methods in the behavioral sciences. 2015.
22. Bazargan A, Dadras M, Yosefiafrashteh M. Developing, establishing the reliability, and validating a measurement tool for measuring the quality of academic services to students. J Res Plan High Educ. 2014;1(72):73-97.
23. Raykov T, Marcoulides GA. A first course in structural equation modeling. 2012.
24. Haghshenas L, Abedi MR, Baghban I. Standardization, validity and reliability of the Strong Interest Inventory among high school, vocational, work, and pre-university students in Isfahan. Counsel Res Dev. 2009;7(28):95-116.
25. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct Equ Modeling. 1999;6(1):1-55. [DOI]
26. Hooman HA. Multivariate data analysis in behavioral research. 2002.
27. Hooman HA. Structural equation modeling using LISREL software. 2011.
28. Kabiri M. A comparison between the alpha coefficient and structural equation modeling methods for the estimation of reliability. J Psychol. 14(1):39-61.
29. Bollen K, Lennox R. Conventional wisdom on measurement: A structural equation perspective. Psychol Bull. 1991;110(2):305-14. [DOI]
30. Papanastasiou EC. Factor structure of the attitudes toward research scale. Stat Educ Res J. 2005;4(1):16-26.
31. Tenenhaus M, Vinzi VE. PLS regression, PLS path modeling and generalized Procrustean analysis: A combined approach for multiblock analysis. J Chemometrics. 2005;19(3):145-53. [DOI]
