Psychometric Evaluation of the Adequacy of the Teaching Performance Evaluation Questionnaire in Urmia University of Medical Sciences, Urmia, Iran, 2015

AUTHORS

Aram Feizi 1 , Parivash Mohammadlou 2 , Leili Salehi 3 , *

AUTHORS INFORMATION

1 Associate Professor, Patient Safety Research Center, Urmia University of Medical Sciences, Urmia, Iran

2 Ms in Medical Education, Iran University of Medical Sciences, Tehran, Iran

3 Associate professor, Research Center for Health, Safety and Environment, Health Education and Promotion Department, Alborz University of Medical Sciences, Karaj, Iran

ARTICLE INFORMATION

Strides in Development of Medical Education: 14 (2); e66275
Published Online: September 27, 2017
Article Type: Research Article
Received: January 30, 2017
Revised: April 30, 2017
Accepted: April 30, 2017
Abstract

Background: Assisting teachers to modify and improve their method of teaching is among the main goals of teachers’ evaluations. The current study aimed to psychometrically evaluate the teaching performance evaluation questionnaire in Urmia University of Medical Sciences, Urmia, Iran.

Methods: The original 28-item, Likert-type scale, obtained from a former study, was translated into Persian after obtaining permission from its designer. Then, the item impact scores, content validity index (CVI), and content validity ratio (CVR) of the questionnaire were assessed by 11 experts, and its construct validity was evaluated using exploratory factor analysis. The reliability of the scale was assessed through its internal consistency and test-retest reliability.

Results: The original version included 28 items, of which 23 were retained based on an item impact score > 1.5, CVR > 0.59, and CVI > 0.79. Based on the exploratory factor analysis, the final version of the questionnaire included 23 items grouped into 3 extracted factors, which together explained 51% of the total variance.

Conclusions: The results of the current study indicated a refined factor structure and good reliability, supporting the psychometric adequacy of the teaching performance evaluation questionnaire. These results can be used by universities as well as other educational institutes to evaluate teaching performance.

Keywords

Validity; Reliability; Psychometrics; Teachers’ Evaluation; Teachers

Copyright © 2017, Strides in Development of Medical Education. This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits copying and redistributing the material for noncommercial purposes only, provided the original work is properly cited.
1. Background

Teachers’ evaluation refers to determining the extent to which faculty members achieve their educational goals (1). Assisting teachers to modify and improve their teaching strategies is among the main goals of such evaluation. Evaluation of teachers by students is the most common method of evaluating teachers (2), yet it is also one of the most complicated, owing to its low validity, the inaccuracy of the available tools, and the way it is implemented; most importantly, such scales cannot provide accurate and specific information (3). A proper tool should be valid, acceptable, and cost-effective. In fact, evaluation requires both objective and subjective instruments and should employ quantitative as well as qualitative methods (4).

Although evaluation of teachers by their students is used extensively, the approach has some pitfalls. Ghazi Tabatabai and Yousefi Afrashteh stated in this regard, “It seems that scores given by students are under the influence of teaching methods, students’ satisfaction as well as their attitudes toward educational course, students’ personality and their sociopsychological requirements” (5). Although most universities worldwide attempt to evaluate their teachers through the students, most teachers and students are not satisfied with the validity and appropriateness of such tools and call them useless (6).

Results of a study by Ziaee et al. showed that only 12% of students considered the questions raised in such scales to be good criteria to evaluate teachers’ performance; on the other hand, only 20% of faculty members believed that such questions reflect all their activities and efforts (7). Based on the results of similar studies, Aliasgharpour et al. reported that evaluation of teachers by students can only indicate good validity and reliability if it benefits from multidimensional assessment methods as well as proper design (8).

Javaherizadeh indicated that the content of questions in the evaluation scale is the most important and problematic part of the evaluation process from the viewpoint of teachers; 88% of teachers believe that some questions in the scale do not reflect their teaching activities properly as they are comprehensive and subjective procedures (9).

Accordingly, Emdadi et al. determined the reliability and validity of the teacher evaluation forms given to students in theoretical courses. Their assessment tool was a 14-item questionnaire whose validity was evaluated based on the content validity index (CVI) and content validity ratio (CVR); its reliability was assessed using Spearman’s correlation coefficient (P = 0.456). The results of their study indicated that the evaluation scale did not have adequate reliability: although the teacher evaluation forms registered in the Sama system of Hamadan University of Medical Sciences, Hamadan, Iran, conceptually have acceptable validity, their internal consistency is poor. The reliability of evaluation forms is of great importance; hence, a proper tool for teachers’ evaluation, with the minimum possible influence of students’ personal interests and involvement, should be developed without rushing to any hasty judgment (3).

Although teachers emphasize that the contents of evaluation forms are inappropriate (10), such forms are widely used in all Iranian universities.

The use of various evaluation forms across universities, their unknown design strategies and methods of administration, and the unknown validity and reliability of the different forms led the authors of the current study to adopt and standardize the teaching performance evaluation questionnaire developed by Moreno-Murcia et al., which was built on a comprehensive review of the teachers’ evaluation scales used in Spanish universities and the comments of outstanding professors (11) and whose validity and reliability were confirmed in Spain (10).

2. Methods

The current study aimed to psychometrically evaluate the teaching performance questionnaire administered to 134 medical students of Urmia University of Medical Sciences, Urmia, Iran, in 2015. To estimate the sample size, a ratio of 5 participants per item was used (28 × 5 = 140), and a total of 150 questionnaires were distributed among the students; 16 were excluded because of incomplete data. For the psychometric analysis, after obtaining permission from the questionnaire designer, translation and back-translation were performed as follows: the 28-item teaching performance questionnaire was translated into Persian by 2 translators with a good command of both English and Persian; the questionnaire was then translated back into English by 2 other translators, and discrepancies between the original and back-translated versions were resolved (11).

The study population comprised students studying medicine at Urmia University of Medical Sciences in the academic year 2014 - 2015; convenience sampling was used to collect the data. The content and face validity of the questionnaire were analyzed, assessed, and modified using the comments of 11 experts familiar with medical sciences. In addition, a pilot study was conducted on 15 students to assess the reliability of the questionnaire. The questionnaire was then distributed among the participants, and its construct validity and internal consistency reliability were assessed. The students recruited in the pilot study were excluded from the target population.

To determine the validity of the questionnaire, face validity (qualitative and quantitative), content validity, and construct validity (exploratory factor analysis) were assessed, and a total of 11 participants were enrolled in the quantitative analyses. For the qualitative face validity, 5 items were qualitatively modified. For the quantitative face validity, an inventory including all questionnaire items was given separately to 11 participants in the target group in order to calculate the item impact scores; items with an impact score > 1.5 were considered acceptable and retained for further analyses.
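The article does not spell out how the item impact score was computed; the minimal Python sketch below uses the formula commonly reported for quantitative face validity (impact score = frequency of ratings of 4 or 5 on a 5-point importance scale × mean importance) together with the 1.5 cutoff mentioned above. The formula and the example ratings are illustrative assumptions, not data from the study.

```python
# Minimal sketch of the item impact score commonly used for quantitative face
# validity (an assumption; the authors do not report the exact formula).

def item_impact_score(ratings):
    """ratings: importance scores given by the target group on a 1 - 5 scale."""
    n = len(ratings)
    frequency = sum(1 for r in ratings if r >= 4) / n   # proportion rating 4 or 5
    importance = sum(ratings) / n                       # mean importance score
    return frequency * importance

# Hypothetical ratings from the 11 participants for one item
ratings = [5, 4, 4, 5, 3, 4, 5, 4, 4, 3, 5]
score = item_impact_score(ratings)
print(f"Impact score = {score:.2f} (item kept if > 1.5)")   # 3.42 for these ratings
```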

To qualitatively analyze the content validity, criteria such as compliance with Persian grammar, use of proper wording, placement of the items in the right order, appropriate scoring, allocation of adequate time to complete the questionnaire, and fitness of the selected domains were considered. Hence, the items were repeatedly reviewed, and the necessary modifications were made.

To evaluate the content validity of the questionnaire quantitatively, the content validity ratio (CVR) and the content validity index (CVI) were used.

CVR was calculated using Equation 1:

Equation 1.

$$\mathrm{CVR} = \frac{n_e - \frac{N}{2}}{\frac{N}{2}}$$

where $n_e$ is the number of experts selecting the "essential" option and $N$ is the total number of experts. The result was then compared with the Lawshe table; for 11 experts, the critical CVR value is 0.59.
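For illustration, a minimal Python sketch of this computation for the 11-expert panel described above; the per-item expert count is hypothetical.

```python
# Lawshe's content validity ratio: CVR = (n_e - N/2) / (N/2),
# where n_e is the number of experts rating the item "essential".

def cvr(n_essential, n_experts):
    return (n_essential - n_experts / 2) / (n_experts / 2)

# Hypothetical example: 10 of the 11 experts rate an item essential.
value = cvr(10, 11)
print(f"CVR = {value:.2f}")   # 0.82, above the 0.59 cutoff for 11 experts
```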

CVI reflects the comprehensiveness of the experts’ judgments about the validity or feasibility of the model, test, or final instrument; the closer the CVI is to 0.99, the higher the content validity, and vice versa. An expert panel was used to evaluate whether the items of the questionnaire were well designed to measure the intended variables. Hence, 3 criteria, namely simplicity and fluency, relevance, and clarity, were scored for each item on a 4-option Likert scale.

According to the results of the current study, the CVI of each item was calculated as the number of experts who chose option 3 or 4 divided by the total number of experts (Equation 2):

Equation 2.

$$\mathrm{CVI} = \frac{\text{number of experts rating the item 3 or 4}}{\text{total number of experts}}$$

CVIs > 0.79 were considered acceptable, CVIs of 0.70 - 0.79 questionable and requiring revision, and CVIs < 0.70 unacceptable.
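A minimal Python sketch of the item-level CVI computation and the decision rule described above; the expert ratings are hypothetical.

```python
# Item-level CVI: proportion of experts rating the item 3 or 4 on the 4-point
# scale, computed separately for relevance, simplicity, and clarity.

def item_cvi(ratings):
    return sum(1 for r in ratings if r >= 3) / len(ratings)

# Hypothetical ratings by the 11 experts for a single item
expert_ratings = {
    "relevance":  [4, 4, 3, 4, 3, 4, 4, 2, 3, 4, 4],
    "simplicity": [3, 4, 4, 3, 4, 3, 4, 4, 3, 2, 4],
    "clarity":    [4, 3, 4, 4, 3, 4, 3, 4, 4, 4, 3],
}

for criterion, ratings in expert_ratings.items():
    cvi = item_cvi(ratings)
    verdict = "keep" if cvi > 0.79 else "revise" if cvi >= 0.70 else "drop"
    print(f"{criterion}: CVI = {cvi:.2f} -> {verdict}")
```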

Construct validity was evaluated by exploratory factor analysis using the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy, Bartlett's test of sphericity, the scree plot, eigenvalues, and varimax rotation. Using these methods, items with high correlations were grouped in the same factor. To assess the reliability of the questionnaire, the revised version was distributed among the study participants, data were extracted from the completed questionnaires, and Cronbach's alpha was calculated for the questionnaire as a whole and for each factor separately.
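A minimal sketch of this workflow (KMO, Bartlett's test, and a varimax-rotated factor solution) is shown below. It assumes the open-source factor_analyzer Python package rather than the software actually used by the authors, which is not named; the input file and its dimensions are hypothetical.

```python
# Sketch of the exploratory factor analysis workflow described above, using the
# factor_analyzer package (an assumption; the authors' software is not named).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

# Hypothetical data: one row per student, one column per questionnaire item.
responses = pd.read_csv("teaching_performance_items.csv")   # e.g., 134 x 23 Likert scores

# Sampling adequacy and sphericity checks
kmo_per_item, kmo_total = calculate_kmo(responses)
chi2, p_value = calculate_bartlett_sphericity(responses)
print(f"KMO = {kmo_total:.3f}, Bartlett chi2 = {chi2:.1f}, p = {p_value:.4f}")

# Principal extraction with varimax rotation, retaining 3 factors
fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
fa.fit(responses)

loadings = pd.DataFrame(fa.loadings_, index=responses.columns,
                        columns=["Factor 1", "Factor 2", "Factor 3"])
print(loadings.round(3))           # rotated factor loadings (cf. Table 2)
print(fa.get_factor_variance())    # variance explained by each factor
```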

To observe ethical considerations, the interviewer introduced himself and presented a letter of introduction, explained the study goals and objectives as well as the method of completing the questionnaire to the participants, and assured them of the confidentiality of their data.

3. Results

The mean age of the students was 21.68 years; 44.3% were female, and the mean grade point average was 15.86.

The results of the face validity analysis showed that 5 items regarding the performance of teachers required qualitative revision; all items were retained in the quantitative analysis. The content validity was then analyzed using CVI and CVR.

The CVR results relied on the assessments of 11 experts and were compared with the critical values in the Lawshe table. According to this table, for 11 participants the critical CVR is 0.59; items with CVR > 0.59 and an average numerical judgment > 1.1 were maintained. According to the obtained results, all items were accepted and maintained (Table 1).

Table 1. The Content Validity Ratio and Content Validity Index of the Items
Item | CVR | CVI: Relevance | CVI: Simplicity | CVI: Clarity
1- The teacher provides the minimum content associated with a topic based on the basic level of learners’ knowledge | 1.72 | 0.81 | 0.72 | 0.90
2- The teacher is easily accessible (lessons, e-mails, etc.) | 1.54 | 0.63 | 0.81 | 0.63
3- The teacher lets learners categorize and publish a part of course projects | 1.36 | 0.63 | 0.63 | 0.54
4- The teacher gives clear data regarding aims, references, education, contents, and evaluation methods in a certain curriculum | 2 | 1.00 | 1.00 | 1.00
5- The teacher gives learners information about the competency of students expect from teachers | 1.90 | 0.72 | 0.70 | 0.72
6- The teacher provides scientific knowledge to learners to get a better understanding of the issues | 1.90 | 1.00 | 0.81 | 0.90
7- The teacher provides the contents after expressing their important aspects in a logical platform | 1.90 | 0.90 | 0.90 | 0.81
8- The teacher promotes and facilitates the participation of learners | 1.90 | 1.00 | 1.00 | 0.90
9- The teacher promotes individual activities | 1.45 | 0.72 | 0.72 | 0.63
10- The teacher promotes teamwork | 1.72 | 0.81 | 0.81 | 0.81
11- The teacher relates teaching to the specific environment | 1.72 | 0.90 | 0.72 | 0.63
12- The teacher performs final evaluation in the classroom in addition to the early assessment of sessions and topic of the lesson | 1.36 | 0.72 | 0.90 | 0.40
13- The teacher promotes learners’ interests and encourages them to learn | 1.72 | 0.81 | 0.90 | 0.63
14- The teacher promotes the spirit of critique and research in learners | 1.81 | 1.00 | 0.90 | 1.00
15- The teacher facilitates teacher-learner and intra-learner interactions | 1.81 | 1.00 | 1.00 | 1.00
16- The teacher is present in the classroom and answers learners’ questions clearly | 1.72 | 0.81 | 0.90 | 0.81
17- The teacher adequately meets educational requirements of learners | 1.18 | 0.60 | 0.73 | 0.63
18- The teacher maintains genuine mutual student-teacher respect | 1.72 | 0.90 | 0.90 | 0.72
19- The teacher defines projects to learners to actively involve them in course tasks | 1.90 | 1.00 | 1.00 | 0.90
20- The teacher designs the curriculum based on the learners’ laboratory experiments | 1.63 | 0.72 | 0.81 | 0.63
21- The teacher significantly benefits from communications and information technologies | 1.90 | 0.90 | 0.80 | 0.80
22- The teacher has good command of the course | 1.90 | 0.80 | 0.81 | 0.81
23- The teacher links issue contents to those of other courses | 1.80 | 0.80 | 0.70 | 0.70
24- The teacher designs the curriculum to maintain class dynamics | 1.80 | 0.81 | 0.63 | 0.63
25- The teacher benefits from other education-facilitating references | 1.72 | 0.81 | 0.72 | 0.54
26- The teacher communicates satisfactorily with students | 1.63 | 0.81 | 0.81 | 0.81
27- The teacher designs and provides curriculum to promote competency in learners | 1.72 | 0.63 | 0.72 | 0.45
28- The teacher uses appropriate criteria matched with the curriculum to evaluate learners’ activities | 1.63 | 0.63 | 0.63 | 0.63

Abbreviations: CVI, Content Validity Index; CVR, Content Validity Ratio.

Based on the CVI results, items with CVI > 0.79 were maintained, and those with CVI between 0.70 and 0.79 were modified. Accordingly, 5 items (3, 12, 17, 27, 28) were removed, and 7 items (1, 4, 7, 11, 21, 22, 23) were modified; item 23 was also accepted after modification (Table 1).

3.1. Construct Validity

Sampling adequacy was assessed for the exploratory factor analysis; the KMO value was 0.849, and Bartlett's test of sphericity was significant (χ2 = 1083.798, P < 0.001). Hence, the basic conditions for exploratory factor analysis were met.

Principal component analysis was used to extract the factors, and eigenvalues (the sum of the squared factor loadings of each factor) were used to determine the number of factors. Three factors with eigenvalues greater than 1 were retained and together explained 50.93% of the total variance in the teaching performance scores. Varimax rotation was used to simplify the structure, and, accordingly, 3 domains were extracted. Based on the rotated correlation matrix of the questionnaire items, the items attributed to each factor were identified (Table 2).

Table 2. Factor Load of Each Item Based on the Varimax-Rotation
Item | Factor 1 | Factor 2 | Factor 3
1 | 0.697 | 0.052 | 0.242
2 | 0.466 | 0.331 | -0.027
3 | 0.427 | 0.452 | 0.203
4 | 0.361 | 0.433 | 0.452
5 | 0.585 | 0.224 | 0.380
6 | 0.037 | 0.727 | 0.067
7 | 0.181 | 0.498 | 0.632
8 | 0.350 | 0.512 | 0.020
9 | -0.080 | 0.372 | 0.772
10 | 0.296 | 0.659 | 0.181
11 | 0.552 | 0.259 | 0.418
12 | 0.552 | 0.064 | 0.515
13 | 0.646 | 0.144 | 0.552
14 | 0.600 | 0.138 | 0.203
15 | 0.732 | 0.225 | -0.011
16 | 0.209 | -0.138 | 0.664
17 | 0.366 | 0.158 | 0.164
18 | 0.362 | 0.376 | 0.118
19 | 0.691 | 0.322 | 0.144
20 | 0.417 | 0.367 | 0.392
21 | 0.578 | 0.442 | 0.239
22 | 0.251 | 0.656 | 0.155
23 | 0.562 | 0.486 | 0.166

Based on the rotated correlation matrix of the teaching performance questionnaire items, the items attributed to each factor were identified and named. To keep the number of factors small and to align them with the framework described in the introduction, the components obtained from the varimax-rotated matrix were compared with those reported by the questionnaire designers and were then named accordingly for better understanding and alignment with the theoretical factors.

The variables with high internal consistency were placed in factors named planning, presenting, and conclusion (Table 3).

Table 3. The Final Extracted Factors, Their Names, and Attributed Items
Factor | Name of the Factor | Attributed Items
1 | Planning | 6, 8, 10, 18, 22
2 | Presenting | 1, 2, 3, 5, 11, 12, 13, 14, 15, 17, 19, 20, 21, 23
3 | Conclusion | 4, 7, 9, 16
4. Discussion

In the current study, to measure the reliability of the teaching performance questionnaire, stability (test-retest) reliability was analyzed using the intraclass correlation coefficient (ICC), and internal consistency reliability was assessed using Cronbach's alpha.

Internal consistency refers to the degree to which all items of a questionnaire are intercorrelated and can be summarized in a single index; Cronbach's alpha is the most common way to measure it. In the current study, the Cronbach's alpha coefficient was 0.92 for the whole questionnaire and 0.71, 0.91, and 0.70 for the three factors.
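A minimal Python sketch of the Cronbach's alpha computation, written directly from the standard formula; the example response matrix is hypothetical.

```python
# Cronbach's alpha from the standard formula:
# alpha = k / (k - 1) * (1 - sum(item variances) / variance of the total score)
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical Likert responses from 5 students on 4 items
example = [[4, 5, 4, 5],
           [3, 3, 4, 3],
           [5, 5, 5, 4],
           [2, 3, 2, 3],
           [4, 4, 5, 4]]
print(f"alpha = {cronbach_alpha(example):.2f}")
```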

Stability reliability refers to administering the same test to the same group twice within a certain interval (test-retest). The scores of both the test and the retest were used to calculate the ICC.
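A minimal sketch of the test-retest ICC, assuming the pingouin Python package (the software used by the authors is not named); the long-format data frame with the two administrations is hypothetical.

```python
# Test-retest reliability via the intraclass correlation coefficient, using the
# pingouin package (an assumption; the authors' software is not named).
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: total questionnaire score of each student
# at the first (test) and second (retest) administration.
data = pd.DataFrame({
    "student":  [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "occasion": ["test", "retest"] * 5,
    "score":    [88, 85, 72, 75, 95, 93, 60, 64, 81, 80],
})

icc = pg.intraclass_corr(data=data, targets="student",
                         raters="occasion", ratings="score")
# ICC2: two-way random effects, absolute agreement, single measurement
print(icc.set_index("Type").loc["ICC2", ["ICC", "pval", "CI95%"]])
```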

The current study aimed to determine the psychometric adequacy of the teaching performance questionnaire owing to the importance of teachers’ performance evaluation. To determine the validity and reliability, the teaching performance questionnaire (11) was used.

After the face and content validity of the questionnaire were established, exploratory factor analysis was used to assess the construct validity; using varimax rotation, 3 factors were extracted, which explained 51% of the total variance. In the study by Moreno-Murcia et al., 3 factors explained 50.93% of the variance: 38.09% for the first (planning), 6.37% for the second (presentation), and 5.95% for the third (conclusion). The studied questionnaire covered various planning issues such as prior reflection and processes, curriculum design, management of courses, laboratories, and trainings, management of planned learning activities, evaluation criteria and methods, teaching contents, and references (11). Course presentation covered all issues related to presenting the course and compliance with the curriculum, teaching and learning activities, and planned educational activities as well as evaluation methods; conclusion relates to academic goals, students' achievements, review and improvement of learning activities, external recognition of teaching tasks, and creation of educational contents (12). Based on the findings of the current study, the method of course presentation and the quality of teaching were the main factors in teachers' evaluation; the importance of these factors was also emphasized in a study by Raoufi et al. (13). The importance of teaching quality has been repeatedly emphasized alongside teachers' evaluation in different studies.

Hosseini and Sarchami conducted a study in Qazvin University of Medical Sciences, Qazvin, Iran, on students' viewpoints about the priorities of teachers' evaluation and reported that teachers' command of the subject matter as well as teaching quality were the most important factors in the evaluation of teachers (14). In agreement with their results, Shakournia et al. indicated that, from the students' viewpoint, speech skills and teaching methods as well as the knowledge, temper, and behavior of the teacher, alongside teaching quality, were the criteria for evaluating teachers (15); in their study, speech skills, with a mean score of 4.05, were considered the most important factor in effective and successful teaching. In a study by Adhami et al., the teacher's command of the subject matter and his/her ability to transfer knowledge to students were reported as the most important priorities in the evaluation of teachers, which is consistent with the results of the current study (16).

Ghorbani et al. showed that, according to the teachers and students of Semnan University of Medical Sciences, Semnan, Iran, teachers' command of the subject matter and speech skills were the most important priorities in the evaluation of teachers (17). Zohouri and Eslaminejad conducted a survey of the students of Kerman University of Medical Sciences and identified teaching method as the main criterion in the evaluation of teachers, followed by the teacher's ability to communicate with students, being research-oriented, and moral and personal characteristics (18).

Bastani et al. evaluated the validity and reliability of the teachers' evaluation forms for theoretical and practical courses completed at the end of each semester by students to evaluate the faculty members of Tehran University of Medical Sciences, Tehran, Iran. The first form comprised 14 questions in 6 areas: suitable teaching method, the teacher's good knowledge and command of the subject matter, teacher and student attendance in class, the teacher's temper and behavior, the teacher's availability, and the student's final comment. The second form included 4 areas of quality of education: practical skills, teaching professional and moral rules, active presence, and the teacher's morality and manner (19). They evaluated the content validity of the forms based on CVI and CVR values and reported CVR coefficients > 0.29 for both forms based on the Lawshe table for 40 participants; hence, all items were kept. In addition, the forms had acceptable reliability coefficients > 0.7, and Spearman's correlation coefficient showed a significant and acceptable correlation between the items of the 2 forms (r = 0.45, P = 0.003) (19). It seems that they only used face and content validity analyses to evaluate the teachers' evaluation forms, while in the current study construct validity as well as face and content validity analyses were used.

López-Barajas and Carrascosa developed a 25-item questionnaire comprising 4 areas (interaction with students, methodology, instruments, and references) to evaluate teaching performance. Based on students' comments, interaction with students was a better predictor in the general evaluation of teachers than the other factors (21). These results were inconsistent with those of the current study; the difference seems to stem from differences between the evaluation forms and questionnaires as well as dissimilarities among the items.

Construct validity was also evaluated using confirmatory factor analysis in LISREL software, and the results were acceptable; the teaching performance questionnaire was identified as a suitable instrument to evaluate teachers. In the current study, χ2 = 492.62, degrees of freedom (df) = 227, and P < 0.001; thus, χ2/df = 492.62/227 ≈ 2.17 < 3. Additionally, the root mean square error of approximation was 0.88, indicating that the model was desirable, and the goodness of fit index, comparative fit index, normed fit index, and non-normed fit index were all > 0.90. In addition, the t value of each variable was greater than 2 or less than -2. Hence, the model had a good fit.

Cronbach’s alpha coefficients were used to evaluate the internal consistency of the instrument and were 0.71, 0.90, and 0.70 for the first, second, and third factors, respectively; the overall Cronbach’s alpha of the questionnaire was 0.92. In the study by Moreno-Murcia et al., the Cronbach’s alphas were 0.70, 0.91, and 0.79 for the first (presentation), second (planning), and third (conclusion) factors, respectively (11), which is in agreement with the results of the current study. Din et al. extracted content, presentation, presentation services, result, and structure factors in a study on teaching performance evaluation; the Cronbach’s alphas were 0.93, 0.92, 0.89, 0.95, and 0.97 for the extracted factors (20), findings similar to those of the current study regarding the presentation and result factors.

Test-retest reliability was evaluated in the current study using the ICC; the ICC was 0.882 (P < 0.001), confirming the repeatability of the questionnaire.

4.1. Conclusion

The results of the current study provide evidence of a refined factor structure and good reliability, supporting the psychometric adequacy of the teaching performance questionnaire in Iran. In other words, this instrument, psychometrically assessed in Iran for the first time, proved useful for evaluating teaching performance owing to its brevity, fluency, clarity, and understandability, and it can be used in private and state universities as well as other educational institutes. The small differences between the factors obtained in the current study and those of the reference instrument seem to be due to cultural differences in cognitive patterns, the educational culture of the universities, and obvious infrastructural differences.

One of the limitations of the current study was poor cooperation of students in completing the questionnaire, which was resolved by convincing them and explaining the objectives of the study. Use of the convenience sampling method was another limitation of the current study.

Acknowledgements
References
  • 1. Amini M, Honardar M. The view of faculties and medical students about evaluation of faculty teaching experiences [In Persian]. Koomesh. 2008;9(3):171-7.
  • 2. Faradmal J, Asgari G, Shiri H, Faghfourian H, Seidmohammadi A. Comparison of the assessment of professors by students based on two different protocols Asadabad Medical Sciences Faculty, Hamadan University of Medical Sciences [In Persian]. Educ Strategy Med Sci. 2015;8(4):209-14.
  • 3. Emdadi SH, Amani F, Sultanian AR, Behzad I, Maghsoud AH, Fathi Y. A study of reliability and validity of teacher evaluation form and factor's affecting student's evaluation of teacher [In Persian]. Strides Dev Med Educ. 2013;10(1):87-94.
  • 4. Morrison J. ABC of learning and teaching in medicine: Evaluation. BMJ. 2003;326(7385):385-7. doi: 10.1136/bmj.326.7385.385. [PubMed: 12586676].
  • 5. Ghazi Tabatabaee M, Yousefi Afrashteh M. Relationship Analysis of some of the Variables Associated with Teaching Evaluation by Students: An Application of Structural Equation Modeling [In Persian]. Q J Res Plann High Educ. 2012;18(2):83-107.
  • 6. Shakournia A, Elhampour H, Mozaffari A, Dasht Bozorgi B. Ten year trends in faculty members'evaluation results in Jondishapour University of Medical Sciences [In Persian]. Iran J Med Educ. 2008;7(2):309-15.
  • 7. Ziaee M, Miri M, Haji-Abadi M, Azarkar G, Eshbak P. Academic staff and students' impressions on academic evaluation of students in Birjand university of medical sciences [In Persian]. J Birjand Univ Med Sci. 2006;13(4):61-7.
  • 8. Aliasgharpour M, Monjamed Z, Bahrani N. Factors affecting students' evaluation of teachers: Comparing viewpoints of teachers and students [In Persian]. Iran J Med Educ. 2010;10(2):186-94.
  • 9. Javaherizadeh N. The factors affecting the evaluation of faculty members by the students of Islamic Azad University Broujerd[In Persian]. J Mod Thoughts in Educ. 2008;3(1):43-63.
  • 10. Hosseini F, Karimi F. Reliability and validity of academic staffs' evaluation questionnaire [In Persian]. Educ Strategy Med Sci. 2013;5(4):223-9.
  • 11. Moreno-Murcia JA, Silveira Torregrosa Y, Belando Pedreño N. Questionnaire evaluating teaching competencies in the university environment. Evaluation of teaching competencies in the university. J New Approaches Educ Res. 2015;4(1):54-61. doi: 10.7821/naer.2015.1.106.
  • 12. Cassidy S. Subjectivity and the valid assessment of pre-registration student nurse clinical learning outcomes: implications for mentors. Nurse Educ Today. 2009;29(1):33-9. doi: 10.1016/j.nedt.2008.06.006. [PubMed: 18707802].
  • 13. Raoufi SH, Seikhaian A, Ebrahimzadeh F, Taheri MJ, Ahmadi P. Designing a noval sheet to evaluate theoretical teaching quality of faculty members based on viewpoints of stakeholders and Charles E. Classick's scholarship principles [In Persian]. J Hormozgan Univ Med Sci. 2010;14(3):167-76.
  • 14. Hossini SM, Sarchami R. Attitude of students of Qazvin Medical University towards priorities in teachers assessment [In Persian]. J Qazvin Univ Med Sci. 2002;6(2):33-7.
  • 15. Shakurnia A, Motlagh ME, Malayeri A, Jouhanmardi AR, Komaili Sani H. Students' opinion on factors affecting faculty evaluation in Jondishapoor Medical University [In Persian]. Iran J Med Educ. 2005;5(2):101-10.
  • 16. Adhami A, Nakhaei N, Fasihi Harandi T, Fattahi Z. Preliminary assessment of the validity and reliability of the evaluation questionnaires by the students regarding teaching methods of the faculty members of Kerman university of medical sciences in 2002-2003 [In Persian]. Strides Dev Med Educ. 2005;1(2):121-9.
  • 17. Ghorbani R, Haji-Aghajani S, Heidarfar M, Andade F, Shams Abad M. Viewpoints of nursing and para-medical students about the features a good University lecturer [In Persian]. Koomesh. 2009;10(2):77-84.
  • 18. Zohoori A, Eslaminejad T. Teachers, effective teaching criteria as viewed by the students of Kerman University of Medical Sciences [In Persian]. Iran J Med Educ. 2004;4(2):65-70.
  • 19. Bastani P, Roullahi N, Tahernejad A. Validity and reliability of teachers` evaluation questionnaires from students point of view in Tehran University of Medical Sciences [In Persian]. Biannual J Med Educ Dev Center, Babol Univ Med Sci. 2015;3(1):7-14.
  • 20. Din R, Zakaria MS, Mastor KA, Embi MA. Construct validity and reliability of the Hybried e-training questionnaire. Proceedings ascilite Melbourne. 2008:1-4.
  • 21. López-Barajas DM, Carrascosa J. Evaluation of teaching in University Dimensions and most relevant variables [In Spanish]. J Educational Research. 2005;23(1):57-84.