Analysis of the Quality of Tryout Questions in the Marketing Education Program Based on Pedagogical Content Knowledge for UKPPPG Readiness
Rachmad Hidayat*
Wening Patmi Rahayu
A fundamental issue in the educational ecosystem is the assessment of learning outcomes. In preparing teachers for the Teacher Professional Education Competency Test (UKPPPG), tryout tests serve as strategic instruments. For the Marketing Education Program, these test items must reflect the required pedagogical and professional competencies, particularly Pedagogical Content Knowledge (PCK), which integrates content mastery with teaching strategies. This study aims to analyze the quality of PCK-based tryout test items in preparing Marketing Education Program teachers for the UKPPPG. Using a quantitative-descriptive approach with 108 participants (36 male; 72 female), data were analyzed with SPSS 31, focusing on item validity (point-biserial correlation), reliability (Cronbach's α), discrimination index (D), difficulty level (p), and distractor effectiveness. Results indicate sufficient reliability (α = 0.760), though item quality varied: some items were valid, others marginal or invalid, with particular weakness in the early indicators. Discrimination indices were mostly "fair"; several items rated "good" (D ≥ 0.40) are suitable for retention, while "poor" items (D < 0.20) require revision or replacement. The difficulty distribution was unbalanced for summative testing (easy 34%, medium 37%, difficult 29%), suggesting a dominance of easy items that reduces discrimination. Distractor analysis revealed an average of 2–3 functioning distractors per item (66%), though some were implausible and required revision. These findings highlight the need for systematic selection and revision of items (stem, key, and distractors), rebalancing of difficulty levels, and repeated pilot testing to ensure the instrument achieves higher validity, reliability, and representativeness of PCK-based teacher competencies in 21st-century marketing education.