Journal Search Engine
ISSN : 2288-6087(Print)
ISSN : 2713-7414(Online)
Journal of Korea Society for Simulation in Nursing Vol.12 No.1 pp.1-16
DOI : https://doi.org/10.17333/JKSSN.2024.12.1.1

Virtual Simulation-Based Learning Competency Self-Evaluation Tool: A Methodological Study

Mikang Kim1, Sunghee Kim2*
1Associate professor, Department of Nursing, Cheongam College
2Professor, Red Cross College of Nursing, Chung-Ang University


No existing or potential conflict of interest relevant to this article was reported.


*Corresponding Author: Sunghee Kim, Red Cross College of Nursing, Chung-Ang University, 84 Heukseok-ro, Dongjak-gu, Seoul 06974, Republic of Korea. Tel: +82-2-820-5985, Fax: +82-2-820-7961, E-mail: sung1024@cau.ac.kr
Received: October 24, 2023; Revised: December 4, 2023; Accepted: January 28, 2024

Abstract


Purpose: Nursing students' competence in virtual simulation-based learning is a key factor in its success. This study explored the validity and reliability of a virtual simulation-based learning competency self-evaluation tool for nursing students. Methods: Data were collected through a web-based survey. First, 11 nursing professors participated in focus group interviews, and 7 simulation education experts assessed the content validity of the preliminary items; these two groups did not overlap. A preliminary survey was then conducted with 15 fourth-year nursing students in I City. Based on these three steps, a final survey comprising 20 evaluation items was developed and administered to third- and fourth-year nursing students at four nursing colleges in Korean provinces (Seoul, Gyeonggi, Gangwon, and Gyeongsang-do); 222 complete questionnaires were used for the final analysis. Kirkpatrick's evaluation model guided the process, which comprised four tool-development steps and four psychometric verification steps, for a total of eight steps. An exploratory factor analysis was performed on the collected survey data to verify the tool's validity and reliability. Results: Four factors comprising 15 items explained 66.59% of the variance: learning preparation and start-up (4 items), nursing assessment (3 items), data interpretation (3 items), and problem solving (5 items). The Cronbach's α of the tool was 0.74, and those of the factors ranged from 0.72 to 0.80. Conclusion: The tool's validity and reliability were demonstrated using established methodologies. This tool can be useful for evaluating Korean nursing students' virtual simulation learning competence.






    Ⅰ. INTRODUCTION

    1. Background

    Simulation-based education can improve nursing students' understanding of major topics and stimulate learning through exposure to virtual simulations of real-world situations. The methodology has been applied in education to address the inherent limitations of clinical practice (Jeon, 2019). Since the beginning of 2020, the coronavirus disease 2019 (COVID-19) pandemic has presented an unprecedented global public health crisis (Korea Disease Control and Prevention Agency, 2020). To prevent the spread of COVID-19, most universities switched to remote education, including nursing practice education (Fogg et al., 2020). Online virtual simulation presents a new paradigm for education. Worldwide demand for virtual simulation education has risen and is expected to continue increasing (Tabatabai, 2020).

    Indeed, there is growing interest in using advanced technologies to replace the high-fidelity simulators used in face-to-face simulation education, owing to their limitations (e.g., the large spaces they require and their high operational and maintenance costs), with virtual simulation education suggested as an alternative (Kang, Kim, Lee, Nam, & Park, 2020). In virtual simulation education, learners use the Internet via a computer to access a 3D virtual reality and communicate and interact with other participants to solve problems approximating those in actual clinical environments (Gu et al., 2017). It allows repeated learning experiences, achievement of learning goals through self-directed learning, easy access for learners familiar with online environments (Kim, Kim, & Lee, 2019), and a deeply immersive learning process; it also has a positive effect on nursing students' participation, knowledge, and confidence (Verkuyl, Romaniuk, Atack, & Mastrilli, 2017). Cost-effective and accessible virtual methodologies for simulated learning may improve the efficiency of practical nursing education.

    Previous studies on this subject have analyzed the educational effects and usability of virtual simulation-based education and developed scenarios for it. The positive effects of virtual simulation-based education on learners' nursing knowledge, participation, and educational satisfaction have been confirmed (Gu et al., 2017). A debriefing methodology for virtual simulations has also been developed (Cheng et al., 2020). Studies conducted in Korea compared the educational effects of virtual simulation-based education between groups by integrating virtual simulation use experience (Kim, Kim, & Min, 2020) and high-fidelity simulator application with virtual simulation (Kim, Kim, et al., 2019). Further, a tool for evaluating patients in a web-based virtual simulation environment has been developed: the vP-LCJR, a virtual-patient version of the Lasater Clinical Judgment Rubric (Georg, Karlgren, Ulfvarson, Jirwe, & Welin, 2018). This tool evaluates students' clinical judgment in virtual simulation; it retains the four stages of the original LCJR and consists of 11 items, with the detailed rubric items modified for virtual patients, making it suitable for virtual environments and easy to use for measuring nursing students' knowledge and cognitive ability in a safe virtual setting (Georg et al., 2018).

    In Korea, virtual simulation-based education has been introduced in nursing colleges, and its use is gradually increasing. However, this educational approach has been in use for only a short time, and students' learning is still evaluated with the tools designed for face-to-face simulations (Adamson, Kardong-Edgren, & Willhaus, 2013). Virtual simulation-based education entails self-directed learning, which differs from the teaching method used in face-to-face simulations. Therefore, a standardized evaluation tool is needed that can measure individuals' learning competencies in virtual simulation environments. Most existing simulation evaluation tools were developed around nursing competency (Cha, Kim, & Park, 2022), and literature on virtual simulation learning competency has not yet been reported. Therefore, in this study, we aimed to develop a standardized learning competency self-evaluation tool for use in virtual simulation-based education for nursing students. This study may contribute to the literature by providing a tool for evaluating Korean students' learning competency levels, thereby helping to increase the efficiency of nursing practice curricula as well as the quality and educational outcomes of nursing simulation practice.

    2. Purpose

    The current study aimed to develop a self-evaluation measurement tool for learning competency that can be used in virtual simulation education for nursing students. Our findings will contribute to the literature by providing a tool for measuring Korean students’ learning competence, thereby increasing the efficiency of nursing practice curricula and the quality and educational outcomes of nursing simulation practice.

    Ⅱ. METHODS

    1. Study design

    This methodological study developed a virtual simulation-based learning competency self-evaluation tool for effective nursing simulation evaluation and verified its reliability and validity.

    2. Research Process

    This study followed the methodological procedure proposed by DeVellis (2017). It comprised four steps in the tool development process and four steps in the validation process, for a total of eight steps (Figure 1). In the development stage, the concept of the tool was derived from a review of domestic and foreign literature and from focus group interviews; the initial items were then composed, the measurement categories determined, content validity assessed, and a preliminary survey conducted. In the validation stage, the main survey of the developed tool was administered, its validity and reliability were verified, and the final tool was completed.

    3. Tool Development

    To derive the conceptual characteristics of the tool, the literature was reviewed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Moher et al., 2015). Articles were collected for analysis using the following search terms: "Simulation evaluation tool," "Virtual Simulation evaluation Instrument," "Virtual Simulation," "Learning," and "Rubric." The review included articles published until October 2019, when the last search was conducted; there were no restrictions on year of publication.

    Focus group interviews were conducted to develop an in-depth understanding of the factors necessary for developing the instrument. Eleven nursing professors with virtual simulation-based nursing education experience, recruited through convenience sampling, understood the purpose of the study, agreed to participate, and provided written consent. They were divided into three groups of three to five people drawn from nursing colleges in Seoul, Incheon, and Daegu. The interview topic was "virtual simulation evaluation," and the questions included: "How do you operate the virtual simulation?", "Did you use an assessment tool to evaluate students during the virtual simulation?", "If so, how did you use it with the students?", "What concepts should be included among the factors for evaluating virtual simulation?", "If you developed a tool yourself, did you link it to the nursing process?", and "What strengths or weaknesses did you find in using the evaluation tool?" The interviews were conducted from September 15 to October 15, 2019, and lasted approximately one hour per session. Data were analyzed using qualitative content analysis. The derived properties of virtual simulation evaluation were integration of knowledge and information, critical thinking, the nursing process, limitations of performance, and integrated thinking.

    Following the Kirkpatrick and Kirkpatrick (2006) evaluation model, a conceptual framework was constructed from the reviewed literature; the interviews served to derive the attributes and sub-concepts of the concepts obtained from the literature review, along with any newly generated concepts. After analyzing the concepts' properties, a preliminary tool was developed, with items rated on a Likert-type scale (DeVellis, 2017).

    Content validity ensures that a tool covers all aspects of the concept to be measured (Tak, 2013). We selected items based on expert assessments using the Item-Content Validity Index (I-CVI); values ≥ .80 were deemed appropriate (Tak, 2013). To obtain clear and accurate data, experts assessed content validity over two rounds. Following the recommended interval of 10–14 days between content validity tests (Lynn, 1986), there was a 14-day interval between the first and second rounds. Seven experts (four nursing professors, one pedagogy professor, and two clinical simulation center instructors), who understood the study aims and consented to participate, evaluated the tool's content validity. In the first round, 11 of the 32 items had an I-CVI below 0.80 and were deleted, leaving 21 items; in the second round, these 21 items were revised and retained.
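    The article reports the I-CVI screening rule but not its computation. As a minimal sketch, assuming the common Lynn (1986) convention that each expert rates item relevance on a 4-point scale and ratings of 3 or 4 count as agreement (the function names and example data below are illustrative, not the study's):

```python
import numpy as np

def item_cvi(ratings):
    """I-CVI for one item: proportion of experts rating it 3 or 4
    on a 4-point relevance scale."""
    return float(np.mean(np.asarray(ratings) >= 3))

def screen_items(rating_matrix, threshold=0.80):
    """Apply the study's retention rule (keep items with I-CVI >= .80).
    rating_matrix: experts x items array of 1-4 relevance ratings."""
    cvis = [item_cvi(col) for col in np.asarray(rating_matrix).T]
    kept = [i for i, cvi in enumerate(cvis) if cvi >= threshold]
    return cvis, kept
```

    With seven experts, an item rated relevant by six of them has I-CVI = 6/7 ≈ .86 and is retained, while one rated relevant by only three has I-CVI ≈ .43 and is deleted.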

    Based on these items, preliminary surveys were conducted with 15 fourth-year nursing students in I City from February 26–28, 2020, to verify the comprehensibility of the items, the suitability of the survey length and font size, and the optimal time and location for administering the survey. Participants were students who had completed the theory class on myocardial infarction nursing; those with prior virtual simulation education experience were excluded. Items are rated on a Likert scale from 1 (not at all) to 5 (very much); higher scores indicate higher self-assessed virtual simulation-based learning competency. Students were also asked to rate their understanding of the preliminary items on a four-point scale from 1 (very difficult) to 4 (very easy) and, if necessary, to explain their ratings. The final items were devised based on participants' feedback from the preliminary survey and the advice of one nursing professor. Item 11, whose meaning was reported to be ambiguous, was deleted, yielding 20 evaluation items for the main survey questionnaire.

    4. Tool Validation

    Participants in the main survey were third- and fourth-year nursing students at four nursing colleges in Korean provinces (Seoul, Gyeonggi, Gangwon, and Gyeongsang Province) who had used a high-fidelity simulator and had completed the myocardial infarction nursing theory class as part of the adult nursing curriculum; students who had previously experienced virtual simulation education were excluded. Data were collected from March 10 to April 15, 2020.

    To conduct an exploratory factor analysis and verify the construct validity of a tool, the minimum number of participants should be 200, preferably 5–10 times the number of items in the tool (DeVellis, 2017); 226 students participated in the main survey. Data from 222 responses (valid response rate: 98%) were used in the final analysis; four partially completed questionnaires were excluded.

    Participants completed the survey on their own computers from any location; because the survey was web-based, Internet access was the only requirement. The Korean version of the virtual simulation vSim® for Nursing provides 10 scenarios for the adult nursing module (Laerdal Medical, 2020); for the survey, we provided students with a link to the "Acute Myocardial Infarction" scenario. In this scenario, the patient, Kim Jinsoo (a 54-year-old man), is sent to the emergency room owing to angina pectoris, where he experiences a sudden increase in chest pain, ventricular fibrillation (VF), and progressing cardiac arrest. Finally, he requires cardiopulmonary resuscitation, performed according to the protocol of a local medical institution. The survey took 15–20 minutes to complete, with an average of 17 minutes in the current study.

    Construct validity was determined through item analyses (means, standard deviations, skewness and kurtosis, and item-total score correlations), Kaiser-Meyer-Olkin values and Bartlett's sphericity test, item communality, and exploratory factor analyses, which entailed a principal component analysis (PCA) with orthogonal Varimax rotation.
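    The analyses themselves were run in SPSS; the PCA-plus-Varimax step can be sketched with numpy alone as follows (an illustrative sketch on synthetic data, with hypothetical function names, not the study's code):

```python
import numpy as np

def pca_loadings(data, n_factors):
    """Unrotated PCA loadings: eigenvectors of the item correlation
    matrix scaled by the square roots of their eigenvalues."""
    corr = np.corrcoef(data, rowvar=False)
    eigval, eigvec = np.linalg.eigh(corr)          # ascending order
    top = np.argsort(eigval)[::-1][:n_factors]     # largest first
    return eigvec[:, top] * np.sqrt(eigval[top])

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonal Varimax rotation of a p x k loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - (gamma / p) * rotated
                          @ np.diag(np.sum(rotated ** 2, axis=0))))
        rotation = u @ vt                           # stays orthogonal
        new_var = np.sum(s)
        if new_var - var < tol:
            break
        var = new_var
    return loadings @ rotation
```

    Because the rotation matrix is orthogonal, each item's communality (sum of squared loadings) is unchanged by the rotation; Varimax re-expresses the same factor space with loadings pushed toward a simpler structure.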

    In this study, Cronbach's α coefficient was used to determine internal consistency reliability, with a coefficient ≥ .70 confirming internal consistency (Iacobucci & Duhachek, 2003). All analyses were conducted using IBM SPSS Statistics, version 25.0.
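    As a hedged illustration of the reliability statistic (computed in SPSS in the study, not with this code), Cronbach's α for a respondents x items score matrix reduces to the following formula:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances
    / variance of the total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)
```

    Perfectly correlated items yield α = 1; the study's criterion treats α ≥ .70 as acceptable internal consistency.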

    5. Ethical Considerations

    This study was conducted after obtaining approval from the Institutional Review Board of Chung-Ang University on August 7, 2019 (IRB No. 1041078-201905-HR-168-01). An informed consent form was attached to the pilot and web-based surveys and had to be signed by participants before questionnaire completion. The form included information on the study aims, voluntary participation, the guarantee of anonymity, and the use of research data. Participants were informed at every stage that they could withdraw from the study at any time; the collected data were encrypted during storage, and the personal information management file was strictly controlled to ensure confidentiality.

    Ⅲ. Results

    1. General Characteristics of Main Survey Participants

    The average age of the students was 23.9 years; 189 (85.14%) were female and 33 (14.86%) male. A total of 226 questionnaires were collected, and 222 (98% recovery rate) were used for the final analysis. By location of nursing education institution, 73 students (32.88%) were in the Seoul area, 74 (33.33%) in the Gangwon area, and 75 (33.78%) in the Gyeongsang area; 15 students (6.76%) were in their third year and 207 (93.24%) in their fourth year. In total, 215 students (96.85%) had never experienced virtual simulation training, 162 (72.97%) had previously experienced high-fidelity simulator training, and all 222 (100%) had received theory training on myocardial infarction nursing. Regarding satisfaction with life in the nursing department, 32 students (14.41%) were very satisfied, 100 (45.05%) satisfied, 83 (37.39%) neutral, and 7 (3.15%) unsatisfied.

    2. Item analysis – Mean, standard deviation, skewness, and kurtosis

    As a basic step in the item analysis, the mean and standard deviation of each of the 20 items composing the virtual simulation learning competency self-assessment tool were checked, and skewness and kurtosis were reviewed to confirm normality. Item means ranged from 2.86 to 4.22, skewness from -0.82 to 0.17, and kurtosis from -0.54 to 2.18. Since no item had an absolute skewness greater than 3.0 or an absolute kurtosis greater than 8.0, the assumption of normality held for all items.
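    The normality screen described above (|skewness| < 3.0 and |kurtosis| < 8.0) can be sketched as follows; the function name and the thresholds-as-defaults are our own framing of the study's reported rule:

```python
import numpy as np

def normality_screen(item_scores, skew_limit=3.0, kurt_limit=8.0):
    """Sample skewness and excess kurtosis of one item's scores, with the
    screening rule reported in the study (|skew| < 3, |kurtosis| < 8)."""
    x = np.asarray(item_scores, dtype=float)
    m, s = x.mean(), x.std()                       # population SD
    skew = np.mean((x - m) ** 3) / s ** 3
    kurt = np.mean((x - m) ** 4) / s ** 4 - 3.0    # excess kurtosis
    return {"skewness": skew, "kurtosis": kurt,
            "normal": abs(skew) < skew_limit and abs(kurt) < kurt_limit}
```

    A symmetric distribution of Likert responses has skewness 0 and mildly negative excess kurtosis, comfortably inside both limits.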

    3. First Exploratory Factor Analysis

    A PCA was used to reduce the 17 items retained after item analysis to a smaller number of factors while minimizing information loss through exploratory factor analysis. Orthogonal Varimax rotation was employed because no correlation between factors was assumed. The number of factors was determined using an eigenvalue ≥ 1 (Kaiser, 1974).

    In the first exploratory factor analysis, the Learning preparation and start-up factor (five items) explained 33.89% of the variance, Nursing assessment (four items) explained 11.17%, Problem solving (four items) explained 8.61%, Data interpretation (three items) explained 7.96%, and Detailed action plan (one item) explained 5.98%. They cumulatively explained 67.60% of the variance (Table 1).

    4. Scree Test and Parallel Analysis

    The number of factors was determined through a scree test, parallel analysis, and the cumulative variance ratio. In parallel analysis, a factor is considered non-significant if its eigenvalue from the study data is lower than the corresponding eigenvalue from the parallel analysis (DeVellis, 2017); the cumulative variance ratio should exceed 60% (Hair, Black, Babin, & Anderson, 2010).
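    The decision rule just described is Horn's parallel analysis; it can be sketched as follows (illustrative numpy code on synthetic data, with hypothetical function names; the study's own analysis was run in SPSS):

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Horn's parallel analysis: retain the leading factors whose observed
    eigenvalues exceed the mean eigenvalues of random normal data
    of the same shape."""
    n, p = data.shape
    rng = np.random.default_rng(seed)
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        r = rng.normal(size=(n, p))
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    ref = sims.mean(axis=0)
    retain = 0
    for o, e in zip(obs, ref):   # count leading factors above the reference
        if o > e:
            retain += 1
        else:
            break
    return obs, ref, retain
```

    A factor is retained only while its observed eigenvalue stays above the mean eigenvalue of same-sized random data; by this rule the study settled on four factors.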

    When the scree test and parallel analysis results were compared, the scree plot leveled off after factor 4 (Data interpretation), and the parallel-analysis eigenvalues sloped downward to the right. Beyond the Data interpretation factor, the observed eigenvalues fell below those from the parallel analysis, so those factors were non-significant. For the first four factors, the eigenvalues of the study data were larger than those of the parallel analysis.

    When the five factors of the first exploratory factor analysis were retained, the cumulative variance increased to 67.60%; however, the Detailed action plan factor comprised only one item and was not suitable as an independent factor. The scree test and parallel analysis indicated that a questionnaire comprising four factors was the most suitable.

    5. Second Factor Analysis

    Using the four factors derived in the previous step, a second factor analysis was conducted. A factor loading ≥ 0.3 is considered significant, ≥ 0.4 more significant, and ≥ 0.5 very significant (Seong, 2014); a loading of 0.4 was used as the threshold in the current study. Results showed that Learning preparation and start-up (four items) explained 33.89% of the variance, Nursing assessment (six items) explained 11.17%, Problem solving (four items) explained 8.61%, and Data interpretation (three items) explained 7.96%; they cumulatively explained 67.52% of the variance. Loadings fell below the threshold (i.e., < 0.4) for item 11 ("I run the virtual simulation repeatedly until I get the desired score.") and item 14 ("I plan to perform each detail of the nursing interventions to solve the problem."); both were eliminated.

    6. Third Factor Analysis

    A third factor analysis was performed on the remaining 15 items, using PCA with Varimax rotation. Four factors with an eigenvalue ≥ 1 were extracted, cumulatively explaining 66.59% of the variance: Learning preparation and start-up (four items) explained 36.01% of the variance, Nursing assessment (three items) explained 12.36%, Problem solving (five items) explained 9.50%, and Data interpretation (three items) explained 8.71% (Table 2). A four-factor, 15-item tool was deemed the most suitable.

    7. Factor Naming

    Factors were named starting from the items with the highest loadings, taking the common meaning of each factor's items as its concept. The virtual simulation learning competency self-assessment tool comprises 15 items across 4 factors (Table 4), interpreted and named as follows. The first factor consists of four items judged to relate to planning and checking one's performance after setting goals at the start of learning; it was named the 'learning preparation and start-up' factor. The second factor consists of three items judged to relate to the assessment phase of the nursing process, which involves collecting and confirming various information to solve the patient's problems; it was named the 'nursing assessment' factor. The third factor consists of three items involving critical interpretation of various data in accordance with the clinical situation; it was named the 'data interpretation' factor. The fourth factor consists of five items related to the ability to select the best method after reviewing various options using expert knowledge; it was named the 'problem solving' factor.

    8. Reliability Test

    Upon testing the reliability of the final four-factor, 15-item tool, we obtained the following Cronbach’s α coefficients: .74 for the total tool, .80 for Learning preparation and start-up, .80 for Nursing assessment, .72 for Data interpretation, and .78 for Problem solving (Table 3).

    9. Final Questionnaire

    The Virtual Simulation-based Learning Competency Self-evaluation tool was confirmed to comprise 15 items and four factors through validation and reliability verification (Table 4). The response scale was defined as a five-point Likert-type scale, ranging from 1–5, with total scores ranging from 15–75. The higher the score, the higher the student’s self-evaluated virtual simulation learning competency.
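    Scoring the final questionnaire is a simple sum; a minimal sketch follows (the item-to-factor assignment of Table 4 is not reproduced in the text, so only the total score is shown, and the function name is illustrative):

```python
def total_score(responses):
    """Total score on the 15-item tool: sum of five-point Likert responses
    (1 = not at all ... 5 = very much); totals range from 15 to 75."""
    if len(responses) != 15 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 15 responses, each rated 1-5")
    return sum(responses)
```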

    Ⅳ. Discussion

    This study applied the methodology of DeVellis (2017) and the evaluation model of Kirkpatrick and Kirkpatrick (2006) to develop a self-evaluation tool with which Korean nursing students can assess their learning competency in virtual simulation education. To derive the attributes of the tool's underlying concept and establish its theoretical basis, a systematic literature review of domestic and overseas studies was conducted. On this basis, the evaluation tool drew on the core competencies and core elements of the American Association of Colleges of Nursing [AACN] (1998, 2008) and the program learning outcome indicators of the Korean Accreditation Board of Nursing Education [KABONE] (2017) as representative documents. As outcome-based curricula have become more important for improving the quality of nursing education, an evaluation tool reflecting learning outcomes was considered necessary for evaluating nursing students' performance during simulation practice. Four factors were identified through exploratory factor analyses. The Learning preparation and start-up factor (four items) explained most of the variance (36%). The other factors correlated significantly with this factor, making it an important aspect of the tool. This factor comprised items on the general evaluation of the scenario at the beginning of the virtual simulation, data from the electronic medical records, scenario content, and immersion in the learning process. Prior research has demonstrated that virtual simulation education, which requires learners to generate and apply new knowledge independently, entails self-motivation and initiative from learners (Kim et al., 2020), concurring with our results.
What is required at this stage is the ability to prepare for and immerse oneself in learning; this supports prior findings that such ability is an important factor in simulation education, a comprehensive form of performance learning in which knowledge, skills, and attitudes are learned together (Kim & Suh, 2012). As virtual simulation is characterized by self-directed learning, measuring students' ability to prepare for and immerse themselves in the experience may be essential for the next step of the learning process, and our results confirm that this ability connects organically with other aspects of simulation education.

    The Nursing assessment factor (three items) explained 12.36% of the variance and comprises items on collecting basic information from the available patient information, collecting symptom-related information to confirm patient status, and immersing oneself in the virtual simulation by checking objective nursing-related information. Similar factors appear in other evaluation tools (Mikasa, Cicero, & Adamson, 2013), indicating that nursing assessment is a major factor in virtual simulation education. When providing nursing care, assessment serves as a basis for addressing patient problems with a scientific approach. As learning applied to various clinical cases is essential in nursing education (Park & Yu, 2019), prior research and our findings suggest that nursing assessment is important even in virtual simulation education. Assessment-related items therefore appear to be a priority in the evaluation of virtual simulation education. This may be because of the limited environment provided to students in virtual settings, where they can acquire information only from the icons on the computer screen and by questioning the virtual patients. Careful decisions are important because the scenario content and the nursing evaluation score change in real time depending on which icon the student clicks.

    The Data interpretation factor (three items) explained 8.71% of the variance. It comprises items on the correct interpretation of virtual patients' vital signs, symptoms, and test data, the objective data showing the patient's clinical condition. Correct interpretation of clinical data is a prerequisite for problem solving during nursing care. This resembles clinical reasoning applied to care situations through critical thinking (Hawkins, Elder, & Paul, 2019), through which nurses recognize the importance of patient data in identifying and diagnosing patients' current and potential problems. The Data interpretation factor is also similar to the planning phase of the nursing process and can be seen as part of how a problem is solved to reach the nursing outcome; it has been proposed as an important strategy for problem solving in clinical situations (Hawkins et al., 2019).

    The Problem solving factor (five items) explained 9.50% of the variance. It comprises establishing priorities and reassessing virtual patients' status after the nursing intervention. During virtual simulation education, after exploring problem-solving methods using the scenario's information, students should compare potential nursing interventions to select the optimal method for the specific case. Problem solving, as an intellectual and creative ability, is applied after recognizing the difference between the current state and the learning objectives (Kim et al., 2019) and is recognized as essential for nursing students to adapt successfully to clinical practice after graduation (Kim, 2014). Virtual simulation education is considered to cultivate clinical performance because it exposes students to situations that are difficult to experience in clinical practice, allowing them to engage in creative thinking and high-level problem solving (Kim et al., 2020). Considering our findings and those of previous studies, the Problem solving factor can be considered significant.

    The exploratory factor analyses showed that the tool should have a four-factor, 15-item structure. The internal consistency, reliability, and item homogeneity of the tool were properly verified; thus, it can be considered suitable and reliable for evaluating virtual simulation education in the clinical nursing practice setting. Additionally, virtual simulation education may enable students to maintain their engagement in clinical practice learning and experience various clinical cases, giving them an opportunity to apply nursing care and knowledge to a virtually represented patient. The four-factor, 15-item Virtual Simulation-based Learning Competency Self-evaluation tool showed proper reliability and validity for identifying the major factors of learning competency for the Korean version of vSim® for Nursing. The tool may allow nursing students to accurately and clearly assess their learning competence in the following aspects of virtual simulation education for clinical practice: learning preparation and start-up, nursing assessment, data interpretation, and problem solving. It is a useful tool for evaluating nursing students' experiences and performance in virtual simulation practice.

    1. Limitations

    Several limitations should be considered when interpreting the current results. First, nursing students' virtual simulation experience was identified only through the survey, and students who had experienced virtual simulation were not included in the focus group interviews. Second, considering that virtual simulation can be used not only in universities but also in clinical settings, the results are difficult to generalize because the participants were nursing students from only some regions. Further research is needed to validate the tool for different populations and settings.

    Ⅴ. Conclusions

    Using reliable tool development and verification methodologies, our findings demonstrated that the Virtual Simulation-based Learning Competency Self-evaluation tool, targeted at Korean nursing college students, is both valid and reliable. It comprises 15 items categorized into four factors: Learning preparation and start-up (four items), Nursing assessment (three items), Data interpretation (three items), and Problem solving (five items). Responses are recorded on a five-point Likert-type scale, ranging from 1 (not at all) to 5 (very much); the higher the score, the higher the student's virtual simulation-based learning competence. Completing the tool requires 10–15 minutes.
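    The scoring scheme above can be sketched as follows. This is an illustrative example only: the item order and the item-to-factor mapping here are assumed for demonstration, and the authoritative assignment is the one in the paper's final item table.

```python
# Hypothetical item-to-factor mapping for the 15-item tool (indices 0-14).
FACTORS = {
    "learning_preparation_and_startup": [0, 1, 2, 3],   # 4 items
    "nursing_assessment": [4, 5, 6],                    # 3 items
    "data_interpretation": [7, 8, 9],                   # 3 items
    "problem_solving": [10, 11, 12, 13, 14],            # 5 items
}

def score(responses):
    """responses: list of 15 Likert ratings, 1 (not at all) to 5 (very much)."""
    assert len(responses) == 15 and all(1 <= r <= 5 for r in responses)
    factor_scores = {name: sum(responses[i] for i in idx)
                     for name, idx in FACTORS.items()}
    factor_scores["total"] = sum(responses)  # ranges from 15 to 75
    return factor_scores

print(score([3] * 15)["total"])  # prints 45 for a mid-range respondent
```

    Higher factor and total scores indicate higher self-evaluated competence in the corresponding aspect of virtual simulation-based learning.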

    Nursing students' understanding of virtual simulation education is expected to broaden in the future. Therefore, applying the tool after nursing students undergo virtual simulation-based clinical practice may be useful for producing self-evaluation data on their competence in this novel type of learning. Follow-up studies should test the validity of our findings by evaluating the tool through confirmatory factor analyses. Moreover, our literature review showed a lack of research comparable to the current study, warranting future studies on interventions and tool development for evaluating virtual simulation education.

    Figure

    Research procedure

    Table

    Results of the First Factor Analysis (N = 222)
    F1: Learning preparation and start-up, F2: Nursing assessment, F3: Problem solving, F4: Data interpretation, F5: Detailed action plan
    Results of the Final Factor Analysis (N = 222)
    F1: Learning preparation and start-up, F2: Nursing assessment, F3: Problem solving, F4: Data interpretation
    Results of the Reliability Analysis (N = 222)
    SD: Standard deviation
    Description of the Final 15 Items of the Virtual Simulation-based Learning Competency Self-evaluation Tool

    References

    1. Adamson, K. A., Kardong-Edgren, S., & Willhaus, J. (2013). An updated review of published simulation evaluation instruments. Clinical Simulation in Nursing, 9(9), e393-e400.
    2. American Association of Colleges of Nursing [AACN]. (1998, 2008). The essentials of baccalaureate education for professional nursing practice. Washington, DC: Author.
    3. Cha, J. Y., Kim, D. Y., & Park, S. Y. (2022). Review for development of nursing simulation education evaluation instruments. Journal of Korean Society for Simulation in Nursing, 10(2), 47-63.
    4. Cheng, A., Kolbe, M., Grant, V., Eller, S., Hales, R., Symon, B., et al. (2020). A practical guide to virtual debriefings: Communities of inquiry perspective. Advances in Simulation, 5(1), 1-9.
    5. DeVellis, R. F. (2017). Scale development: Theory and applications (4th ed.). CA: SAGE Publications.
    6. Fogg, N., Wilson, C., Trinka, M., Campbell, R., Thomson, A., Merritt, L., et al. (2020). Transitioning from direct care to virtual clinical experiences during the COVID-19 pandemic. Journal of Professional Nursing, 36(6), 685-691.
    7. Georg, C., Karlgren, K., Ulfvarson, J., Jirwe, M., & Welin, E. (2018). A rubric to assess students’ clinical reasoning when encountering virtual patients. Journal of Nursing Education, 57(7), 408-415.
    8. Gu, Y., Zou, Z., & Chen, X. (2017). The effects of vSIM for Nursing™ as a teaching strategy on fundamentals of nursing education in undergraduates. Clinical Simulation in Nursing, 13(4), 194-197.
    9. Hair, J. F., Black, B., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis: Global edition (7th ed.). Pearson Education.
    10. Hawkins, D., Elder, L., & Paul, R. (2019). The thinker’s guide to clinical reasoning: Based on critical thinking concepts and tools. Lanham: Rowman & Littlefield.
    11. Iacobucci, D., & Duhachek, A. (2003). Advancing alpha: Measuring reliability with confidence. Journal of Consumer Psychology, 13(4), 478-487.
    12. Jeon, H. J. (2019). Exploring study on virtual reality utilization strategies in scenario-based nursing simulation: An integrative review. Journal of Korean Society for Simulation in Nursing, 7(1), 45-56.
    13. Kaiser, H. F. (1974). An index of factorial simplicity. Psychometrika, 39(1), 31-36.
    14. Kang, S., Kim, C., Lee, H. S., Nam, J. W., & Park, M. S. (2020). Integrative review on nursing education adopting virtual reality convergence simulation. Journal of Convergence for Information Technology, 10(1), 60-74.
    15. Kim, D. H. (2014). Metacognition and problem solving ability among nursing students in Korea. Global Health & Nursing, 4(1), 42-48.
    16. Kim, H. W., & Suh, E. Y. (2012). Nursing students’ immersion experience in a comprehensive simulation scenario using high fidelity human patient simulator among nursing students: A phenomenological study. Journal of Military Nursing Research, 30(1), 89-99.
    17. Kim, M., Kim, S., & Lee, W. S. (2019). Effects of a virtual reality simulation and a blended simulation of care for pediatric patient with asthma. Child Health Nursing Research, 25(4), 496-506.
    18. Kim, Y., Kim, W. J., & Min, H. Y. (2020). Nursing students’ experiences in virtual simulation practice. The Journal of Korean Academic Society of Nursing Education, 26(2), 198-207.
    19. Kirkpatrick, D. L., & Kirkpatrick, J. D. (2006). Evaluating training programs: The four levels (3rd ed.). Berrett-Koehler Publishers.
    20. Korea Disease Control and Prevention Agency. (2020, November 16). Coronavirus Disease-19, Republic of Korea Public Advice & Notice. Retrieved October 02, 2022, from: https://www.kdca.go.kr//
    21. Korean Accreditation Board of Nursing Education [KABONE]. (2017, January 31). Nursing education certification assessment core basic nursing assessment protocol (version 4.1). Retrieved May 02, 2020, from: http://www.kabone.or.kr/index.do
    22. Laerdal Medical. (2020, May 16). vSim® for nursing-medical surgical Korean (pilot version). Retrieved March 02, 2020, from: https://thepoint.lww.com/vsim
    23. Lynn, M. R. (1986). Determination and quantification of content validity. Nursing Research, 35(6), 382-385.
    24. Mikasa, A. W., Cicero, T. F., & Adamson, K. A. (2013). Outcome-based evaluation tool to evaluate student performance in high-fidelity simulation. Clinical Simulation in Nursing, 9(9), e361-e367.
    25. Moher, D., Shamseer, L., Clarke, M., Ghersi, D., Liberati, A., Petticrew, M., et al. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Systematic Reviews, 4(1), 1.
    26. Park, K. O., & Yu, M. (2019). Effect of a situational module learning course on critical thinking disposition and metacognition in nursing students: A quasi-experimental study. The Journal of Korean Academic Society of Nursing Education, 25(2), 251-262.
    27. Seong, T. J. (2014). Statistical analysis using the easy-to-understand SPSS/AMOS (2nd ed.). Seoul: Hakjisa Publisher.
    28. Tabatabai, S. (2020). Simulations and virtual learning supporting clinical education during the COVID-19 pandemic. Advances in Medical Education and Practice, 11, 513-516.
    29. Tak, J. K. (2013). Psychological testing: An understanding of development and evaluation method. Seoul: Hakjisa Publisher.
    30. Verkuyl, M., Romaniuk, D., Atack, L., & Mastrilli, P. (2017). Virtual gaming simulation for nursing education: An experiment. Clinical Simulation in Nursing, 13(5), 238-244.