Medicine Public Health, Environmental and Occupational Health

Innovations in Medical Education

Description

This cluster of papers focuses on the transformation of medical education and the development of professionalism in healthcare. It covers topics such as competency-based education, assessment methods, continuing medical education, reflective practice, clinical skills development, faculty development, e-learning in healthcare, patient-oriented learning, and leadership in medicine.

Keywords

Competency-based Education; Professional Identity Formation; Assessment Methods; Continuing Medical Education; Reflective Practice; Clinical Skills Development; Faculty Development; E-Learning in Healthcare; Patient-Oriented Learning; Leadership in Medicine

The Institute of Medicine study Crossing the Quality Chasm (2001) recommended that an interdisciplinary summit be held to further reform of health professions education in order to enhance quality and patient safety. Health Professions Education: A Bridge to Quality is the follow-up to that summit, held in June 2002, where 150 participants across disciplines and occupations developed ideas about how to integrate a core set of competencies into health professions education. These core competencies include patient-centered care, interdisciplinary teams, evidence-based practice, quality improvement, and informatics. This book recommends a mix of approaches to health education improvement, including those related to oversight processes, the training environment, research, public reporting, and leadership. Educators, administrators, and health professionals can use this book to help achieve an approach to education that better prepares clinicians to meet both the needs of patients and the requirements of a changing health care system.
Throughout this century there have been many efforts to reform the medical curriculum. These efforts have largely been unsuccessful in producing fundamental changes in the training of medical students. The author challenges the traditional notion that changes to medical education are most appropriately made at the level of the curriculum, or the formal educational programs and instruction provided to students. Instead, he proposes that the medical school is best thought of as a "learning environment" and that reform initiatives must be undertaken with an eye to what students learn instead of what they are taught. This alternative framework distinguishes among three interrelated components of medical training: the formal curriculum, the informal curriculum, and the hidden curriculum. The author gives basic definitions of these concepts, and proposes that the hidden curriculum needs particular exploration. To uncover their institution's hidden curricula, he suggests that educators and administrators examine four areas: institutional policies, evaluation activities, resource-allocation decisions, and institutional "slang." He also describes how accreditation standards and processes might be reformed. He concludes with three recommendations for moving beyond curriculum reform to reconstruct the overall learning environment of medical education, including how best to move forward with the Medical School Objectives Project sponsored by the AAMC.
Teaching medical professionalism is a fundamental component of medical education. The objective is to ensure that students understand the nature of professionalism and its obligations and internalize the value system of the medical profession. The recent emergence of interest in the medical literature on professional identity formation gives reason to reexamine this objective. The unstated aim of teaching professionalism has been to ensure the development of practitioners who possess a professional identity. The teaching of medical professionalism therefore represents a means to an end. The principles of identity formation that have been articulated in educational psychology and other fields have recently been used to examine the process through which physicians acquire their professional identities. Socialization, with its complex networks of social interaction, role models and mentors, experiential learning, and explicit and tacit knowledge acquisition, influences each learner, causing them to gradually "think, act, and feel like a physician." The authors propose that a principal goal of medical education be the development of a professional identity and that educational strategies be developed to support this new objective. The explicit teaching of professionalism and emphasis on professional behaviors will remain important. However, expanding knowledge of identity formation in medicine and of socialization in the medical environment should lend greater logic and clarity to the educational activities devoted to ensuring that the medical practitioners of the future will possess and demonstrate the qualities of the "good physician."
Objective. To review the literature relating to the effectiveness of education strategies designed to change physician performance and health care outcomes. Data Sources. We searched MEDLINE, ERIC, NTIS, the Research and Development Resource Base in Continuing Medical Education, and other relevant data sources from 1975 to 1994, using continuing medical education (CME) and related terms as keywords. We manually searched journals and the bibliographies of other review articles and called on the opinions of recognized experts. Study Selection. We reviewed studies that met the following criteria: randomized controlled trials of education strategies or interventions that objectively assessed physician performance and/or health care outcomes. These intervention strategies included (alone and in combination) educational materials, formal CME activities, outreach visits such as academic detailing, opinion leaders, patient-mediated strategies, audit with feedback, and reminders. Studies were selected only if more than 50% of the subjects were either practicing physicians or medical residents. Data Extraction. We extracted the specialty of the physicians targeted by the interventions and the clinical domain and setting of the trial. We also determined the details of the educational intervention, the extent to which needs or barriers to change had been ascertained prior to the intervention, and the main outcome measure(s). Data Synthesis. We found 99 trials, containing 160 interventions, that met our criteria.
Almost two thirds of the interventions (101 of 160) displayed an improvement in at least one major outcome measure: 70% demonstrated a change in physician performance, and 48% of interventions aimed at health care outcomes produced a positive change. Effective change strategies included reminders, patient-mediated interventions, outreach visits, opinion leaders, and multifaceted activities. Audit with feedback and educational materials were less effective, and formal CME conferences or activities, without enabling or practice-reinforcing strategies, had relatively little impact. Conclusion. Widely used CME delivery methods such as conferences have little direct impact on improving professional practice. More effective methods such as systematic practice-based interventions and outreach visits are seldom used by CME providers. (JAMA. 1995;274:700-705)
The Accreditation Council for Graduate Medical Education is moving from accrediting residency programs every 5 years to a new system for the annual evaluation of trends in measures of performance.
To avoid many of the disadvantages of the traditional clinical examination we have introduced the structured clinical examination. In this students rotate round a series of stations in the hospital ward. At one station they are asked to carry out a procedure, such as take a history, undertake one aspect of physical examination, or interpret laboratory investigations in the light of a patient's problem, and at the next station they have to answer questions on the findings at the previous station and their interpretation. As they cannot go back to check on omissions, multiple-choice questions have a minimal cueing effect. The students may be observed and scored at some stations by examiners using a checklist. In the structured clinical examination the variables and complexity of the examination are more easily controlled, its aims can be more clearly defined, and more of the student's knowledge can be tested. The examination is more objective and a marking strategy can be decided in advance. The examination results in improved feedback to students and staff.
The entrustable professional activity (EPA) concept allows faculty to make competency-based decisions on the level of supervision required by trainees. Competency-based education targets standardized levels of proficiency to guarantee that all learners have a sufficient level of proficiency at the completion of training.1–6 Collectively, the competencies (ACGME or CanMEDS) constitute a framework that describes the qualities of professionals. Such a framework provides generalized descriptions to guide learners, their supervisors, and institutions in teaching and assessment. However, these frameworks must translate to the world of medical practice. EPAs were conceived to facilitate this translation, addressing the concern that competency frameworks would otherwise be too theoretical to be useful for training and assessment in daily practice. Trust is a central concept for safe and effective health care. Patients must trust their physicians, and health care providers must trust each other in a highly interdependent health care system. In teaching settings, supervisors decide when and for what tasks they entrust trainees to assume clinical responsibilities. Building on this concept, EPAs are units of professional practice, defined as tasks or responsibilities to be entrusted to the unsupervised execution by a trainee once he or she has attained sufficient specific competence. EPAs are independently executable, observable, and measurable in their process and outcome, and therefore, suitable for entrustment decisions.
Sequencing EPAs of increasing difficulty, risk, or sophistication can serve as a backbone for graduate medical education.6 An EPA must be described at a sufficient level of detail to set trainee expectations and guide supervisors' assessment and entrustment decisions (see table 2 for guidelines). Milestones, as defined by the ACGME, are stages in the development of specific competencies. Milestones may link to a supervisor's EPA decisions (eg, direct proactive supervision versus distant supervision). The Pediatrics Milestone Project provides examples of how milestones can be linked to entrustment decisions.7,8 Entrustment decisions involve clinical skills and abilities as well as more general facets of competence, such as understanding one's own limitations and knowing when to ask for help. Making entrustment decisions for unsupervised practice requires observed proficiency, usually on multiple occasions. In practice, entrustment decisions are affected by 4 groups of variables: (1) attributes of the trainee (tired, confident, level of training); (2) attributes of the supervisors (eg, lenient or strict); (3) context (eg, time of the day, facilities available); and (4) the nature of the EPA (rare, complex versus common, easy). Entrustment decisions can be further distinguished as ad hoc (eg, happening during a night shift) or structural (establishing the recognition that a trainee may do this activity at a specific level of supervision from now on). In the clinical context, many ad hoc entrustment decisions happen every day. Structural entrustment decisions formally acknowledge that a trainee has passed a threshold that allows for decreased supervision.
The certificate awarded at such occasions has been called a statement of awarded responsibility (STAR) and should be carefully documented.2 Linking an EPA with a competency framework emphasizes essential competency domains when observing a trainee executing the EPA. Programs must also decide how many EPAs are useful for training. While there can be many EPAs that serve to make ad hoc entrustment decisions, EPAs that lead to structural entrustment decisions (ie, certification or STARs) should involve broad-based responsibilities and be limited in number. For a graduate medical education program, no more than 20 to 30 EPAs are recommended. EPAs can be the focus of assessment. The key question is: Can we trust this trainee to execute this EPA? The answer may be translated to 5 levels of supervision for the EPA.
In the setting of clinical medical education, feedback refers to information describing students' or house officers' performance in a given activity that is intended to guide their future performance in that same or in a related activity. It is a key step in the acquisition of clinical skills, yet feedback is often omitted or handled improperly in clinical training. This can result in important untoward consequences, some of which may extend beyond the training period. Once the nature of the feedback process is appreciated, however, especially the distinction between feedback and evaluation and the importance of focusing on the trainees' observable behaviors rather than on the trainees themselves, the educational benefit of feedback can be realized. This article presents guidelines for offering feedback that have been set forth in the literature of business administration, psychology, and education, adapted here for use by teachers and students of clinical medicine.
The Accreditation Council for Graduate Medical Education began an initiative in 1998 to improve resident physicians' ability to provide quality patient care and to work effectively in current and evolving healthcare delivery systems. This initiative, called the Outcome Project, seeks changes in residency programs that focus education on the competency domains, enhance assessment of resident performance and increase utilization of educational outcomes for improving residents' education. Increased emphasis on educational outcome measures in accreditation is another important goal. A considerable amount of development, dissemination and educational activity has been carried out to support project implementation. Thus far, observed effects include changes to accreditation requirements and information collection and enhancements of the educational environments and curriculum of residency education programs. Prospects for meaningful change are good. Further development of assessment methods is needed to advance in-training evaluation of residents and the ACGME goals for utilizing performance data in accreditation and linking education and patient care quality.
Background: Case-based learning (CBL) is a long established pedagogical method, which is defined in a number of ways depending on the discipline and type of 'case' employed. In health professional education, learning activities are commonly based on patient cases. Basic, social and clinical sciences are studied in relation to the case, are integrated with clinical presentations and conditions (including health and ill-health) and student learning is, therefore, associated with real-life situations. Although many claims are made for CBL as an effective learning and teaching method, very little evidence is quoted or generated to support these claims. We frame this review from the perspective of CBL as a type of inquiry-based learning. Aim: To explore, analyse and synthesise the evidence relating to the effectiveness of CBL as a means of achieving defined learning outcomes in health professional prequalification training programmes. Method: Selection criteria: We focused the review on CBL for prequalification health professional programmes including medicine, dentistry, veterinary science, nursing and midwifery, social care and the allied health professions (physiotherapy, occupational therapy, etc.). Papers were required to have outcome data on effectiveness. Search strategies: The search covered the period from 1965 to week 4 September 2010 and the following databases: ASSIA, CINAHL, EMBASE, Education Research, Medline and Web of Knowledge (WoK). Two members of the topic review group (TRG) independently reviewed the 173 abstracts retrieved from Medline and compared findings.
As there was good agreement on inclusion, one went on to review the WoK and ASSIA EndNote databases and the other the Embase, CINAHL and Education Research databases to decide on papers to submit for coding. Coding and data analysis: The TRG modified the standard best evidence medical education coding sheet to fit our research questions and assessed each paper for quality. After a preliminary reliability exercise, each full paper was read and graded by one reviewer, with the papers scoring 3–5 (of 5) for strength of findings being read by a second reviewer. A summary of each completed coding form was entered into an Excel spreadsheet. The type of data in the papers was not amenable to traditional meta-analysis because of the variability in interventions, information given, student numbers (and lack of) and timings. We, therefore, adopted a narrative synthesis method to compare, contrast, synthesise and interpret the data, working within a framework of inquiry-based learning. Results: The final number of coded papers for inclusion was 104. The TRG agreed that 23 papers would be classified as of higher quality and significance (22%). There was a wide diversity in the type, timing, number and length of exposure to cases and how cases were defined. Medicine was the most commonly included profession. Numbers of students taking part in CBL varied from below 50 to over 1000. The shortest interventions were two hours, and one case, whereas the longest was CBL through a whole year. Group sizes ranged from students working alone to over 30, with the majority between 2 and 15 students per group. The majority of studies involved single cohorts of students (61%), with 29% comparing multiple groups, 8% involving different year groups and 2% with historical controls. The outcomes evaluation was carried out either postintervention only (78 papers; 75%), preintervention and postintervention (23 papers; 22%) or during and postintervention (3 papers; <3%).
Our analysis provided the basis for discussion of definitions of CBL, methods used and advocated, topics and learning outcomes and whether CBL is effective based on the evaluation data. Conclusion: Overwhelmingly, students enjoy CBL and think that it enhances their learning. The empirical data taken as a whole are inconclusive as to the effects on learning compared with other types of activity. Teachers enjoy CBL, partly because it engages, and is perceived to motivate, students. CBL seems to foster learning in small groups, though whether this is the case delivery or the group learning effect is unclear.
To assess the impact of diverse continuing medical education (CME) interventions on physician performance and health care outcomes. Using continuing medical education and related phrases, we performed regular searches of the indexed literature (MEDLINE, Social Science Index, the National Technical Information Service, and Educational Research Information Clearinghouse) from 1975 through 1991. In addition, for these years, we used manual searches, key informants, and requests to authors to locate other indexed articles and the nonindexed literature of adult and continuing professional education. From the resulting database we selected studies that met the following criteria: randomized controlled trials; educational programs, activities, or other interventions; studies that included 50% or more physicians; follow-up assessments of at least 75% of study subjects; and objective assessments of either physician performance or health care outcomes. Studies were reviewed for data related to physician specialty and setting. Continuing medical education interventions were classified by their mode(s) of activity as being predisposing, enabling, or facilitating. Using the statistical tests supplied by the original investigators, physician performance outcomes and patient outcomes were classified as positive, negative, or inconclusive. We located 777 CME studies, of which 50 met all criteria. Thirty-two of these analyzed physician performance; seven evaluated patient outcomes; 11 examined both measures. The majority of the 43 studies of physician performance showed positive results in some important measures of resource utilization, counseling strategies, and preventive medicine.
Of the 18 studies of health care outcomes, eight demonstrated positive changes in patients' health care outcomes. Broadly defined CME interventions using practice-enabling or reinforcing strategies consistently improve physician performance and, in some instances, health care outcomes.
How many times have we as teachers been confronted with situations in which we really were not sure what to do? We "flew by the seat of our pants," usually doing with our learners what had been done with us. It would be useful to be able to turn to a set of guiding principles based on evidence, or at least on long term successful experience. Fortunately, a body of theory exists that can inform practice. An unfortunate gap between academics and practitioners, however, has led to a perception of theory as belonging to an "ivory tower" and not relevant to practice. Yet the old adage that "there is nothing more practical than a good theory" still rings true today. This chapter describes several educational theories and guiding principles and then shows how these could be applied to three case studies relating to the "real world." Malcolm Knowles introduced the term "andragogy" to North America, defining it as "the art and science of helping adults learn." Andragogy is based on five assumptions about how adults learn and their attitude towards and motivation for learning. Learners need to feel safe and comfortable expressing themselves. Knowles later derived seven principles of andragogy. Most theorists agree that andragogy is not really a theory of adult learning, but they regard Knowles' principles as guidelines on how to teach learners who tend to be at least somewhat independent and self directed. His principles can be summarised …
The purpose of this review is to synthesize all available evaluative research from 1970 through 1992 that compares problem-based learning (PBL) with more traditional methods of medical education. Five separate meta-analyses were performed on 35 studies representing 19 institutions. For 22 of the studies (representing 14 institutions), both effect-size and supplementary vote-count analyses could be performed; otherwise, only supplementary analyses were performed. PBL was found to be significantly superior with respect to students' program evaluations (i.e., students' attitudes and opinions about their programs; dw, the standardized difference between means weighted by sample size, = +.55, CI.95 = +.40 to +.70) and measures of students' clinical performance (dw = +.28, CI.95 = +.16 to +.40). PBL and traditional methods did not differ on miscellaneous tests of factual knowledge (dw = −.09, CI.95 = −.24 to +.06) and tests of clinical knowledge (dw = +.08, CI.95 = −.05 to +.21). Traditional students performed significantly better than their PBL counterparts on the National Board of Medical Examiners Part I examination, NBME I (dw = −.18, CI.95 = −.26 to −.10). However, the NBME I data displayed significant overall heterogeneity (Qt = 192.23, p < .001) and significant differences among programs (Qb = 59.09, p < .001), which casts doubt on the generality of the findings across programs. The comparative value of PBL is also supported by data on outcomes that have been studied less frequently, i.e., faculty attitudes, student mood, class attendance, academic process variables, and measures of humanism. In conclusion, the results generally support the superiority of the PBL approach over more traditional methods. Acad. Med. 68 (1993):550–563.
Core physician activities of lifelong learning, continuing medical education credit, relicensure, specialty recertification, and clinical competence are linked to the abilities of physicians to assess their own learning needs and choose educational activities that meet these needs. To determine how accurately physicians self-assess compared with external observations of their competence, the electronic databases MEDLINE (1966-July 2006), EMBASE (1980-July 2006), CINAHL (1982-July 2006), PsycINFO (1967-July 2006), the Research and Development Resource Base in CME (1978-July 2006), and proprietary search engines were searched using terms related to self-directed learning, self-assessment, and self-reflection. Studies were included if they compared physicians' self-rated assessments with external observations, used quantifiable and replicable measures, included a study population of at least 50% practicing physicians, residents, or similar health professionals, and were conducted in the United Kingdom, Canada, United States, Australia, or New Zealand. Studies were excluded if they were comparisons of self-reports, studies of medical students, assessed physician beliefs about patient status, described the development of self-assessment measures, or were self-assessment programs of specialty societies.
Studies conducted in the context of an educational or quality improvement intervention were included only if comparative data were obtained before the intervention. Study population, content area and self-assessment domain of the study, methods used to measure the self-assessment of study participants and those used to measure their competence or performance, existence and use of statistical tests, study outcomes, and explanatory comparative data were extracted. The search yielded 725 articles, of which 17 met all inclusion criteria. The studies included a wide range of domains, comparisons, measures, and methodological rigor. Of the 20 comparisons between self- and external assessment, 13 demonstrated little, no, or an inverse relationship and 7 demonstrated positive associations. A number of studies found the worst accuracy in self-assessment among physicians who were the least skilled and those who were the most confident. These results are consistent with those found in other professions. While suboptimal in quality, the preponderance of evidence suggests that physicians have a limited ability to accurately self-assess. The processes currently used to undertake professional development and evaluate competence may need to focus more on external assessment.
This is the last in a series of four articles. Recent high profile scandals in the United Kingdom have highlighted the changing values by which the National Health Service is judged.1 The public expects, and the government has promised to deliver, a health service that is ever safer, constantly up to date, and focused on patients' changing needs. Successful health services in the 21st century must aim not merely for change, improvement, and response, but for changeability, improvability, and responsiveness. Educators are therefore challenged to enable not just competence, but also capability. Capability ensures that the delivery of health care keeps up with its ever changing context. Education providers must offer an environment and process that enables individuals to develop sustainable abilities appropriate for a continuously evolving organisation.
Recent announcements in the United Kingdom of a “university for the NHS,”2 a “national leadership programme,”3 and “workforce confederations”4 raise the question of what kind of education and training will help the NHS to deliver its goals.

Capability is more than competence: competence is what individuals know or are able to do in terms of knowledge, skills, and attitude; capability is the extent to which individuals can adapt to change, generate new knowledge, and continue to improve their performance.

Summary points:

- Traditional education and training largely focuses on enhancing competence (knowledge, skills, and attitudes)
- In today's complex world, we must educate not merely for competence, but for capability (the ability to adapt to change, generate new knowledge, and continuously improve performance)
- Capability is enhanced through feedback on performance, the challenge of unfamiliar contexts, and the use of non-linear methods such as storytelling and small group, problem based learning
- Education for capability must focus on process (supporting learners to construct their own learning goals, receive feedback, reflect, and consolidate) and avoid goals with rigid and prescriptive content
The Carnegie Foundation for the Advancement of Teaching, which in 1910 helped stimulate the transformation of North American medical education with the publication of the Flexner Report, has a venerated place in the history of American medical education. Within a decade following Flexner's report, a strong scientifically oriented and rigorous form of medical education became well established; its structures and processes have changed relatively little since. However, the forces of change are again challenging medical education, and new calls for reform are emerging. In 2010, the Carnegie Foundation will issue another report, Educating Physicians: A Call for Reform of Medical School and Residency, that calls for (1) standardizing learning outcomes and individualizing the learning process, (2) promoting multiple forms of integration, (3) incorporating habits of inquiry and improvement, and (4) focusing on the progressive formation of the physician's professional identity. The authors, who wrote the 2010 Carnegie report, trace the seeds of these themes in Flexner's work and describe their own conceptions of them, addressing the prior and current challenges to medical education as well as recommendations for achieving excellence. The authors hope that the new report will generate the same excitement about educational innovation and reform of undergraduate and graduate medical education as the Flexner Report did a century ago.
Teaching is a demanding and complex task. This guide looks at teaching and what it involves. Implicit in the widely accepted and far-reaching changes in medical education is a changing role for the medical teacher. Twelve roles have been identified and these can be grouped in six areas in the model presented: (1) the information provider in the lecture, and in the clinical context; (2) the role model on-the-job, and in more formal teaching settings; (3) the facilitator as a mentor and learning facilitator; (4) the student assessor and curriculum evaluator; (5) the curriculum and course planner; and (6) the resource material creator, and study guide producer. As presented in the model, some roles require more medical expertise and others more educational expertise. Some roles have more direct face-to-face contact with students and others less. The roles are presented in a 'competing values' framework-they may convey conflicting messages, e.g. providing information or encouraging independent learning, helping students or examining their competence. The role model framework is of use in the assessment of the needs for staff to implement a curriculum, in the appointment and promotion of teachers and in the organization of a staff development programme. Some teachers will have only one role. Most teachers will have several roles. All roles, however, need to be represented in an institution or teaching organization. This has implications for the appointment of staff and for staff training. Where there are insufficient numbers of appropriately trained existing staff to meet a role requirement, staff must be reassigned to the role, where this is possible, and the necessary training provided.
Alternatively, if this is not possible or deemed desirable, additional staff need to be recruited for the specific purpose of fulfilling the role identified. A 'role profile' needs to be negotiated and agreed with staff at the time of their appointment and this should be reviewed on a regular basis.
Background The technical skill of surgical trainees is not well assessed. This study aimed (1) to compare the reliability of three scoring systems, (2) to compare live and bench formats and (3) to assess construct validity of a test of operative skill. Methods Parallel examinations of operative skill, one using live animals and one using simulations, were developed. Performance was graded using operation-specific checklists, detailed global rating forms and pass/fail judgements. Twenty surgical residents each took both formats. Results Disattenuated correlations between live and bench scores were high (0.69–0.72). Mean inter-rater reliability across stations ranged from 0.64 to 0.72. Internal consistency was moderate to high (α: 0.61–0.74) for the live format using the checklist and for live and bench formats using global ratings. Global ratings discriminated between resident levels for both formats (bench: F(2,17) = 4.45, P < 0.05; live: F(2,17) = 3.5, P < 0.05), checklists did not. Conclusion This preliminary study suggests that the Objective Structured Assessment of Technical Skill can reliably and validly assess surgical skills. Global ratings are a better method of assessment than task-specific checklists. Bench model simulation gives equivalent results to use of live animals for this test format.
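The psychometric quantities reported in this abstract have simple closed forms: Cronbach's alpha for internal consistency, and the correction for attenuation (disattenuation), which divides an observed correlation by the square root of the product of the two measures' reliabilities. The following Python sketch illustrates both; the station scores and reliability values used here are hypothetical numbers for illustration, not data from the study itself.

```python
import math

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    `items` is a list of columns, one per item/station; each column holds
    one score per examinee. alpha = k/(k-1) * (1 - sum(item vars)/total var).
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    total_scores = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(col) for col in items) / var(total_scores))

def disattenuated_r(r_xy, r_xx, r_yy):
    """Observed correlation corrected for measurement unreliability:
    r_true = r_xy / sqrt(r_xx * r_yy)."""
    return r_xy / math.sqrt(r_xx * r_yy)

# Hypothetical station scores for four examinees on three stations
stations = [[1, 2, 3, 4], [2, 3, 4, 5], [1, 3, 3, 5]]
print(round(cronbach_alpha(stations), 2))  # → 0.98

# A hypothetical observed live–bench correlation of 0.50, with format
# reliabilities of 0.64 and 0.72, disattenuates upward
print(round(disattenuated_r(0.50, 0.64, 0.72), 2))  # → 0.74
```

The second calculation shows why the paper reports disattenuated correlations: because both the live and bench formats are themselves imperfectly reliable, the raw correlation between them understates how strongly the underlying skills agree.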
Many researchers and educators have identified self-assessment as a vital aspect of professional self-regulation.1,2,3 This rationale has been the expressed motivation for a large number of studies of self-assessment ability in medical education, health professional education, and professions education generally. Unfortunately, the outcome of most studies would seem to cast doubt on the capacity for self-assessment, with the majority of authors concluding that self-assessment is, in fact, quite poor.4 In a recent article, Ward and colleagues suggested that this conclusion must be questioned because the methodologies used to evaluate self-assessment are fraught with methodological weaknesses.4 However, even studies that have attempted to address the weaknesses within the methodological paradigm have produced little evidence for effective self-assessment.5 Thus, the health professional education community is left with a conundrum that can only be resolved by deciding either that the conclusions of the studies are wrong, or that a critical premise underlying the concept of "self-regulation" in the professions is unsupportable. The current paper addresses this conundrum by arguing that there is a problem with the literature on self-assessment, and that this problem is more fundamental than a list of easily correctable methodological flaws. Rather, the roots of the problem in the self-assessment literature involve a failure to effectively conceptualize the nature of self-assessment in the daily practice of health care professionals, and a failure to properly explicate the role of self-assessment in a self-regulating profession.
Until such an articulation of self-assessment is elaborated, it is difficult to know even which literatures might be informative in addressing this issue, and impossible to develop programs of research that operationalize the concept of self-assessment ability in a form that can be effectively studied. Thus, we will begin with a brief reflection on the various functions of self-assessment for a practicing health care professional and the manner in which these functions operate.

The Purposes of Self-Assessment in Practice

Self-assessment has been defined broadly as the involvement of learners in judging whether or not learner-identified standards have been met.6 While such simple definitions are attractive for their concise and encompassing nature, we fear they risk being misleading, as they can cause underappreciation of the complexities of the construct. Self-assessment functions both as a mechanism for identifying one's weaknesses and as a mechanism for identifying one's strengths. Each of these mechanisms can be considered to have distinct, albeit complementary, functions. As a mechanism for identifying weaknesses or gaps in one's skills and abilities, self-assessment serves several potential functions. First, in daily practice, the identification of one's weaknesses allows the professional to self-limit in areas of limited competence. For example, in many circumstances the professional can quickly reject certain plans of action because she recognizes that she is unlikely to be able to complete the component tasks necessary to enact the plan. In other circumstances, a professional might recognize that he is "over his head" in a particular case and decide that it is time to recruit additional resources: to "look this up," to obtain a consultation, to recruit additional support, or to refer the problem to another individual who is more competent in this domain.
Second, in reflecting on one's practice in general, the ability to identify weaknesses can serve the function of helping the professional set appropriate learning goals. That is, the traditional model of self-regulated continuing professional development presumes that an individual will select ongoing learning activities that fill professional gaps; this, in turn, assumes that the professional can effectively self-assess. Thus, in this role, the identification of weakness can help a professional to decide what must be learned. As a corollary to this, effective self-assessment is vital for setting realistic expectations of oneself, to avoid setting oneself up for failure. Thus, the identification of weakness also helps the self-regulating professional to decide what not to try learning, what should be accepted as forever outside one's scope of competent practice. There is a complementary set of functions served by the ability to accurately self-assess one's strengths. First, in daily practice, having a clear and accurate sense of one's strengths allows the professional to act with appropriate confidence. For example, knowing one's strengths provides the professional with the confidence to move forward on a fitting plan of action without inappropriate hesitation or trepidation. Similarly, it ensures that the individual will choose to persist on an appropriate plan of action in the face of initially negative feedback. Even the right path is not always smooth, and early abandonment of an appropriate plan of action is as costly as selecting an inappropriate plan in the first place. Second, when reflecting on one's practice in general, an appropriate assessment of one's strengths ensures that one can set appropriately challenging learning goals, pushing the edges of one's knowledge rather than choosing professional development courses that merely reiterate what one already knows.
At the same time, by knowing one's strengths, a professional can select learning objectives that are within her grasp, and therefore will be able to enjoy the motivational influence of attaining her goals and experience the satisfaction of a job well done. Together, then, the ability to accurately assess one's weaknesses and one's strengths generates a capacity for finding an effective balance both in daily practice and in setting personal learning goals. In daily practice, it generates a balance of confidence and caution, of persistence and flexibility, of experimentation and safety, and of independence and collaboration. In establishing learning goals, it generates a balance of learning enough but not too much, of starting neither too high nor too low, of knowing what to tackle and what to abandon. And in reflecting on accomplishments, it generates a balance of satisfaction and incentive, of self-reward without self-delusion. In order to fulfill these various functions, it seems that self-assessment must be effectively enacted in three forms: summatively, predictively, and concurrently. Enacting self-assessment summatively, a professional must reflect on completed performances both for the purposes of assessing the specific performance and for the purposes of assessing his abilities generally. When evaluating performance on a particular task, the professional can often assess the overall quality of the completed job, and this question may come in various forms. That is, the individual might ask how good this performance was relative to what she could have done; relative to what her peers might typically do; relative to the best that could have been done (a gold standard); or relative to some minimally acceptable standard.
Alternatively, there are some situations where the mechanisms for objectively assessing the outcome are not immediately available, in which case the professional might ask herself how confident she is in the conclusion or outcome generated (is it right? will it stand up? could there have been a better solution given the situation?). The professional might then use her assessment of the specific task to draw summative conclusions about herself or her abilities in this domain generally. Again, such conclusions may be in absolute terms (am I good enough in this domain? am I minimally competent?) or in relative terms (am I average, above average, or below average, and against whom should I be comparing myself?). In drawing general conclusions about her abilities from a particular performance, the professional must also make determinations about whether this particular episode should be taken as an appropriate reflection of her general skills: were there extenuating circumstances that led to a particularly poor (or good) performance that might lead one to discount this outcome as reflective of overall ability? In addition to these summative functions, self-assessment must be used predictively. Professionals are constantly required to assess their likely ability to manage newly arising situations and challenges. In this predictive role, self-assessment leads to questions such as: Am I up to this challenge? Should I be starting this task (now, alone, in this way)? What are realistic goals for accomplishment in this context (what would I consider to be a good or acceptable outcome for me)? How much better might I imagine performing with some additional preparation and is the increased preparation worth the anticipated increase in performance? What additional resources should I recruit (either internally or from the outside) to complement my strengths and shore up my weaknesses? Finally, self-assessment plays a vital role in its concurrent mode of functioning. 
In this concurrent mode, self-assessment acts as an ongoing monitoring process during the performance of a task. It is self-assessment in its concurrent mode that leads to questions such as: Is this coming out the way I expected? Am I still on the right track? Am I in trouble? Should I be doing anything differently? Should I persist in the face of negative feedback from the situation (that things are not going the way I thought they would or as easily as I thought they would)? Do I need to recruit additional resources (internal resources such as attention or external resources such as advice/assistance)? Do I need to reassess my original goal or my original plan? Thus, self-assessment is a complicated, multifaceted, multipurpose phenomenon that involves a number of interacting cognitive processes. It functions as a monitor, a mentor, and a motivator through processes such as evaluation, inference, and prediction. Given this elaborated description of self-assessment, it is unlikely that simplistic questions such as "are health professional trainees effective self-assessors?" will lead to insightful discoveries about the nature and value of self-assessment. Rather, researchers must ask questions such as: On what basis do individuals make these decisions? What factors affect their reasoning? How fine-tuned does the assessment need to be in order to be useful? A first step toward addressing these questions must be to determine who is already asking them and what insights we may borrow from their discoveries and reflections. Our search has led us to several literatures that seem particularly relevant: self-efficacy and self-concept; cognitive and metacognitive theory; social cognition; models of expert performance and the development of expertise; and the concept of reflective practice. In the following sections we will briefly touch on each of these literatures and suggest how they might inform our understanding of self-assessment.
Our intent here is not to provide a systematic review of each literature, but to provide an overview of questions being addressed by researchers outside medical education that should inform our conception of self-assessment as a regulatory strategy. For each new literature we will define the area, provide examples of the issues under consideration, and then summarize the implications for self-assessment in the professions. We will end with a proposal for a program of research that has the potential to move the field beyond our current paradigm of repeatedly concluding that self-assessment is generically poor.

Self-Efficacy and Self-Concept

In studying the accuracy of self-assessments, education researchers in the health professions have tended to focus conceptually on what we have labeled the summative function – the ability to draw general conclusions about one's skills or knowledge in specific domains: How well do I understand endometriosis? Am I able to communicate effectively with other members of the health care team? Practically, this has usually been operationalized in research studies as a request that students try to estimate how well they will/did perform on an immediately following/preceding task. Yet, there is an important distinction between general assessments of one's ability in an area and the more specific question of how one did on a particular task. Researchers in the field of personality theory, for example, usefully distinguish between judgments of self-efficacy and the development of self-concept. Self-efficacy is the belief in one's capabilities to recruit the resources and execute the actions required to manage prospective situations.
Self-concept is the relatively sweeping cognitive appraisal of oneself that is integrated across various dimensions.7 Thus, self-concept beliefs are context free, generalized judgments of self-worth that involve cognitive self-appraisals independent of a specific task or goal (but not necessarily independent of domain). By contrast, self-efficacy is a context specific assessment of competence to perform a specific task or range of tasks in a given domain (i.e., an individual's judgment of her capabilities to complete a given goal). Self-efficacy is, by its very nature, driven by an interaction between self-concept beliefs about one's skills or abilities and the specific context in which those skills or abilities will be applied for the attainment of the particular goal. It is concerned with the contextually embedded orchestration of skills that lead to performance. Self-efficacy differs importantly from the concept of self-assessment as currently envisioned in the health professions education literature in that self-efficacy is not only influenced by direct and indirect feedback, but also influences the future performance of tasks (the choices we make, the effort we put forth, how long we persist when confronted with obstacles or in the face of failure). Thus, there is an important reciprocity between self-efficacy and success. Not only will success lead to a strong sense of self-efficacy, but self-efficacy will also lead to an increased likelihood of success. Self-efficacy beliefs are not merely passive reflections of performance, but part of a self-fulfilling prophecy that affects performance. As a result, there is an advantage to high self-efficacy beliefs even in circumstances where such beliefs may not be warranted by past performance. 
Clearly there is a logical disadvantage to continually overestimating one's abilities, but this obvious disadvantage must be balanced with the value of believing that one can achieve more than one has in the past and that one can manage the challenges that one will face.8 As a result, researchers in the field of self-efficacy appear to be less worried about the "accuracy of self-assessment" and more worried about its impact on impending problem solving situations. They unconcernedly alter the situational self-efficacy of study participants through manipulations such as: varying the order in which people consider hypothetical levels of future performance,9 having subjects contemplate various positive or negative performance-related factors,10 altering the "anchor" values representing high or low levels of performance,11 or providing false performance feedback.12 Such manipulations regularly alter subjects' expectations of success on future events within the context of the study, suggesting that subjects will take contextual information into account when judging (either explicitly or implicitly) the likelihood of future success on tasks within that context. Again, for researchers engaged in the study of self-efficacy, the important point to be taken from these studies is that "trivial" factors alter self-efficacy and can affect future performance.13 For them, the fact that one can radically alter an individual's self-assessment of future performance appears to be simply taken for granted, rendering the question of "accuracy" somewhat nonsensical. Early on, Bandura provided a taxonomy of the sources from which information that would influence self-efficacy could be received.14 It included personal experience, vicarious experience, verbal persuasion, and physiological state.
In addition, Cervone has argued that fundamental cognitive mechanisms (including common heuristics, as will be discussed in the next section) will influence the extent to which information from any given source will be weighed.13 In general, Cervone argues that self-efficacy judgments are not simply driven by an active, motivated distortion of facts in the service of ego protection ("hot cognition"), but rather that fundamental cognitive processes (i.e., those regularly used for a wide variety of judgment tasks – "cold cognition") influence self-efficacy beliefs quite independently. Overall then, it appears that researchers in the self-efficacy literature offer several theoretical and methodological approaches that can inform research in self-assessment. They acknowledge, in fact presume, the instability and situational specificity of self-reflective judgments, they examine and explicitly manipulate the factors that affect these judgments, and they concern themselves with the consequences of these judgments for future behavior.

Cognitive and Metacognitive Theory

In contrast to the focus on "accuracy" in the self-assessment literature and the focus on "consequences" in the self-efficacy literature, cognitive psychologists interested in metacognition (knowledge of one's own knowledge) tend to focus on delineating the mechanisms that allow us to mentally supervise and control the way in which we process information. Of particular interest for our purposes are questions of how people form metacognitive judgments, and what cues influence people's judgments of how well they have learned something. It is a fundamental assumption of this work that we do not have direct introspective access to our own memories or knowledge base. Rather, just as we must infer others' level of knowledge and motivations from their behaviors and other cues, so too we must use peripheral cues to make inferences about our own level of knowledge and learning.
In fact, it is argued that our judgments of our own abilities are often based on the same inferential cognitive strategies, or heuristics, that we use to judge others. For example, the easier it is to process a piece of information, the more likely we are to judge that we will remember that information later (a fluency heuristic).15 Such heuristics are cognitive short-cuts that make us extremely effective and efficient at operating within a complex world despite our limited mental resources. However, they can also bias us in a way that leaves us susceptible to errors in decision making and, when applied to ourselves, errors in trying to identify our own strengths and weaknesses. Studies from this field suggest that, when trying to judge one's ability in a domain or when trying to judge the likelihood of success on a task, the accuracy of these metacognitive judgments is dependent on the extent to which the apparent difficulty of learning mimics the actual difficulty of eventually retrieving the learned material from memory. For example, research demonstrates that, when people are trying to learn a piece of information (such as a list of words) for later recall, several factors affect their judgments of having succeeded in their learning efforts. Metacognitive judgments are more accurate if the repetitions of each word are spaced apart and interspersed with other words than if repetitions of each word are blocked together.16 People appear to use the cue of fluency (i.e., ease of understanding) in judging the extent to which they have learned material and, as such, overestimate the amount they have learned when fluency is increased by blocking repetitions together. Similarly, metacognitive judgments are more accurate when there is a delay between study of the words and efforts to recall during practice. 
In general, people overestimate their learning if the words are blocked or if recall follows too closely on study of the words, because these forms of the task are easier than the actual task they will eventually be expected to perform (recall after a long delay). The harder the retrieval task during the learning period, the better the predictions of the amount of learning that took place.17 Importantly, however, merely mixing the list and delaying recall during practice are often insufficient to improve metacognition if people are left to their own devices during learning. That is, in order to recognize one's inability to recall the words it is necessary to actually try to recall them and make explicit mistakes in retrieval. Without these explicit errors as feedback, people continue to overestimate their ability to recall the words. Interestingly, participants are unlikely to spontaneously induce in themselves the failures that enable better judgments of learning. For example, judgments of learning tend to be more accurate after participants are forced to provide a response and produce the wrong word than if they are allowed to say, "I don't remember."18 This suggests that, without external pressure to do so, the participants did not try and fail, but rather simply did not try, and in doing so, missed an important cue that they might have used to improve their self-assessments. This finding is consistent with the higher correlations between performance and self-assessment seen in the health professions literature when judgments are elicited postperformance relative to preperformance.5 Taken as a whole, the findings from this literature emphasize the importance of moving beyond questions of "can people self-assess accurately" to ones that explicitly focus on the various factors that affect judgments of learning or knowledge or ability. 
In the absence of direct access to our mental states, we are forced to make metacognitive judgments based on a variety of internal and external cues. Metacognitive judgments tend to be more accurate when these cues accurately reflect the factors that affect subsequent performance,19 but there are many instances in which the cues used for judgments of learning lack predictive validity or, worse yet, induce systematic discrepancies between predicted performance and actual performance.20 A better understanding of which cues are used and which ones should be used in health professional education contexts (as well as the impact these cues have on study habits) might better guide training strategies and improve our understanding of the concept of self-regulation. Some insight into the types of cues that are often misleading in real world situations (and reasons for the lack of insight into the inappropriate use of specific cues) has been gained from researchers working in the field of social cognition, the focus of the next section.

Social Cognition

Research in social psychology has led many to conclude that much of what we want to know about ourselves resides outside of conscious awareness.21 Each of us possesses an adaptive unconscious that guides much of our behavior, motivations, and feelings. This part of the mind is labeled unconscious because although we have privileged access to the contents (current thoughts, memories, and objects of attention), we do not enjoy such access to the mental processes that are engaged. We have a tendency to confabulate explanations for our behaviors,22 but these explanations are often inference-based and no more trustworthy than are introspections about the inner workings of our kidneys.23 This unconscious is adaptive because there are benefits to naivety.
Most people believe themselves to be more popular, better drivers, and so on, than the average person.24 While it is logically impossible that we are all above average, at the individual level such positive self-deceptions can be beneficial in practice; individuals who maintain such illusions are less likely to be depressed and more likely to persist at (and succeed on) difficult tasks.25 Gilbert and Wilson talk of the psychological immune system, highlighting the great lengths we will travel to maintain a sense of well-being, rationalizing and justifying threatening information.26 How we rationalize is somewhat idiosyncratic, but Gilovich, as one example, offers a number of mechanisms (some motivated, some inherent in fundamental cognitive processes) by which intelligent, thoughtful people can develop and maintain erroneous beliefs, many of which are relevant to an appreciation of what is required to accurately self-assess one's own strengths and weaknesses.27 As one example, Gilovich describes gambling tendencies and presents evidence that counters the common belief that gamblers think they can beat the odds because they ignore or forget their losses. On the contrary, gamblers focus more attention on their losses and remember them better than wins. They maintain the belief that they are successful, however, by discounting the losses, focusing on the reasons why they should have won if not for some fluke event (e.g., the quarterback being injured). As a result, gamblers come to think of losses as "near wins" and thus maintain the belief that they can beat the odds. Learners likely find themselves in a similar situation. It is very easy to maintain an inaccurate perception of one's own ability by making claims like "I knew the answer, but read the question wrong" or "Wow, I made a lucky guess in response to that question."
This tendency to discount conflicting information, combined with the rarity of corrective feedback, increases the likelihood that flaws in reasoning will be reinforced. Given that the ultimate goal of self-assessment is actually to avoid such biased images of oneself, social psychologists suggest that it is necessary to look outward at one's own behavior and how others react to it rather than simply reflecting inward.28 When reflecting on our knowledge and abilities we have a great deal of information available to us that is not available to anyone else (e.g., private knowledge and idiosyncratic theories), but our phenomenal capacity for discounting distortions that do not fit with our perception of reality can render illusory the feeling of triangulation that additional information provides, thereby resulting in a misleading feeling of confidence in the accuracy of our judgments.28 Sources of such illusions include a tendency to find more exceptions than truly exist, placing undue weight on apparently unusual factors,29 and being more likely than external observers to overlook situational influences on our actions, a tendency broadly recognized as the fundamental attribution error.30 This creates a paradox for self-regulating professionals in that it suggests one must systematically and intentionally elicit the views of others (both explicit opinion and implicit reaction) in order to fully develop an accurate impression of oneself. Without question the perceptions of others are also prone to distortions, but the more heterogeneous the sources of information, the less susceptible our self-concept might be to a biased search for confirmation.
This notion that self-assessment is insufficient for the evolution of an accurate self-concept is consistent with the finding that peers tend to be better predictors of performance than are individuals rating themselves, both in the health sciences31 and in social psychology.32 This view again raises questions about self-assessment quite distinct from the simple question of accuracy that has preoccupied self-assessment researchers in professional education. To what extent do health care practitioners seek out assessments from others? What prompts them to do so? How can we optimally supplement self-assessments with the views of others to create a coherent and appropriate sense of self? To what extent is coaching, mentoring, or peer evaluation necessary or beneficial for such achievements to be reached? This latter question has been a major focus in determining the characteristics of expert performance.

Models of Expert Performance and the Development of Expertise

Some social psychologists have argued that the adaptive unconscious is largely a pattern detector whereas the conscious mind serves more as a fact checker.33 Taken broadly, this is also consistent with current models of proficient clinical reasoning, a construct characterized as the flexible adaptation of multiple approaches to reasoning, including both a nonanalytic, Gestalt-like consideration of new cases and a more carefully controlled (i.e., analytic) consideration of specific features.34,35 In the current context, the question of interest is what role, if any, this conscious or analytic process of self-regulation plays in the development and maintenance of expertise.
In narrowly focusing the study of expertise on replicable elite performance, Ericsson and colleagues have been able to demonstrate on repeated occasions (and in diverse domains) the importance of deliberate practice – effortful, individualized training on specific tasks selected by qualified teachers.36 Deliberate practice is distinct from the enjoyable state of play (characterized as flow—giving up reflective control)37 and work (leading to immediate monetary and/or social rewards). Central to its definition is the presence of an instructor who can push students beyond their current ability by pointing to problems or novel approaches that are likely to go undetected if one relies solely on self-direction. In fact, notable by its absence in all domains of expertise except the health professions is an emphasis on self-directed learning. Contrary to popular belief, the role of early instruction and maxima
Background: Preparing healthcare professionals for teaching is regarded as essential to enhancing teaching effectiveness. Although many reports describe various faculty development interventions, there is a paucity of research demonstrating their effectiveness. Objective: To synthesize the existing evidence that addresses the question: "What are the effects of faculty development interventions on the knowledge, attitudes and skills of teachers in medical education, and on the institutions in which they work?" Methods: The search, covering the period 1980–2002, included three databases (Medline, ERIC and EMBASE) and used the keywords: staff development; in-service training; medical faculty; faculty training/development; continuing medical education. Manual searches were also conducted. Articles with a focus on faculty development to improve teaching effectiveness, targeting basic and clinical scientists, were reviewed. All study designs that included outcome data beyond participant satisfaction were accepted. From an initial 2777 abstracts, 53 papers met the review criteria. Data were extracted by six coders, using the standardized BEME coding sheet, adapted for our use. Two reviewers coded each study and coding differences were resolved through discussion. Data were synthesized using Kirkpatrick's four levels of educational outcomes. Findings were grouped by type of intervention and described according to levels of outcome. In addition, 8 high-quality studies were analysed in a 'focused picture'. Results: The majority of the interventions targeted practicing clinicians.
All of the reports focused on teaching improvement and the interventions included workshops, seminar series, short courses, longitudinal programs and 'other interventions'. The study designs included 6 randomized controlled trials and 47 quasi-experimental studies, of which 31 used a pre-test–post-test design. Key points: Despite methodological limitations, the faculty development literature tends to support the following outcomes. Overall satisfaction with faculty development programs was high; participants consistently found programs acceptable, useful and relevant to their objectives. Participants reported positive changes in attitudes toward faculty development and teaching. Participants reported increased knowledge of educational principles and gains in teaching skills; where formal tests of knowledge were used, significant gains were shown. Changes in teaching behavior were consistently reported by participants and were also detected by students. Changes in organizational practice and student learning were not frequently investigated; however, reported changes included greater educational involvement and establishment of collegiate networks. Key features of faculty development contributing to effectiveness included the use of experiential learning, provision of feedback, effective peer and colleague relationships, well-designed interventions following principles of teaching and learning, and the use of a diversity of educational methods within single interventions. Methodological issues: More rigorous designs and a greater use of qualitative and mixed methods are needed to capture the complexity of the interventions. Newer methods of performance-based assessment, utilizing diverse data sources, should be explored, and reliable and valid outcome measures should be developed.
The maintenance of change over time should also be considered, as should process-oriented studies comparing different faculty development strategies. Conclusions: Faculty development activities appear highly valued by participants, who also report changes in learning and behavior. Notwithstanding the methodological limitations in the literature, certain program characteristics appear to be consistently associated with effectiveness. Further research to explore these associations and document outcomes, at the individual and organizational level, is required.
Outcomes-based education in the health professions has emerged as a priority for curriculum planners striving to align with societal needs. However, many struggle with effective methods of implementing such an approach. In this narrative, we describe the lessons learned from the implementation of a national, needs-based, outcome-oriented competency framework: the CanMEDS initiative of The Royal College of Physicians and Surgeons of Canada. We developed a framework of physician competencies organized around seven physician "Roles": Medical Expert, Communicator, Collaborator, Manager, Health Advocate, Scholar, and Professional. A systematic implementation plan involved the development of standards for curriculum and assessment, faculty development, educational research and resources, and outreach. Implementing this competency framework has resulted in successes, challenges, resistance to change, and a list of essential ingredients for outcomes-based medical education. A multifaceted implementation strategy has enabled this large-scale curriculum change for outcomes-based education.
This article in the Medical Education series provides a conceptual framework for and a brief update on commonly used and emerging methods of assessment, discusses the strengths and limitations of each method, and identifies several major challenges in assessing professional competence and performance.
In this AMEE Guide, we consider the design and development of self-administered surveys, commonly called questionnaires. Questionnaires are widely employed in medical education research. Unfortunately, the processes used to develop such questionnaires vary in quality and lack consistent, rigorous standards. Consequently, the quality of the questionnaires used in medical education research is highly variable. To address this problem, this AMEE Guide presents a systematic, seven-step process for designing high-quality questionnaires, with particular emphasis on developing survey scales. These seven steps do not address all aspects of survey design, nor do they represent the only way to develop a high-quality questionnaire. Instead, these steps synthesize multiple survey design techniques and organize them into a cohesive process for questionnaire developers of all levels. Addressing each of these steps systematically will improve the probability that survey designers will accurately measure what they intend to measure.
Realizing medical education is on the brink of a major paradigm shift from structure- and process-based to competency-based education and measurement of outcomes, the authors reviewed the existing medical literature to provide practical insight into how to accomplish full implementation and evaluation of this new paradigm. They searched Medline and the Educational Resource Information Clearinghouse from the 1960s until the present, reviewed the titles and abstracts of the 469 articles the search produced, and chose 68 relevant articles for full review. The authors found that in the 1970s and 1980s much attention was given to the need for and the development of professional competencies for many medical disciplines. Little attention, however, was devoted to defining the benchmarks of specific competencies, how to attain them, or the evaluation of competence. Lack of evaluation strategies was likely one of the forces responsible for the three-decade lag between initiation of the movement and widespread adoption. Lessons learned from past experiences include the importance of strategic planning and faculty and learner buy-in for defining competencies. In addition, the benchmarks for defining competency and the thresholds for attaining competence must be clearly delineated. The development of appropriate assessment tools to measure competence remains the challenge of this decade, and educators must be responsible for studying the impact of this paradigm shift to determine whether its ultimate effect is the production of more competent physicians.
Context: Current assessment formats for physicians and trainees reliably test core knowledge and basic skills. However, they may underemphasize some important domains of professional medical practice, including interpersonal skills, lifelong learning, professionalism, and integration of core knowledge into clinical practice. Objectives: To propose a definition of professional competence, to review current means for assessing it, and to suggest new approaches to assessment. Data Sources: We searched the MEDLINE database from 1966 to 2001 and reference lists of relevant articles for English-language studies of reliability or validity of measures of competence of physicians, medical students, and residents. Study Selection: We excluded articles of a purely descriptive nature, duplicate reports, reviews, and opinions and position statements, which yielded 195 relevant citations. Data Extraction: Data were abstracted by 1 of us (R.M.E.). Quality criteria for inclusion were broad, given the heterogeneity of interventions, complexity of outcome measures, and paucity of randomized or longitudinal study designs. Data Synthesis: We generated an inclusive definition of competence: the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community being served. Aside from protecting the public and limiting access to advanced training, assessments should foster habits of learning and self-reflection and drive institutional change.
Subjective, multiple-choice, and standardized patient assessments, although reliable, underemphasize important domains of professional competence: integration of knowledge and skills, context of care, information management, teamwork, health systems, and patient-physician relationships. Few assessments observe trainees in real-life situations, incorporate the perspectives of peers and patients, or use measures that predict clinical outcomes. Conclusions: In addition to assessments of basic skills, new formats that assess clinical reasoning, expert judgment, management of ambiguity, professionalism, time management, learning strategies, and teamwork promise a multidimensional assessment while maintaining adequate reliability and validity. Institutional support, reflection, and mentoring must accompany the development of assessment programs.
Although competency-based medical education (CBME) has attracted renewed interest in recent years among educators and policy-makers in the health care professions, there is little agreement on many aspects of this paradigm. We convened a unique partnership – the International CBME Collaborators – to examine conceptual issues and current debates in CBME. We engaged in a multi-stage group process and held a consensus conference with the aim of reviewing the scholarly literature of competency-based medical education, identifying controversies in need of clarification, proposing definitions and concepts that could be useful to educators across many jurisdictions, and exploring future directions for this approach to preparing health professionals. In this paper, we describe the evolution of CBME from the outcomes movement in the 20th century to a renewed approach that, focused on accountability and curricular outcomes and organized around competencies, promotes greater learner-centredness and de-emphasizes time-based curricular design. In this paradigm, competence and related terms are redefined to emphasize their multi-dimensional, dynamic, developmental, and contextual nature. CBME therefore has significant implications for the planning of medical curricula and will have an important impact in reshaping the enterprise of medical education. We elaborate on this emerging CBME approach and its related concepts, and invite medical educators everywhere to enter into further dialogue about the promise and the potential perils of competency-based medical curricula for the 21st century.
All assessments in medical education require evidence of validity to be interpreted meaningfully. In contemporary usage, all validity is construct validity, which requires multiple sources of evidence; construct validity is the whole of validity, but has multiple facets. Five sources (content, response process, internal structure, relationship to other variables, and consequences) are noted by the Standards for Educational and Psychological Testing as fruitful areas to seek validity evidence. The purpose of this article is to discuss construct validity in the context of medical education and to summarize, through example, some typical sources of validity evidence for a written and a performance examination. Assessments are not valid or invalid; rather, the scores or outcomes of assessments have more or less evidence to support (or refute) a specific interpretation (such as passing or failing a course). Validity is approached as hypothesis and uses theory, logic and the scientific method to collect and assemble data to support, or fail to support, the proposed score interpretations at a given point in time. Data and logic are assembled into arguments, pro and con, for some specific interpretation of assessment data. Examples of types of validity evidence, data and information from each source are discussed in the context of a high-stakes written and performance examination in medical education. All assessments require evidence of the reasonableness of the proposed interpretation, as test data in education have little or no intrinsic meaning. The constructs purported to be measured by our assessments are important to students, faculty, administrators, patients and society and require solid scientific evidence of their meaning.
We use a utility model to illustrate that, firstly, selecting an assessment method involves context-dependent compromises, and secondly, that assessment is not a measurement problem but an instructional design problem, comprising educational, implementation and resource aspects. In the model, assessment characteristics are weighted differently depending on the purpose and context of the assessment. Of the characteristics in the model, we focus on reliability, validity and educational impact and argue that they are not inherent qualities of any instrument. Reliability depends not on structuring or standardisation but on sampling. Key issues concerning validity are authenticity and integration of competencies. Assessment in medical education addresses complex competencies and thus requires quantitative and qualitative information from different sources as well as professional judgement. Adequate sampling across judges, instruments and contexts can ensure both validity and reliability. Despite recognition that assessment drives learning, this relationship has been little researched, possibly because of its strong context dependence. When assessment should stimulate learning and requires adequate sampling, in authentic contexts, of the performance of complex competencies that cannot be broken down into simple parts, we need to make a shift from individual methods to an integral programme, intertwined with the education programme. Therefore, we need an instructional design perspective. Programmatic instructional design hinges on a careful description and motivation of choices, whose effectiveness should be measured against the intended outcomes.
We should not evaluate individual methods, but provide evidence of the utility of the assessment programme as a whole.
Although physicians report spending a considerable amount of time in continuing medical education (CME) activities, studies have shown a sizable difference between real and ideal performance, suggesting a lack of effect of formal CME. Objective: To review, collate, and interpret the effect of formal CME interventions on physician performance and health care outcomes. Sources included searches of the complete Research and Development Resource Base in Continuing Medical Education and the Specialised Register of the Cochrane Effective Practice and Organisation of Care Group, supplemented by searches of MEDLINE from 1993 to January 1999. Studies were included in the analyses if they were randomized controlled trials of formal didactic and/or interactive CME interventions (conferences, courses, rounds, meetings, symposia, lectures, and other formats) in which at least 50% of the participants were practicing physicians. Fourteen of 64 studies identified met these criteria and were included in the analyses. Articles were reviewed independently by 3 of the authors. Determinations were made about the nature of the CME intervention (didactic, interactive, or mixed), its occurrence as a 1-time or sequenced event, and other information about its educational content and format. Two of 3 reviewers independently applied all inclusion/exclusion criteria. Data were then subjected to meta-analytic techniques. The 14 studies generated 17 interventions fitting our criteria. Nine generated positive changes in professional practice, and 3 of 4 interventions altered health care outcomes in 1 or more measures.
In 7 studies, sufficient data were available for effect sizes to be calculated; overall, no significant effect of these educational methods was detected (standardized effect size, 0.34; 95% confidence interval [CI], -0.22 to 0.97). However, interactive and mixed educational sessions were associated with a significant effect on practice (standardized effect size, 0.67; 95% CI, 0.01-1.45). Our data show some evidence that interactive CME sessions that enhance participant activity and provide the opportunity to practice skills can effect change in professional practice and, on occasion, health care outcomes. Based on a small number of well-conducted trials, didactic sessions do not appear to be effective in changing physician performance.
The introduction of competency-based postgraduate medical training, as recently stimulated by national governing bodies in Canada, the United States, the United Kingdom, The Netherlands, and other countries, is a major advancement, but at the same time it evokes critical issues of curricular implementation. A source of concern is the translation of general competencies into the practice of clinical teaching. The authors observe confusion around the term competency, which may have adverse effects when a teaching and assessment program is to be designed. This article aims to clarify the competency terminology. To connect the ideas behind a competency framework with the work environment of patient care, the authors propose to analyze the critical activities of professional practice and relate these to predetermined competencies. The use of entrustable professional activities (EPAs) and statements of awarded responsibility (STARs) may bridge a potential gap between the theory of competency-based education and clinical practice. EPAs reflect those activities that together constitute the profession. Carrying out most of these EPAs requires the possession of several competencies. The authors propose not to go to great lengths to assess competencies as such, in the way they are abstractly defined in competency frameworks but, instead, to focus on the observation of concrete critical clinical activities and to infer the presence of multiple competencies from several observed activities. Residents may then be awarded responsibility for EPAs. This can serve to move toward competency-based training, in which a flexible length of training is possible and the outcome of training becomes more important than its length.
There are many theories that explain how adults learn and each has its own merits. This Guide explains and explores the more commonly used ones and how they can be used to enhance student and faculty learning. The Guide presents a model that combines many of the theories into a flow diagram which can be followed by anyone planning learning. The schema can be used at curriculum planning level, or at the level of individual learning. At each stage of the model, the Guide identifies the responsibilities of both learner and educator. The role of the institution is to ensure that the time and resources are available to allow effective learning to happen. The Guide is designed for those new to education, in the hope that it can unravel the difficulties in understanding and applying the common learning theories, whilst also creating opportunities for debate as to the best way they should be used.
Effective leadership and management of educational organizations and activities are vital for the continued success and development of the processes by which existing and future doctors are educated. Poor leadership and weak management are two of the greatest contributors to organizational decline. It is essential to consider leadership and management somewhat separately. Although the two are interrelated and effective organizations need both, the specific skill sets involved in leadership and management are quite different. The chapter explores leadership and management theories, their chronology and how these theories relate to medical education. Taking an international perspective, the chapter will consider some of the skills, qualities, competences, perspectives and approaches that enable medical educators to lead and manage organizations and teams more effectively. Leadership is a hotly contested and ever-evolving concept, but it is when the theories translate into practical realities that educators can best use the vast range of guidance. Much of the literature derives from the business context, although there is an emerging body of research and writing on clinical and educational leadership in the medical and wider healthcare context. The literature relating to medical education focuses on both the clinical and the educational contexts, and this chapter draws on relevant theory and examples throughout. Finally, it is important to note that the literature on leadership has its academic roots in the social, behavioural, and management sciences.
The evidence that is provided, the research methods utilized and the way in which leadership is discussed and conceptualized are different from those of some other educational topics, which draw more strongly from the natural sciences and positivist traditions.
The aim of this study is to review the literature on known barriers and solutions that face educators when developing and implementing online learning programs for medical students and postgraduate trainees.
This Viewpoint discusses ways the coronavirus pandemic is forcing change onto graduate medical education, including online implementation of preclerkship curricula and alternatives to in-person patient experiences in clinical clerkship rotations.
Objectives: To investigate perceptions of medical students on the role of online teaching in facilitating medical education during the COVID-19 pandemic. Design: Cross-sectional, online national survey. Setting: Responses collected online from 4th May 2020 to 11th May 2020 across 40 UK medical schools. Participants: Medical students across all years from UK-registered medical schools. Main outcome measures: The uses, experiences, perceived benefits and barriers of online teaching during the COVID-19 pandemic. Results: 2721 medical students across 39 medical schools responded. Medical schools adapted to the pandemic in different ways. The changes included the development of new distance-learning platforms on which content was released, remote delivery of lectures, and the use of question banks and other online active recall resources. A significant difference was found between time spent on online platforms before and during COVID-19, with 7.35% of students before versus 23.56% during the pandemic spending >15 hours per week (p<0.05). The greatest perceived benefit of online teaching platforms was their flexibility, whereas the most commonly perceived barriers included family distraction (26.76%) and poor internet connection (21.53%). Conclusions: Online teaching has enabled the continuation of medical education during these unprecedented times. Moving forward from this pandemic, in order to maximise the benefits of both face-to-face and online teaching and to improve the efficacy of medical education in the future, we suggest medical schools adopt teaching formats such as team-based and problem-based learning.
These formats use online teaching platforms that allow students to digest information in their own time and then discuss the material constructively with peers, and they have been shown to be effective in achieving learning outcomes. Beyond COVID-19, we anticipate further incorporation of online teaching methods within traditional medical education. This may accompany the observed shift in medical practice towards virtual consultations.
Chapter one focuses on getting you comfortable with the dissertation process. The authors break down the dissertation into 11 simple steps and provide you with the overall structure of the dissertation. In addition, this chapter presents an activity that helps you focus on your accomplishments and disappointments while in graduate school and formulate five personal guidelines that will help you improve the dissertation process.
Objective. To review the literature relating to the effectiveness of education strategies designed to change physician performance and health care outcomes. Data Sources. We searched MEDLINE, ERIC, NTIS, the Research and Development Resource Base in Continuing Medical Education, and other relevant data sources from 1975 to 1994, using continuing medical education (CME) and related terms as keywords. We manually searched journals and the bibliographies of other review articles and called on the opinions of recognized experts. Study Selection. We reviewed studies that met the following criteria: randomized controlled trials of education strategies or interventions that objectively assessed physician performance and/or health care outcomes. These intervention strategies included (alone and in combination) educational materials, formal CME activities, outreach visits such as academic detailing, opinion leaders, patient-mediated strategies, audit with feedback, and reminders. Studies were selected only if more than 50% of the subjects were either practicing physicians or medical residents. Data Extraction. We extracted the specialty of the physicians targeted by the interventions and the clinical domain and setting of the trial. We also determined the details of the educational intervention, the extent to which needs or barriers to change had been ascertained prior to the intervention, and the main outcome measure(s). Data Synthesis. We found 99 trials, containing 160 interventions, that met our criteria.
Almost two thirds of the interventions (101 of 160) displayed an improvement in at least one major outcome measure: 70% demonstrated a change in physician performance, and 48% of interventions aimed at health care outcomes produced a positive change. Effective change strategies included reminders, patient-mediated interventions, outreach visits, opinion leaders, and multifaceted activities. Audit with feedback and educational materials were less effective, and formal CME conferences or activities, without enabling or practice-reinforcing strategies, had relatively little impact. Conclusion. Widely used CME delivery methods such as conferences have little direct impact on improving professional practice. More effective methods such as systematic practice-based interventions and outreach visits are seldom used by CME providers. (JAMA. 1995;274:700-705)
Social accountability has become a key driver of change within health professional education programs worldwide. Over the last two decades, there has been a growing number of frameworks, tools, and standards to help measure social accountability. This paper shares the experiences and lessons learned from one of the institutions that participated in piloting the Institutional Self-Assessment Social Accountability Tool (ISAT). We argue that tools for measuring social accountability are valuable not only because they provide data but, more importantly, because they can embed a dialogic and critical reflective culture within institutions. We describe how the tool expanded our thinking about social accountability to advance it, including what constitutes socially accountable research and who and what fields contribute to this work. We argue that the tool encouraged collaborative critical reflexive practice and offered a process that was incorporated into institutional processes and activities, and simultaneously fostered intra-institutional cooperation. Finally, we contend that the application of quality and equity lenses offers sources of sustainability to advance social accountability in medical education, as they encourage the prioritization of social accountability processes to achieve desired outcomes.
Background: Design thinking is a human-centered, systematic method for problem-solving. To address gaps in curricula for design thinking and curriculum development for medical students, we created a Design Thinking Workshop in an existing students-as-teachers course. Our intervention aimed to teach learners two things: 1) how to apply design thinking to creatively problem solve; and 2) how to utilize design thinking as a curriculum design framework. The central aim of this innovation was for learners to design curricula that meet an identified need of their learners during the students-as-teachers course. Methods: This workshop introduced the framework of design thinking and how to use design thinking tools to create curricula. Seventeen students participated and created educational projects using a design thinking approach. We evaluated the workshop using a pre/post student survey and structured rating of projects. Results: After completing the curriculum, students were more confident in identifying gaps in existing curricula and designing educational projects. After the workshop, the student curriculum projects were more likely to address a specific curricular need when compared with the student projects created during the previous year. Discussion: We found that participating in the workshop increased students' confidence in each step of curriculum development and that, after completing the workshop, more students could design high-quality educational projects by addressing a specific learner need. Despite the intervention, many students' projects were still not "high quality", and based on this team's experience, more faculty support as well as near-peer mentorship could increase the quality of projects.
Introduction Medical learners face challenges affecting educational success. Unsuccessful application of didactic content can result in poor performance in the clinical phase of learning. Efforts that target standardized remediation and application of adult learning theory are effective. Methods In 2020, a physician assistant remediation curriculum was designed to improve clinical knowledge and performance on standardized examinations using evidence-based techniques. The program developed practical and professional skills reinforcing a core set of national examination topics. We surveyed participants to assess program acceptability. To demonstrate efficacy, we determined the pass rate of the Physician Assistant National Certifying Examination (PANCE). Results All 12 learners participating in the program passed the PANCE. Participants agreed that the program was well organized and well paced and provided high-quality content, that the faculty were organized and demonstrated solid medical knowledge, and that their advice was helpful. Discussion Providing an evidence-based remediation program embracing adult learning methods is imperative to address the complexity of today's learners.
ABSTRACT Introduction Training in removable prosthodontics traditionally includes a preclinical laboratory component that precedes clinical exposure. Students struggle to relate the laboratory component to clinical work, because successful learning in removable prosthodontics involves many threshold concepts, some of which have been identified previously. No existing studies consider whether e‐portfolios can help educators and learners identify threshold concepts. Methods Students at a single dental school in Australia compiled an e‐portfolio detailing their learning experiences in a preclinical removable prosthodontics laboratory continuum in 2023. Their responses were analyzed using deductive qualitative analysis. Results Two themes were identified (functional occlusion and functional records). Student responses associated each theme with transformative insights, knowledge integration, and bounded professional practice (three features of threshold concepts). Conclusion Analysis of e‐portfolios may be used to identify threshold concepts in dental education. Functional occlusion and records, two thematic constructs identified in our analysis, may be threshold concepts in removable prosthodontics.
Background Accreditation is a critical process to ensure educational quality and standards in medical training institutions. Internal accreditation serves as a self-regulatory mechanism to evaluate institutional performance against predefined standards. This study aimed to assess the quality of internal accreditation among medical students at Adama Hospital Medical College, focusing on the alignment with national and international educational standards. Methods A mixed-methods study design was employed, integrating quantitative and qualitative approaches. A structured questionnaire was administered to 320 medical students selected through stratified random sampling to collect quantitative data on their perceptions of accreditation quality. Qualitative data were gathered through focus group discussions (FGDs) and in-depth interviews (IDIs) with faculty members and students. Data analysis involved descriptive and inferential statistics for quantitative data and thematic analysis for qualitative findings. Results Quantitative findings showed that 80% of students rated the internal accreditation process as satisfactory, with notable strengths in curriculum alignment (85%) and faculty performance evaluation (82%). However, resource availability (60%) and student feedback mechanisms (58%) require improvement. Regression analysis indicated a significant positive correlation between accreditation quality and students' academic satisfaction (p < 0.05). Standard 1 was rated Excellent (100%), while Standards 3, 4, 5, 6, and 8 scored Good (80%). Standards 2, 7, and 9 received Satisfactory (60%).
Overall, the assessment revealed strong foundations and identified areas needing strategic improvement to ensure continued educational quality. Qualitative findings highlighted themes including transparency in the accreditation process, the relevance of training content, and stakeholder engagement. While students appreciated regular evaluations and feedback, they expressed concerns about insufficient laboratory resources and limited clinical practice opportunities. Faculty members emphasized the need for continuous capacity building and enhanced collaboration with accreditation bodies.
Background: To understand how students engage with course activities, it is important for educators to predict students' degree of participation. Artificial intelligence (AI) has become a valuable tool in higher education, particularly in predicting students' academic engagement. This study compares nine AI machine learning algorithms to determine students' engagement in a basic medical science course and examine its correlation with their assessment scores. Altair RapidMiner Studio software was employed for data visualization, calculation of correlation coefficients, and predictive analysis. Methods: We employed machine learning (ML) classification algorithms to analyze students' engagement in a Medical Parasitology course taught to first-year medical students. The independent variables included the students' performance scores and their level of interaction with course materials on the learning management system, such as frequency of viewing content and completing activities. The dependent variable was students' engagement levels across various activities. To predict students' engagement, we applied nine ML algorithms to the dataset: Naïve Bayes, Generalized Linear Model, Logistic Regression, Fast Large Margin, Deep Learning, Decision Tree, Random Forest, Gradient Boosted Tree, and Support Vector Machine. Their performance was evaluated using several metrics. Results: Logistic Regression exhibited the highest performance among the models tested, achieving an accuracy of 95%, a classification error of 5%, a precision of 100%, a recall of 88.4%, an F-measure of 93.8%, a sensitivity of 88.4%, and a specificity of 100%. Discussion and Conclusions: The number of student logins to course materials was strongly related to students' engagement.
Highly engaged students achieved better results on assessments compared to those with lower engagement. Additionally, students with minimal engagement participated less frequently in various course activities. These findings highlight the potential use of RapidMiner as an effective AI tool for educational institutions to accurately classify students as engaged or non-engaged.
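As a worked check on the classifier metrics reported above, the F-measure follows directly from precision and recall as their harmonic mean. A minimal Python sketch using only the values reported for the Logistic Regression model (no original study data):

```python
# Reproduce the reported F-measure from the reported precision and recall.

def f_measure(precision: float, recall: float) -> float:
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

precision = 1.000   # 100%  (no false positives)
recall    = 0.884   # 88.4% (equal to sensitivity)

f1 = f_measure(precision, recall)
print(f"F-measure: {f1:.3f}")   # ≈ 0.938, matching the reported 93.8%
```

Note that a precision of 100% means the model produced no false positives, which is also why the reported specificity is 100%.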
Abstract Objectives The North American Society for Pediatric Gastroenterology, Hepatology, and Nutrition (NASPGHAN) Training Committee conducted a survey of recent fellowship graduates to assess their confidence in procedure performance, disease management, practice habits, and satisfaction with mentorship. Methods The survey was developed by the Training Committee members and distributed during the summer of 2023 to fellowship graduates who finished training between 2018 and 2023. Confidence levels regarding treating specific diseases and performing gastrointestinal procedures were assessed, including an analysis comparing the data to 2015 survey results. Results The response rate was 21% (140/676). Confidence levels in the performance of most procedures and management of most diseases were high. Graduates of smaller programs reported greater confidence in performing percutaneous endoscopic gastrostomy placement and percutaneous liver biopsy. Outstanding research mentorship was reported more commonly with mentors funded via the National Institutes of Health (NIH) than non‐R/K funded mentors (54% vs. 28%, p = 0.002). Outstanding clinical and career mentorship was similar between large‐sized, medium‐sized, and small‐sized programs. Preparedness for job hunting improved with time (52% vs. 30%, p = 0.005) while preparedness for advocacy work decreased (39% vs. 58%, p = 0.007). Conclusion Respondents reported high confidence in many core activities of pediatric gastroenterology. Satisfaction with research mentorship was higher for NIH‐funded mentors. Confidence in performing certain procedures declined over time, possibly because some centers shifted the responsibility for those procedures to other specialties.
Improved confidence in some training‐related topics such as job‐hunting preparedness coincided with changes made to the curriculum for NASPGHAN's fellows conferences.
Abstract Purpose As a requirement for accreditation, medical schools must have technical standards to outline essential abilities for admission, progression, and graduation. In the absence of national guidance, the AMA published recommendations in 2021 for schools to use "functional" technical standards language (focused on achieving outcomes), as opposed to "organic" (focused on body functions). This study benchmarks the extent to which U.S. MD- and DO-granting programs have adopted functional language and assesses the public availability of technical standards. Method In 2023, the authors conducted a national cross-sectional content analysis of technical standards from all fully accredited U.S. MD- and DO-granting medical schools (N = 192) using AMA-endorsed criteria. Three technical standard domains—observation, communication, and motor—were coded as "functional," "organic," or "mixed," generating a composite score for each school. Descriptive analysis was used to identify patterns and associations. Results Of 192 eligible schools, 99.4% of MD and 100.0% of DO programs provided their technical standards online; one school did not have technical standards. The mean composite score was 1.24 (95% CI: [1.02, 1.46], SD = 1.55) out of a possible 6 for fully functional standards. MD programs were more likely to use functional language than DO programs, reflected in the higher overall mean score of 1.43 (SD = 1.59) for MD programs compared to 0.37 (SD = 1.00, P < .001) for DO programs.
Schools established in 2010 or after were less likely to have functional technical standards than older schools (P = .01), and schools reporting updates to their technical standards in 2022 or later had slightly higher functional scores than schools with less recent updates. Conclusions Adoption of functional technical standards is varied. Most medical schools maintain restrictive organic language despite AMA recommendations. Greater alignment with functional standards could enhance inclusion of people with disabilities in medicine.
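The study above does not spell out its scoring rubric, but a 0–6 composite over three domains is consistent with coding each domain organic = 0, mixed = 1, functional = 2 and summing. The sketch below is a hypothetical reconstruction under that assumption, not the authors' published method:

```python
# Hypothetical reconstruction of the composite technical-standards score:
# three domains, each coded organic (0), mixed (1), or functional (2),
# summed to a 0-6 composite (6 = fully functional). The point values are
# an assumption inferred from the "out of a possible 6" statement.

CODE_POINTS = {"organic": 0, "mixed": 1, "functional": 2}
DOMAINS = ("observation", "communication", "motor")

def composite_score(codes: dict) -> int:
    """Sum the per-domain codes into a 0-6 composite."""
    return sum(CODE_POINTS[codes[d]] for d in DOMAINS)

# Example school: functional observation, mixed communication, organic motor.
school = {"observation": "functional", "communication": "mixed", "motor": "organic"}
print(composite_score(school))  # 3
```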
Background: Medical college teachers now not only provide information but also guide students to validate and identify key issues relevant to their future practice. However, a shortage of teachers in basic medical science subjects is a significant problem, impacting students and the entire medical education system. Objective: This study aimed to assess the effects of teacher shortage in basic subjects on medical education in medical colleges in Bangladesh. Methods: This descriptive cross-sectional study, conducted from January to December 2021 in four medical colleges (two government and two private) in Bangladesh, included 456 subjects. Of these, 96 were basic subject teachers in medical colleges, and 360 were students in the 1st to 3rd phases of the MBBS course. Descriptive statistics and SPSS version 22.0 were utilized for data analysis. Results: In this study, all teachers (100%) and 82.22% of students expressed concerns about a shortage of teachers in their medical colleges. A significant portion of students (29.1%) believed that the shortage of teachers hindered the delivery of sufficient knowledge. Additionally, 23.05% of students reported disruptions in regular and practical classes due to the shortage of teaching staff. Among the 360 students, various suggestions were proposed, with 55.5% advocating for the appointment of more teachers in each basic subject, and 20.28% suggesting increased facilities for basic subject teachers. Conclusion: The shortage of teachers in basic subjects within medical colleges in Bangladesh significantly hampers the quality of education, leading to a lack of sufficient knowledge among students. This shortage may also have implications for their practical classes and training in the institutions. (J Rang Med Col. March 2025; Vol. 10, No. 1: 57-62)
Aims At Koç University School of Medicine, a one‐week rational pharmacotherapy (RPHM) programme, modelled after the WHO 6‐step, has been introduced in the fourth‐year curriculum to improve prescription skills. For efficient problem‐based learning (PBL) sessions on a prespecified topic, students need to brush up on basic pharmacology knowledge, so we implemented the British Pharmacological Society (BPS)‐Prescribing Safety Assessment (PSA) related to prescription skills in eight competencies. A survey‐based study was initiated to evaluate students' self‐confidence. Methods The study included 101 medical students in two groups (respiratory tract infections, urinary infections/sexually transmitted diseases) in 2020–2023. Students were required to take the BPS‐PSA, prepare personal formularies and prove their ability to manage simulated patients in objective structured clinical examination (OSCE) settings. A 15‐item PSA‐invigilated paper with an eight‐item practice paper was explicitly prepared by the BPS to match indications, tailored according to local formularies and regulations. Self‐efficacy surveys were filled out before, during and after completion of the programme. Survey results were analysed concurrently with OSCE and BPS‐PSA performances. Results The number of students and gender distributions were similar in all groups. Average final grades ranged from 74 to 82 points for the OSCE, and 43 to 59 for the BPS‐PSA. All groups did well in the OSCE (92.1% aggregate pass rate). Survey results showed a significant increase in self‐efficacy levels measured across task categories as the training progressed. The highest scores in the PSA were for dose calculation (88.3%) and the lowest for communication (33.01%).
Conclusions Implementing the PSA within the WHO 6‐step model provided a complete assessment of the learning environment and student progress. The new teaching programme supports students in developing their prescribing skills, allowing benchmarking in an international setting.
Medical and dental students experience higher-than-average prevalence of depression, anxiety, burnout, and suicidal ideation compared to the age-matched general population. Early interventions for these students can prevent escalation to more acute mental health crises and suicide. Studies show that medical students first seek support from their peers. Our curriculum teaches students how to support both themselves and their peers prior to an acute mental health crisis. The authors designed, implemented, and evaluated a 90-minute peer-to-peer mental health training that aimed to equip first-year medical and dental students with skills and resources to intervene on behalf of a peer experiencing mental health distress. The workshop consisted of a peer-led didactic session, dyad role-play sessions, and a guided reflection. Resources included a slide deck, student handouts detailing the dyad role-plays, and pre/postsession surveys. One hundred sixty-four first-year students from Harvard Medical School and Harvard School of Dental Medicine completed the required training. Comparisons of survey responses by paired t tests indicated statistically significant increases in mean scores for eight items assessing learner confidence, and an increased sum score of six items assessing learner knowledge (mean of 5.6 postsession vs. 5.4 presession; p = .04). Our results demonstrate the feasibility and effectiveness of peer-led mental health training to increase first-year medical and dental students' related knowledge and confidence in identifying and responding to peers experiencing emotional distress.
The resources developed for this training can be adapted to provide foundational mental health training at other medical and dental institutions.
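The pre/post comparison described above rests on paired t tests, which operate on each student's own pre-to-post change. A minimal sketch with made-up Likert-style scores (not the study's data) shows how the statistic is formed from within-student differences:

```python
# Paired-samples t statistic from within-subject differences.
# The scores below are illustrative, not from the study.

from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Return the paired t statistic and degrees of freedom (n - 1)."""
    diffs = [b - a for a, b in zip(pre, post)]  # per-student change
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))  # mean change / its SE
    return t, n - 1

pre  = [4, 5, 5, 6, 4, 5, 6, 5]   # hypothetical presession confidence
post = [6, 6, 5, 7, 5, 6, 7, 6]   # hypothetical postsession confidence
t, df = paired_t(pre, post)
print(f"t = {t:.2f} on {df} df")
```

The p-value then comes from the t distribution with n - 1 degrees of freedom (e.g. `scipy.stats.ttest_rel` computes both in one call).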
In this article, the authors examine the important yet often overlooked role of specialized disability resource professionals (DRPs) in medical education. While disability inclusion has gained momentum, disparities in accommodations, learning environments, and residency selection persist for medical students with disabilities (MSWDs). Despite national calls for institutional commitment to accessibility, only 9% of U.S. medical schools employ a dedicated DRP, leaving many MSWDs without specialized support to navigate the complexities of medical training. The authors argue that DRPs are essential not only for individual accommodation implementation, but also for institutional change, including faculty development, policy reform, and the dismantling of systemic ableism in medical education. Drawing on Bronfenbrenner's Ecological Systems Theory, the authors propose a framework for understanding the multi-level impact of DRPs, from direct student support to shaping national policies. Data from the 2024 Association of American Medical Colleges (AAMC) Graduation Questionnaire (GQ) highlight significant barriers in disability support services: 15.4% of students found the accommodation process overly difficult, 27.2% refrained from seeking accommodations due to stigma, and 5.1% reported unclear institutional processes for requesting accommodations. These findings underscore the necessity of specialized DRPs to enhance transparency, streamline accommodation processes, and improve faculty preparedness to support disabled learners. The authors advocate for standardized DRP competencies in medical education to ensure consistent, high-quality disability support across institutions.
Without investment in specialized DRPs, inequities in medical education may persist, undermining broader efforts toward inclusion. To create a truly accessible and equitable learning environment, medical schools should move beyond compliance and recognize DRPs as indispensable to the success of all learners.
Abstract Background: The increasing burden of gastrointestinal and liver diseases in Saudi Arabia highlights the need for a strong, standardized GI training program to produce highly skilled gastroenterologists. Historically, joint rotational training programs offered broad exposure but inconsistent mentorship and evaluation. Recently, center-based independent fellowships were introduced to address these challenges, although disparities between high- and low-volume centers persist. Methods: A national survey was conducted using Google Surveys from February to December 2024, targeting GI fellows across Saudi Arabia. The survey was developed based on literature review and validated through pilot testing among fellows during a regional board review course. Participation was encouraged through local representatives and gastroenterology meetings. Data were analyzed using descriptive statistics and thematic analysis. Results: Fifty-four fellows responded. The average satisfaction score was 3.96 of 5, with third-year fellows reporting the highest satisfaction. Strengths included strong endoscopic exposure and collaborative rotations. Mentorship scored lower at 2.78. Only about half of respondents reported opportunities for research, leadership training, or access to hands-on courses. Dedicated teaching sessions were available to just 40.7% of fellows, reflecting significant variability across centers. Conclusions: While GI training in Saudi Arabia has improved, notable disparities remain. Standardizing curricula, strengthening mentorship, expanding simulation-based training, and improving research and leadership opportunities are critical steps.
Integrating successful international models could accelerate progress, ensuring that Saudi Arabia produces confident, competent gastroenterologists ready to meet both national and global healthcare demands.
Point-of-care ultrasound (POCUS) is a vital tool for diagnosing life-threatening conditions, with broad consensus supporting its integration into medical curricula. Despite evidence of the effectiveness of POCUS training, many clinicians do not use these skills in practice, resulting in missed patient benefits. Research on the barriers to POCUS utilisation remains limited. To address this, we conducted a theory-informed exploratory qualitative case study to investigate the utilisation of Focused Intensive Care Echo (FICE) in a specialist heart and lung hospital. The investigation was framed using situated learning and activity theory. We undertook 28 interviews, three focus groups (N = 27) and two expert discussions. Thematic analysis identified barriers, while difference-within-similarity analysis (Hofmann, 2020) uncovered how these interact to hinder POCUS utilisation. We demonstrate how the barriers preventing trainees from using POCUS interacted with the wider activity system, forming vicious cycles that further hindered use. These vicious cycles related to enthusiasm, opportunity, support, participation, communication and norms, and manifested as an underlying tension between the competing priorities of POCUS training and patient care. We discuss how theoretically re-framing the findings suggests low/medium-resource mechanisms that helped mitigate this tension and overcome the vicious cycles. These facilitative mechanisms could generate scalable and sustainable solutions to support POCUS training and utilisation.
Many low- and middle-income countries (LMICs) lack instruments to measure gaps in the public health competency of health professionals. The objective of this study is to develop a validated and reliable Core Public Health Competency (COPHEC) index by assessing the knowledge, skills, abilities, and attitudes of senior and mid-level public health professionals with supervisory and management responsibilities in Uttar Pradesh (UP), India. Using the Core Competency framework that was developed in UP, we generated a draft COPHEC tool with 37 items, measured on a four-point Likert scale. We administered the tool to a total of 166 public health professionals comprising two samples: 84 senior and 82 mid-level public health professionals. To extract factors and assign factor scores to the instrument, we performed an exploratory factor analysis (EFA) using principal component analysis (PCA). Content and face validities were assessed by examining the steps used for the construction of the draft tool. Construct validity was measured by assessing the average factor loading of the items onto the component extracted from the EFA. Internal consistency was used as a measure of reliability. The final COPHEC index had 37 items loaded on one factor in the sample. Content and face validities were assured through a participatory process with relevant stakeholders who identified the initial set of items as part of a Core Competency framework. Construct validity of the COPHEC scale was confirmed by the high average factor loadings, ranging from 0.58 to 0.81. The final index showed adequate reliability with Cronbach's alpha (α) = 0.97. The COPHEC index is a valid and reliable measure of core competencies in public health in UP.
We recommend that governments adapt the index in LMICs to conduct assessments of health workers to identify training needs, evaluate the effectiveness of training programs through participants' competency acquisition pre- and post-training, and inform workforce development efforts in recruitment and performance management.
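The internal-consistency figure reported above (Cronbach's α = 0.97) can be reproduced for any item-by-respondent score matrix. A minimal pure-Python sketch follows; the Likert scores are invented for illustration and are not the study's COPHEC responses:

```python
# Cronbach's alpha: alpha = (k/(k-1)) * (1 - sum(item variances)/variance(totals))
# Illustrative sketch only -- the toy data below are NOT the study's responses.

def cronbach_alpha(items):
    """items: one inner list per item, aligned across the same respondents."""
    k = len(items)                 # number of items
    n = len(items[0])              # number of respondents

    def var(xs):                   # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(it) for it in items)
    # total score per respondent, then the variance of those totals
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))

# Toy data: 3 items, 4 respondents, 4-point Likert scale
scores = [[4, 3, 2, 4],
          [4, 3, 1, 4],
          [3, 3, 2, 4]]
alpha = cronbach_alpha(scores)   # high alpha: items move together across respondents
```

With 37 items as in the COPHEC index, a very high alpha (such as 0.97) indicates the items are strongly intercorrelated and plausibly measure a single construct, consistent with the one-factor EFA solution.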
Despite the calls for adoption of competency-based medical education (CBME) in undergraduate medical education, many programs have demonstrated only partial implementation. Although most medical schools have developed competencies and benchmarks, many continue to rely on traditional, course-based assessment methods rather than the programmatic assessment that CBME calls for. The transition of Washington University School of Medicine (WashU) to a comprehensive CBME program, focusing on the implementation of programmatic assessment based on the principles of longitudinality, triangulation, and proportionality, is described. Program design began in 2017. On graduating its first class in 2024, WashU had successfully transitioned from a traditional 2-year preclinical, 2-year clinical instructional and assessment system to one aligned with key principles of programmatic assessment and CBME. The new system uses competency committees to make formative and summative judgments of competency attainment to inform promotion and graduation recommendations, and separates competency decisions from promotion and graduation decisions between 2 committees to support regulatory requirements. Internal program evaluations showed high student satisfaction with assessment alignment (70.5%), formative feedback (90.6% quality, 91% quantity), assessment fairness (92.5%), midrotation feedback (94% quality, 91.3% quantity), and coaching (93.4%).
Graduation questionnaires demonstrated students were confident that they were judged on abilities and not identity group (e.g., race or ethnicity, gender, other) (90% vs 68.4% nationally) and perceived the curriculum fostered and nurtured development as people and professionals (80.2% vs 73.5% nationally and 93.4% vs 92.8%, respectively). Next steps include continuing to have conversations with faculty and students about the goals of the new program, differences between competency-based and traditional assessment, and reminders about how to optimize performance review; continuing to increase partnership with coaches and career advisors; continuing to provide feedback on specific competencies; and exploring additional outcomes.
Despite institutional efforts to promote diversity and inclusion, medical education continues to marginalize students with disabilities through persistent structural, cultural, and procedural barriers. Inaccessible learning environments, inadequate accommodations, and entrenched ableist attitudes contribute to inequitable educational experiences and outcomes for disabled students. These barriers are further compounded for individuals who hold intersecting marginalized identities, particularly those who are racially and ethnically underrepresented in medicine. This commentary applies the disability justice framework, a praxis developed by disabled queer and trans activists of color, to critically examine the limitations of current inclusion efforts within academic medicine. By analyzing the framework's 10 guiding principles, the authors identify systemic gaps and propose concrete, equity-driven strategies for transforming medical education. Recommendations include integrating intersectionality into curricula, adopting universal design, revising technical standards, elevating the leadership of disabled individuals, and embedding structural accountability. Operationalizing disability justice enables medical institutions to move beyond performative inclusion, dismantle ableist norms, and foster educational environments in which all trainees, particularly those at the margins, can thrive.
Competency-based education is essential for training nurse practitioners (NPs). Although entrustable professional activities (EPAs) have been widely used to assess competency in health professionals, a valid EPA-based assessment scale is required to assess the clinical competencies of NPs in acute care settings. The aim of this study was to develop and examine the reliability and validity of an EPA-based assessment scale for NPs. A psychometric study with a cross-sectional survey was used. The participants included NP instructors as evaluators and novice NPs currently in clinical practice as test takers. Convenience sampling was used to recruit participants from among members of the Taiwan Association of Nurse Practitioners. First, five EPA focus groups developed five EPAs using a template and following the suggested steps. Second, consensus validation was conducted using a Delphi study. Third, content validity was assessed through a national study in which 218 novice NPs served as test takers and 57 certified clinical NP educators served as observers to test the EPAs. Cronbach's alpha and the intraclass correlation coefficient were calculated to examine the scale's reliability, and exploratory factor analysis, concurrent validity, and discriminant validity were applied to assess the validity of the EPAs. Finally, the EPA-based assessment scale of NP care for patients with fever was used in the data analysis. The final version of the scale comprised a 22-item observable checklist designed to evaluate clinically independent performance (rated 1-5) on nine key NP competencies. The Cronbach's alpha coefficient for the overall scale was .95.
The results revealed that the EPA-based assessment scale comprised two key factors: direct patient-centered care and communication/time management. Factor loadings for each item ranged from .58 to .83, accounting for 70.83% of the total variance. Concurrent validity indicated a high correlation between the developed EPA-based assessment scale and the Ottawa Clinic Assessment Tool (r = .96, p < .001). Discriminant validity analysis indicated a statistically significant difference between novice and expert NPs (F = 7.84, p < .001). The novel EPA-based assessment scale demonstrated satisfactory reliability and validity, supporting its application in evaluating the clinical competencies of NPs.
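The concurrent-validity figure reported above (r = .96) is an ordinary Pearson correlation between two sets of ratings. A hedged pure-Python sketch follows; the score lists are invented for illustration and are not the study's EPA or Ottawa Clinic Assessment Tool data:

```python
# Pearson's r between two raters'/tools' scores for the same examinees.
# Illustrative sketch only -- the toy scores are NOT the study's data.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired ratings for six examinees on two instruments
epa_scores    = [3.0, 3.5, 4.0, 4.5, 5.0, 2.5]
ottawa_scores = [3.1, 3.4, 4.2, 4.4, 4.9, 2.7]
r = pearson_r(epa_scores, ottawa_scores)   # near 1: the tools rank examinees alike
```

A correlation this close to 1 between a new scale and an established tool (here, the Ottawa Clinic Assessment Tool) is the usual evidence offered for concurrent validity.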
Understanding the changing patterns of student engagement in undergraduate medical education is crucial for effective learning outcomes and overall academic success, as well as for improving the quality of medical education. This study examines the dynamics of student engagement across 4 dimensions (behavioral, emotional, cognitive, and agentic engagement) within the context of undergraduate medical education in China. This longitudinal study uses data from the 2020 and 2021 China Medical Student Survey. The study comprises 4 cohorts spanning all 5 grades of undergraduate medical education in China. Student IDs were matched to track the same students across years, yielding a sample of 67,439 students from 94 medical schools. A multilevel growth curve model and a Latent Markov Model were used for data analysis based on a cohort-sequential design. While emotional, cognitive, and agentic engagement show a similar tendency to decline with grade level until rebounding slightly between grades 4 and 5, behavioral clinical engagement increased significantly as students progressed through the grades. High-, moderate-, and low-engagement statuses were defined using the Bayesian Information Criterion, and the initial probabilities of grade-1 medical students being in the low-, moderate-, and high-engagement statuses were 15.34%, 13.47%, and 71.19%, respectively. Transition matrices show that high-engagement students usually maintain their high-engagement status (range: 80.33%-99.92%), moderate-engagement individuals fluctuate, though most remain in this status (range: 53.96%-81.81%) or move to the low-engagement status (range: 14.85%-45.42%), and low-engagement students demonstrate a limited propensity for transitioning to a high-engagement status (range: 0.18%-7.53%).
This study demonstrates changing patterns of student engagement in the Chinese undergraduate medical education context. Identifying these patterns offers valuable insights for educators and policymakers to enhance student engagement.
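The transition-matrix analysis above can be illustrated by propagating an initial status distribution through one step of a Markov chain. In the sketch below, the initial probabilities are those the study reports for grade 1; the matrix entries are invented illustrative values falling inside the reported ranges, not the fitted model parameters:

```python
# One step of a discrete Markov chain over engagement statuses.
# Transition-matrix entries are illustrative, NOT the paper's fitted values.

def step(dist, P):
    """One Markov step: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# States: 0 = low, 1 = moderate, 2 = high engagement
initial = [0.1534, 0.1347, 0.7119]   # grade-1 probabilities reported in the study
P = [  # rows: from-state, cols: to-state (each row sums to 1)
    [0.93, 0.03, 0.04],   # low-engagement students mostly stay low
    [0.30, 0.65, 0.05],   # moderate students stay or drop to low
    [0.02, 0.08, 0.90],   # high-engagement students mostly stay high
]
after_one_year = step(initial, P)    # distribution one grade later
```

Iterating `step` over successive grades shows how the reported "sticky" low state and the slow leakage from moderate to low gradually erode the initially large high-engagement group, which is the qualitative pattern the study describes.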
The Plastic Surgery In-Service Examination (PSISE) objectively compares plastic and reconstructive surgery (PRS) residents' knowledge, while the ASPS Education Network (EdNet) is a "gold standard" that programs use to design their curricula. This study aims to quantify the degree to which critical PRS learning modalities align. We also sought to understand the rationale behind how program leadership designed PRS residency core curricula. Questions from ASPS EdNet resident courses and the 2018 to 2022 PSISEs were tabulated and assigned to EdNet topics. Then, curricula from 15 PRS residencies were assigned to EdNet topics. Topic alignment between curricula, PSISEs, and EdNet courses was tested with Pearson's χ2 and Fisher's exact tests. Program directors (PDs) and associate program directors (APDs) from PRS residencies were surveyed and/or interviewed regarding the design of their core curricula. A total of 2038 questions corresponding to 102 topics were queried from EdNet. A total of 1170 PSISE questions and 910 curricular lectures were assigned to these topics. Program curricula taught 30 topics at significantly different frequencies than those taught by EdNet. The past 5 PSISEs tested 28 topics at significantly different frequencies than those taught by EdNet. One-third of gold-standard EdNet topics are taught and tested at significantly different frequencies. Comparison of these teaching tools shows that progress can be made to further align PRS educational modalities to improve resident learning and assessment.
Everyone talks about the end goal, becoming a doctor, but not enough is said about the torturous road that gets you there: the sleepless nights, the plaguing impostor syndrome, the friendships forged in lecture halls and hospital corridors, and the small moments that remind you why you chose this career in the first place. The journey through medical school is anything but linear; there are ups and many downs. It begins with wide-eyed enthusiasm, often mixed with a hefty dose of anxiety. You step into your first lecture hall expecting answers, and instead find more questions. How does anyone remember the Krebs cycle? What is the difference between all these types of shock? Why does every patient in the textbooks seem like a trick question? As the years pass, theory gives way to practice. You swap the library for the wards, and textbooks for actual patients. It is an overwhelming and humbling transition as you realise that the real world does not always follow the textbooks. Through many bedside teaching sessions and countless CAPS sign-offs, you slowly build practical skills on top of the bank of human anatomy knowledge acquired in the first few years. Throughout all this, there is growth in both knowledge and character as you build a strong professional and personal portfolio to help your career as a doctor flourish. Of course, this journey is not complete without its fair share of setbacks: exam failures, burnout, self-doubt, and the universal experience of impostor syndrome. These are challenges every medical student faces at various points. It all culminates in the final year, where you sit your final exam, the journey comes to an end, and the next step can truly begin.
Overall, medical school is a journey that prepares you with the skills and knowledge required to be a doctor in both a practical sense and a more personal sense. Through the relationships built and challenges overcome, it is a stepping stone that better prepares you for the journey ahead in the healthcare environment.
Introduction: Modern healthcare demands continuing professional development (CPD) that bridges learning and practice and motivates health professionals. Current CPD activities often prioritize qualification, while socialization and subjectification purposes, considered vital in other educational contexts, could help to foster dynamic learning. Group discussion of audit and feedback (A&F) is a form of CPD that is used increasingly and that may fulfill these needs. However, little is known about how this approach creates learning opportunities or what the group dynamic contributes. Method: In this video-stimulated interview study, we explored how a group of experienced general practitioners perceive the value of group discussion of A&F in their professional development, employing a constructivist paradigm. We first analyzed our findings inductively using Thematic Analysis. A second, deductive analysis followed, using Biesta's concept of the three purposes of education (qualification, socialization, and subjectification) as a framework. Results: According to our participants, group discussion of A&F allows for a reflective process shaped by the group. The group helps deepen reflection and assists in forming individual and collective opinions. The meetings enhanced motivation for both individual and collective learning. Key conditions included a safe learning environment and a high level of enjoyability. The group meetings offered opportunities for all three purposes of education. Discussion: Group discussion adds value to individual A&F by offering room for socialization and subjectification, as well as classic qualification purposes.
It thereby offers a future-proof form of CPD that could improve the quality of healthcare and stimulate lifelong learning.
Objectives: Competency-Based Medical Education (CBME) emphasizes acquiring the specific competencies necessary for effective healthcare delivery. In community medicine, CBME focuses on preventive healthcare, public health policy, and addressing the social determinants of health. Within this framework, self-directed learning (SDL) plays a key role in fostering independent learning, critical thinking, and the application of evidence-based practices. This study aims to evaluate the effectiveness of an SDL module on National Health Programs for third-year MBBS students. Material and Methods: A pre-test and post-test design was used to assess the impact of an SDL module developed in the Department of Community Medicine, Sri Manakula Vinayagar Medical College and Hospital, Puducherry. The study included 158 third-year MBBS students, who participated in SDL sessions on National Health Programs. A formative assessment was conducted using post-tests at the end of each module. IEC approval (EC/91/2021) was obtained. Data were analyzed using IBM SPSS Statistics (version 24.0), and a paired t-test was used to evaluate the statistical significance of the differences between pre-test and post-test scores. Results: The mean age of the participants was 18 ± 2.5 years. Among the students, 54.4% were female and 45.6% were male. Significant improvements were observed in the post-test scores across all modules. The median pre-test score was 6 (4–9), which increased to 8.75 (6.15–10) post-test (P < 0.05). The highest improvements were noted in modules such as the National Tuberculosis Elimination Program and the Universal Immunization Program.
Conclusion: The implementation of the SDL module in Community Medicine significantly improved students’ knowledge regarding National Health Programs. The findings underscore the effectiveness of SDL in enhancing CBME and highlight the importance of such modules in preparing students for real-world public health challenges. Further studies are recommended to explore long-term knowledge retention and the application of learned concepts in practical settings.
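The pre/post comparison above rests on the paired t-test, which tests whether the mean within-student score difference departs from zero. A minimal pure-Python sketch of the t statistic follows; the score lists are invented for illustration and are not the study's data:

```python
# Paired t statistic: t = mean(d) / (sd(d) / sqrt(n)), d = post - pre per student.
# Illustrative sketch only -- the toy scores are NOT the study's data.
import math

def paired_t(pre, post):
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)   # sample variance
    sd_d = math.sqrt(var_d)
    return mean_d / (sd_d / math.sqrt(n))   # compare against t with n-1 df

# Hypothetical pre- and post-test scores for eight students
pre  = [6, 5, 7, 4, 6, 8, 5, 6]
post = [8, 7, 9, 6, 7, 10, 7, 8]
t = paired_t(pre, post)   # large t: consistent per-student gains
```

The resulting statistic would then be compared against the t distribution with n - 1 degrees of freedom to obtain the p-value the study reports; because the differences here are nearly uniform, the statistic is large even with only eight toy pairs.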
In health professions education, the hidden curriculum is a set of implicit rules and expectations about how clinicians act and what they value. In fields that are very homogeneous, such as the rehabilitation professions, these expectations may have outsized impacts on students from minoritized backgrounds. This qualitative study examined the hidden curriculum in rehabilitation graduate programs (speech-language pathology, occupational therapy, and physical therapy) through the perspectives and experiences of 21 students from minoritized backgrounds. Semi-structured interviews explored their experiences with their programs' hidden curricula, revealing expectations about ways of being, interacting, and relating. Three overarching themes emerged, each reflecting tensions between conflicting values: (i) blend in but stand out; (ii) success lies in individualism, while de-prioritizing the individual; and (iii) fix the field, using your identities as a tool. When the expectations aligned with students' expectations for themselves, meeting them was a source of pride. However, when the social expectations clashed with their own culture, dis/ability, gender, or neurotype, these tensions became an additional cognitive burden, and students rarely received mentorship for navigating it. Health professions programs might benefit from fostering students' critical reflection on their hidden curricula and their fields' cultural norms to foster greater belonging, agency, and identity retention.
Vikram Kumar | Pediatric Infectious Disease
Background: Medical students are repeatedly exposed to challenging situations while working with healthcare teams, so acquiring conflict management skills is necessary. This study aimed to investigate the effect of an educational intervention on the conflict management skills of medical students using self- and observer-assessment. Methods: This educational intervention with a pre- and post-test design was conducted in 2022–2023. Second-year medical students of Tehran University of Medical Sciences volunteered to participate in a randomized study with a control group. The participants were divided into an intervention group (12 groups of 4 each, n = 48) and a control group (12 groups of 4 each, n = 48). The intervention group was educated based on the Fogg model, and the control group was trained using the conventional method. Students' conflict management skills were evaluated using a self-assessment checklist and observer-assessment. Results: The observer-assessment findings revealed that the post-test rating in the intervention group was significantly higher than in the control group, while the pre-test scores in the two groups did not differ significantly (P = 0.03; ES = 0.44 and P = 0.30; ES = 0.18, respectively). Moreover, the comparison of pre-test and post-test scores showed that the educational intervention significantly increased the mean post-test score in both the intervention and control groups (P ≤ 0.001; ES = 0.97 and P ≤ 0.001; ES = 1.34, respectively).
The comparison of pre-test and post-test scores via self-assessment showed that the skill score increased only in the intervention group (P = 0.02; ES = 0.48 and P = 0.98; ES = 0.004, respectively). Conclusion: This study found that using the Fogg model on e-learning platforms enhances medical students' conflict management skills, highlighting the effectiveness of well-designed, creative, and active model-based teaching methods.
Background: In Thailand, occupation-based practice (OBP) has been emphasized in the outcome-based curriculum for Chiang Mai occupational therapy students since the 2021 revision of the bachelor's curriculum; however, how comprehensively students understand OBP and implement its skills in recent years has remained unclear. Objective: The aim of this study was to explore the understanding of OBP among Thai occupational therapy students through their clinical fieldwork and classroom experience. Materials and methods: This study used a convergent mixed-methods design, collecting data with a forty-item self-assessment questionnaire on clinical fieldwork experience from third- and fourth-year students, and conducting focus group interviews with first- to fourth-year students between September and October 2022. Descriptive statistics were calculated for the seventy-five returned questionnaires. Thirty-eight participants took part in a total of nine focus group interviews, and the qualitative data were analyzed using content and thematic analyses. Both sets of data were merged. The findings were interpreted against the six levels of the cognitive domain of Bloom's revised taxonomy (remembering, understanding, applying, analyzing, evaluating, and creating). Results: Among students with clinical fieldwork experience, at least 66.7% rated their understanding of OBP as high on every item, with high self-assessed understanding across the six levels at 90.67%, 81.33%, 86.4%, 79.93%, 85.33%, and 83.47%, respectively. Two main themes are presented: first, occupation as the central focus of practice; and second, the importance of theoretical knowledge and experience.
The higher cognitive levels of analyzing, evaluating, and creating were evident in fourth-year students, while the basic levels of remembering, understanding, and applying supported students' understanding of OBP. Conclusion: The results indicate a high average comprehension of OBP, over 75% across all levels, supported by the two themes, which is useful for improving outcome-based learning and teaching in the occupational therapy curriculum.
[Translated from Ukrainian] The development of medical education has been significantly influenced by technological advances and artificial intelligence (AI), which are transforming the training of healthcare professionals and raising the quality of education. Ukraine has made considerable progress in integrating technology and AI into its medical education system. However, certain challenges remain, including limited infrastructure in some regions, financial constraints, and the need to expand access to advanced technologies. Nevertheless, active participation in international medical education networks and state support for digital education open prospects for broadening access to these innovations. The integration of technology and AI into the medical curriculum is rapidly changing how students access learning resources, develop practical skills, and use advanced diagnostic and treatment tools. This prepares future healthcare professionals to work effectively in a technology-driven medical environment. By continuing to invest in these technologies, the medical education system has the potential to increase its sophistication and quality. AI also enables the personalization of education, offering individual learning paths based on students' progress, results, and areas needing improvement. This is especially useful in large cohorts, where individual attention may be limited. The integration of AI and digital technologies into the medical curriculum prepares future healthcare professionals for more efficient and effective work in technologically advanced medical settings. Despite the difficulties caused by the war and limited resources, Ukraine is taking significant steps in adopting modern teaching methods and technologies. Ongoing reforms and international partnerships are likely to further strengthen the medical education system, ensuring that future healthcare professionals are prepared for the rapidly evolving needs of the healthcare system.
Empowering (2025-06-19) | IGI Global eBooks
Dr. Bobby Anderson is a nurse practitioner at Mayo Clinic in the Department of Pulmonary and Critical Care in Rochester, Minnesota. Excerpts from the author's 2021 interview are shared in … Dr. Bobby Anderson is a nurse practitioner at Mayo Clinic in the Department of Pulmonary and Critical Care in Rochester, Minnesota. Excerpts from the author's 2021 interview are shared in which Bobby discusses challenging experiences in his undergraduate clinical work that inform his choice to empower his own clinical students' critical thinking and application of knowledge. He positions mistakes his students may make as opportunities for learning rather than moments meant to penalize or shame. Bobby also describes how his teaching and feedback approaches are critical to ensuring that his clinical students receive opportunities to apply the content they learned in their campus coursework to clinical contexts with actual patients.
Continuity of education (CoE) is a growing area of interest in health professions education, both for its impacts on learning (continuity of curriculum and continuity of supervision; CoS) and for its influence on patient care (continuity of patient care; CoPC). The COVID-19 pandemic offered an opportunity to examine discontinuity of education and the potential impacts of interruption to CoE, a knowledge gap in medical education research. We conducted 14 semi-structured qualitative interviews with participants from a Canadian family medicine programme. Interviews conducted on Zoom were recorded and transcribed, then analysed iteratively using reflexive thematic analysis to identify major themes. We identified three themes. Theme 1: changed relationships, an alteration due to mitigation strategies. Theme 2: preparedness for practice, a decrease despite mitigation strategies. Theme 3: adaptivity in the face of change, a consequence of mitigation strategies. This study suggests three main implications of disruption to CoE. Faculty development and curricular design are needed to support interrupted relationships, including finding ways to help faculty and residents nurture changed relationships. Physicians in their first 5 years of practice who experienced disruption in their training may benefit from additional support to address the negative impact on their sense of preparedness for practice. Finally, the positives learned from this study can be used to face future disruptions to CoE.
Work-integrated learning (WIL) is a key component of many professional programs, allowing students to apply classroom knowledge in workplace and practice settings. While most pharmacy schools include clinical rotations in their curriculum, few integrate co-operative education ("co-op"), resulting in a dearth of literature on how each WIL model prepares students for pharmacy careers. We analyzed student performance evaluations to identify co-op supervisors' and rotation preceptors' perceptions of students' job/practice readiness skills, distinguishing the unique and complementary skills developed in each experience. In the University of Waterloo Doctor of Pharmacy (PharmD) program, students complete three co-op work terms in their second and third years and three clinical rotations in their fourth year. Supervisor and preceptor qualitative feedback on student performance for three graduating classes was analyzed; two researchers independently coded the data, using content analysis to identify themes. Both WIL models support students' growth in confidence, their ability to engage in tailored communication with patients, and their collaboration with other healthcare providers. A hierarchy of learning was observed, with co-op helping students gain experience as contributing members of an interprofessional team and learn how to adapt to workflow changes. This provided a foundation for final-year rotations, allowing students to focus on and gain self-assurance in providing patient care services. Supervisors and preceptors perceive that co-op and rotations provide students with multiple important skills for job/practice readiness.
Co-op's fostering of job readiness skills prepares students for more advanced, focused, and nuanced practice skill development in the program's final year.
Introduction: The development of health professions education (HPE) as an academic discipline requires well-qualified educational researchers, equipped with the competence to advance the field. There is, therefore, a need to establish and support pathways through which early-career researchers (ECRs) can develop the necessary competence to pursue a career in this field. Approach: A group of 19 international experts in HPE from various professions conducted a 2.5-day Scoping Workshop in Hannover, Germany, in November 2024. The main output of the workshop is a joint position statement on the support of ECRs in HPE, produced using appreciative inquiry and collaborative writing. Position: The Scoping Workshop led to a dynamic and productive exchange of ideas and experiences, resulting in a common vision and five positions: (1) identify, establish, and recognize distinct career paths; (2) develop and implement a robust funding strategy; (3) create a nurturing and diverse intellectual culture; (4) connect research to practice and address real-world problems; (5) invest in leadership, advocacy, and coaching. There was strong agreement that these areas were not well developed and required urgent attention. Outlook: There is a need to foster interprofessional and interdisciplinary collaboration and to provide sustainable support structures so that ECRs can advance HPE. Only when these areas are addressed can these educational researchers contribute to the development of effective learning that prepares the healthcare workforce to meet today's challenges. Researchers, educators, decision-makers, and stakeholders in academia, education, and health and social care share a responsibility for shaping the way forward.