We propose a model for programmatic assessment in action, which simultaneously optimises assessment for learning and assessment for decision making about learner progress. This model is based on a set of assessment principles that are interpreted from empirical research. It specifies cycles of training, assessment and learner support activities that are complemented by intermediate and final moments of evaluation on aggregated assessment data points. A key principle is that individual data points are maximised for learning and feedback value, whereas high-stakes decisions are based on the aggregation of many data points. Expert judgement plays an important role in the programme. Fundamental is the notion of sampling and bias reduction to deal with the inevitable subjectivity of this type of judgement. Bias reduction is further sought in procedural assessment strategies derived from criteria for qualitative research. We discuss a number of challenges and opportunities around the proposed model. One of its prime virtues is that it enables assessment to move beyond the dominant psychometric discourse, with its focus on individual instruments, towards a systems approach to assessment design underpinned by empirically grounded theory.
For portfolios to be effective in supporting and assessing competence development, robust integration into the curriculum and tutor support are essential. Further studies should focus on the effectiveness and user-friendliness of portfolios, the merits of holistic assessment procedures, and the competences of an effective portfolio mentor.
Aim Portfolios are often used as an instrument with which to stimulate students to reflect on their experiences. Research has shown that working with portfolios does not automatically stimulate reflection. In this study we addressed the question: What are the conditions for successful reflective use of portfolios in undergraduate medical education?
Methodology/research design We designed a portfolio that was aimed at stimulating reflection in early undergraduate medical education, using experiences described in the medical education literature and elsewhere. Conditions for reflective portfolio use were identified through interviews with 13 teachers (mentors), who were experienced in mentoring students in the process of developing their portfolios. The interviews were analysed according to the principles of grounded theory.
Results The conditions for successful reflective use of portfolios that emerged from the interviews fell into 4 categories: coaching; portfolio structure and guidelines; relevant experiences and materials; and summative assessment. According to the mentors, working with a portfolio designed to meet these conditions will stimulate students' reflective abilities.
Conclusion This study shows that portfolios are a potentially valuable method of assessing and developing students' reflective skills in undergraduate medical training, provided certain conditions for effective portfolios are recognised and met. Portfolios have a strong potential for enhancing learning and assessment but they are very vulnerable and may easily lead to disappointment. Before implementing portfolios in education, one should first consider whether the necessary conditions can be fulfilled, including an appropriate portfolio structure, an appropriate assessment procedure, the provision of enough new experiences and materials, and sufficient teacher capacity for adequate coaching and assessment.
Learners make complex judgements regarding the credibility of information about clinical performance. Credibility judgements influence the learning that arises from the clinical experience. Further understanding of how such judgements are made could guide educators in providing credible information to learners.
Programmatic assessment is an integral approach to the design of an assessment program with the intent to optimise its learning function, its decision-making function and its curriculum quality-assurance function. Individual methods of assessment, purposefully chosen for their alignment with the curriculum outcomes and their information value for the learner, the teacher and the organisation, are seen as individual data points. The information value of these individual data points is maximised by giving feedback to the learner. There is a decoupling of assessment moment and decision moment. Intermediate and high-stakes decisions are based on multiple data points after a meaningful aggregation of information and supported by rigorous organisational procedures to ensure their dependability. Self-regulation of learning, through analysis of the assessment information and the attainment of the ensuing learning goals, is scaffolded by a mentoring system. Programmatic assessment-for-learning can be applied to any part of the training continuum, provided that the underlying learning conception is constructivist. This paper provides concrete recommendations for implementation of programmatic assessment.
The active participation of learners in their own learning is possible when learning is supported by programmatic assessment. Certain features of the comprehensive programme of assessment were found to influence student learning, and this influence can either support or inhibit students' learning responses.
Although cultural factors can pose a challenge to the application of PBL in non-Western settings, it appears that PBL can be applied in different cultural contexts. However, its globalisation need not entail uniform processes and outcomes, and culturally sensitive alternatives might be developed.