‘Digitally native’ learners will change medical education
How much a resident knows is no longer the most accurate way to assess a physician in an era of students who grew up getting their information in a digital world.
I recently had an interesting conversation with several co-residents about how our health care system should evaluate physician performance. If nothing else, the discussion highlighted how challenging this issue has been for almost all medical specialties, including internal medicine, where the controversy has been punctuated by debates about Maintenance of Certification (MOC) and licensure. It remains to be seen what will develop after the American Board of Internal Medicine recently changed its MOC programs in the face of intense criticism. What is clear, however, is that organizations, including ACP, must remain engaged to guide physicians through any future changes.
Part of the discontent appears tied to the sentiment that MOC and similar activities may not accurately reflect individual practice realities. Among other things, case mix and regional trends make certain topics crucial for some practitioners and less pertinent for others. Many feel that competence is multifaceted and not fully represented by MOC requirements. At the heart of these divergent experiences and perspectives, however, lies a common question: What should physicians be responsible for knowing?
For many decades, good physicians have been viewed, both within and outside the medical community, as those with the most encyclopedic knowledge. In academic centers, we often praise doctors who can offer historical perspectives, molecular explanations, and evidence-based literature related to a condition or disease. Older physicians frequently offer tales of internalizing vast amounts of information without outside help. (“We didn't have iPhones or the Internet in those days.”) Medical student evaluations are often based on fund of knowledge: what doctors can recall or reproduce in standardized formats. To the general public, the long, grueling nature of medical training is predicated largely on the fact that there is so much to memorize and retain.
To be fair, this is not without reason. It's impossible to provide high-quality care without a solid body of clinical knowledge and an understanding of pathophysiology. There is much to praise in clinicians who command both the clarity and the uncertainty of guidelines and standards of care. A great deal of information goes into good doctoring.
Nonetheless, fund of knowledge cannot be the sole (or even main) proxy for physician quality. There are many reasons for this, but perhaps none more important than the fact that it fails to capture how a new generation of doctors learns, while running contrary to the very spirit of continuing medical education.
The vast majority of medical learners these days are digitally native, meaning they enter training having grown up on a steady diet of technology. They study and take exams using electronic interfaces. They engage much of the world around them, from friends to professional communities to current events, through the Internet and mobile media.
They are also fluent at obtaining information in a thoroughly digital world. Students and residents are often far more adept at navigating electronic medical records than full-time practitioners. Many can pull up drug doses, treatment algorithms, complication rates, and synopses of rare conditions in a fraction of the time it takes their attendings, many of whom trained when information lived only in libraries or in textbooks tucked under their arms. Digital “knowledge clouds” are emerging in many learning settings, putting an immense amount of information at a physician's fingertips.
This is where a digital learning style carries important implications for the future of physician evaluation. Doctors can no longer be asked to “know it all,” an expectation made untenable by the impossibly large and ever-expanding body of medical knowledge. Even within specific specialties, it is difficult to standardize requirements when individuals follow divergent career pathways. Internists, for example, pursue work in academic research, quality improvement, clinical education, full-time practice, management and leadership, public health, and nonprofit organizations. Beyond this, competency goals will likely remain moving targets amid shifting policy and payment pressures.
Under these conditions, “high-quality” physicians will no longer be defined simply as encyclopedic master clinicians or doctors who can internalize all standardized materials. Beyond fund of knowledge, quality must be understood through a clinician's ability to solve problems and remedy his or her own deficiencies.
One promising way to do this is an approach termed the “triple jump.” In this framework, physicians are evaluated in 3 stages. First, they make a “first pass” through a question, generating an initial score that reflects their current fund of knowledge. Next, they are given a period of independent research, time they can use to look up information and supplement their knowledge; how well they perform this second step is measured as a “process score.” Finally, they apply the new information to the original question, generating a final “assisted” score. All 3 scores, and the differences between them, contribute to the overall evaluation.
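To make the mechanics concrete, here is a minimal sketch of how the 3 scores might be tallied. It is purely illustrative: the 0-to-100 scale, the field names, and the composite weights are assumptions of mine, since no published triple-jump rubric specifies how the scores should be combined.

```python
from dataclasses import dataclass

@dataclass
class TripleJumpResult:
    """Scores from one hypothetical triple-jump evaluation.

    The 0-100 scale and field names are illustrative assumptions.
    """
    first_pass: float  # stage 1: answer from current fund of knowledge
    process: float     # stage 2: quality of the independent look-up
    assisted: float    # stage 3: answer after consulting resources

    @property
    def improvement(self) -> float:
        """How much the research stage improved the final answer."""
        return self.assisted - self.first_pass

    def overall(self, weights=(0.3, 0.3, 0.4)) -> float:
        """Weighted composite of the 3 stage scores.

        The weights are arbitrary placeholders; a real program would
        choose them based on what it wants to emphasize.
        """
        w1, w2, w3 = weights
        return w1 * self.first_pass + w2 * self.process + w3 * self.assisted

# Example: a resident who starts weak but looks things up well.
result = TripleJumpResult(first_pass=55, process=90, assisted=88)
print(result.improvement)  # 33: a large gain from the research stage
print(result.overall())    # ~78.7: composite evaluation score
```

Note that under this kind of scheme, a large gap between the first-pass and assisted scores is not a failure; it is direct evidence of exactly the look-it-up-and-apply-it skill the framework is meant to reward.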
This kind of staged, assisted approach better embodies the spirit of what evaluation should be: measuring an individual's growth as a “lifelong learner.” It also reflects 2 realities apparent to most digitally native trainees: that a significant proportion of what is needed for clinical care can be looked up at the point of care, and that being a good doctor does not mean knowing it all.
As digital natives finish training and enter full-time practice, staged approaches like the “triple jump” will become much more viable. In truth, modern learners already work under similar conditions, scouring the Internet, navigating apps, and pulling data from a diverse collection of point-of-care sources to care for patients and fill gaps in their own knowledge.
Going forward, physician evaluation should test and harness this native learning style. Doing so will change more than the language and program requirements around MOC. It will also fundamentally improve how we understand our own competence and communicate it to patients and peers.