Times are changing, and the amount of information coming at us from all directions can easily be overwhelming. This information—whether true or false—is unrelenting and has increased in magnitude over the past five years. Part of it may be the natural progression of one’s career and the expansion of one’s network, but most of it is just the sheer volume that is at our fingertips.
Research and patient experience
As I reflect on my everyday practice of seeing patients, teaching, and learning, I’ve come to realize that I need to make two things my priority: keeping up to date on the latest research and putting myself in my patients’ shoes.
What we learned in optometry school is great; however, being efficient and effective in practice is going to pay much higher dividends. But without backing up how we diagnose and treat patients with research and evidence, we put ourselves at risk. Patient-reported outcomes (PROs) make what we do impactful because they are what the patient wants and experiences in the real world.
Each patient sits down in our chairs not having been enrolled into a study with inclusion and exclusion criteria. They come as they are: from one side of the train tracks, maybe not having eaten lunch that day, or worrying about a sick child. Yes, we stand on the research evidence, but it is with compassion that we customize the care we deliver to every patient. From this place, our care can actually be creative, rewarding, and fun.
Last summer, I had the opportunity to attend the Evidence Based Clinical Practice (EBCP) program at McMaster University near Toronto. Many recognize McMaster as the birthplace of EBCP, and the program has incorporated teaching styles such as team-based learning and intentional pauses during lectures. There are valid concerns about whether this method is actually feasible and whether it can translate easily into everyday practice. My opinion is a resounding yes!
There are systematic approaches to evaluating risk of bias. I believe we are intuitively aware of issues such as measurement bias, investigator bias (company sponsorship?), recall bias, and overall weaknesses in subject recruitment and study design. Are researchers’ results clinically significant? Or are they statistically significant only because thousands of subjects were enrolled?
A new tool I learned was the idea of a funnel plot and how it helps assess the risk of publication bias. Academic institutions—whether or not they collaborate with companies—may not always publish negative or non-results. Without being in the know, clinicians may never catch wind of this. When we pull the highest Level I evidence of randomized controlled trials (RCTs) and systematic reviews of those RCTs, we can become suspicious when funnel plots do not show an equal distribution around the effect size.
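The mechanism behind the missing points on a funnel plot is easy to demonstrate. Here is a minimal sketch, with entirely invented numbers (not from any real trial set), in which a treatment has no true effect yet the "published" literature, filtered through an exaggerated file-drawer rule, suggests a strong benefit:

```python
import random

random.seed(1)

# Simulate 200 small trials of a treatment whose TRUE effect is 0 (no benefit).
# Each trial's observed effect is just noise around zero.
observed = [random.gauss(0, 1) for _ in range(200)]

# Exaggerated "file drawer" rule: only clearly favorable results get published
# (here, negative effect values mean benefit).
published = [e for e in observed if e < -0.5]

mean_all = sum(observed) / len(observed)
mean_published = sum(published) / len(published)
print(f"Mean effect, all trials:       {mean_all:+.2f}")        # near zero
print(f"Mean effect, published trials: {mean_published:+.2f}")  # looks beneficial
```

A funnel plot of the published subset alone would show exactly the asymmetry described above: the unfavorable half of the scatter is simply absent.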
A case in point is a Cochrane review of prophylactic non-steroidal anti-inflammatory drugs (NSAIDs) for the prevention of macular edema after cataract surgery.1 Some 28 of the 34 relevant studies selected by the authors looked at a variety of topical NSAIDs combined with steroids compared to steroids alone. World leaders in cataract surgery, as well as standard procedures of care outlined in textbooks, recommend that NSAIDs be used more often than not, even in routine cataract surgery, because the sequelae of macular edema are unfavorable for both doctor and patient.
The Cochrane review concluded that there was low-certainty evidence of this benefit; however, what was eye-opening was the funnel plot. I trust the conclusion of this systematic review because of the effect size of a 60 percent reduction in relative risk (0.40) and the tight confidence interval (0.32 to 0.49). And yet the funnel plot suggests that equivocal results were left unpublished: those showing a weaker benefit or even an increased relative risk. Points are missing to the right of the blue hash line, which raises suspicion of publication bias. Now the question must be asked: Can NSAIDs help but also cause harm?
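For readers who want to reproduce this kind of effect-size arithmetic, here is a sketch of how a relative risk and its 95 percent confidence interval are computed from a 2×2 table. The event counts below are invented for illustration; they are not the Cochrane review's data:

```python
import math

# Hypothetical trial of macular edema events after cataract surgery (invented counts).
events_nsaid, n_nsaid = 8, 200        # NSAID + steroid arm
events_control, n_control = 20, 200   # steroid-only arm

risk_nsaid = events_nsaid / n_nsaid
risk_control = events_control / n_control
rr = risk_nsaid / risk_control  # relative risk

# Standard error of log(RR), then a 95% CI computed on the log scale.
se = math.sqrt(1/events_nsaid - 1/n_nsaid + 1/events_control - 1/n_control)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)

print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # RR = 0.40 (95% CI 0.18 to 0.89)
print(f"Relative risk reduction = {1 - rr:.0%}")       # 60%
```

Note how the invented trial reaches the same point estimate (0.40) as the review but with a much wider interval: tight confidence intervals come from pooling many studies, which is exactly why missing studies matter.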
While publication bias is a form of selection bias, I was made acutely aware of its downstream ramifications in a TED talk by Ben Goldacre, MD: “What Doctors Don’t Know About the Drugs They Prescribe” (https://buff.ly/2wOR6Pl). Dr. Goldacre specifically breaks down how FDA trials for 12 antidepressants had 38 positive results and 36 negative results, but only 3 of the negative results were published. This completely skews the perception of the effectiveness of these antidepressants when the results of 33 studies are left hidden. At its worst, publication bias can cause harm to patients. This form of bias has garnered so much attention that a systematic review exists specifically looking at publication bias. I suggest that we should be skeptical of health news headlines, especially when things sound too good to be true.
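Plugging in the trial counts Dr. Goldacre cites makes the distortion concrete (assuming, for simplicity, that all of the positive trials were published):

```python
# Antidepressant FDA trial counts as cited in the talk.
positive_trials, negative_trials = 38, 36
published_negative = 3
published_positive = positive_trials  # simplifying assumption for illustration

true_rate = positive_trials / (positive_trials + negative_trials)
apparent_rate = published_positive / (published_positive + published_negative)

print(f"Positive rate across all trials:     {true_rate:.0%}")     # 51%
print(f"Positive rate in the published ones: {apparent_rate:.0%}") # 93%
```

A clinician reading only the published literature would see a drug class that "works" 93 percent of the time when the underlying trials were closer to a coin flip.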
The Cochrane database is gold standard for systematic reviews. Thankfully, most of these publications are made open access, which means you don’t need a subscription or library to read the full articles. From the Cochrane Library page (https://buff.ly/2hukjoe), click “Eyes & Vision” (https://buff.ly/2hrmagS) to see what treasures you may uncover there. Two recent articles that may interest the everyday clinician are titled, “Neuroprotection for treatment of glaucoma in adults” and “Non-surgical interventions for acute internal hordeolum.”
Putting it into practice
While value-based models bring with them negative connotations, we can leverage specificity and sensitivity literature to screen better and work smarter, not harder.
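One way to work smarter with that literature is to run the numbers before adopting a screening protocol. This sketch, using made-up test characteristics, shows how positive predictive value falls off as disease prevalence drops:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: P(disease | positive test), via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical screening test: 90% sensitive, 95% specific.
for prevalence in (0.10, 0.01, 0.001):
    print(f"prevalence {prevalence:.1%}: PPV = {ppv(0.90, 0.95, prevalence):.1%}")
```

Even a good test generates mostly false positives when the condition is rare, which is why matching the screening test to the prevalence of the target condition matters so much.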
New equipment is expensive. But coupled with a revamp of workflow, it can be empowering to both you and your staff. Technicians may begin to feel that they are a part of the healthcare system as more responsibility is delegated to them. Situations arise in which a technician suspects a condition while listening to a chief complaint, then decides on her or his own to run one more diagnostic test for the doctor. When the hunch proves correct, everyone wins.
One simple example: a technician identifies a 28-year-old myopic male with intraocular pressures (IOP) of 24 mm Hg, then runs optical coherence tomography (OCT) for both pachymetry and a "baseline" retinal nerve fiber layer scan, circling the clock hour that looks suspicious when pigment dispersion glaucoma is the concern.
In November 2016, I had the honor of welcoming visitors from around the United States and the world to our new healthcare facility, Ketchum Health. While talking in the atrium with educators from the State University of New York (SUNY) College of Optometry, we arrived at an interesting topic of school vision screenings.
One educator mentioned how it made more sense to perform corneal topography rather than IOP at school vision screenings. I couldn’t have agreed more, particularly if referencing epidemiology studies.
The prevalence of pediatric glaucoma is 2.85 per 100,000 births in one study and 2.29 per 100,000 in another; call it roughly 0.0025 percent.2,3 By contrast, keratoconus also affects the pediatric population, though more often in the teenage years, at a prevalence of 1 in 2,000, or a frighteningly high 1 in 375 in some parts of the world. Few population-based studies exist, but the one published by Hofstetter back in 1959, looking at the population of Indianapolis, found 120 per 100,000 (0.12 percent), or 48 times higher than pediatric glaucoma. Amblyopia, convergence insufficiency, anisometropia, and strabismus come in at even higher prevalence rates.
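The arithmetic behind that comparison is worth making explicit (using the rounded 0.0025 percent figure for pediatric glaucoma):

```python
pediatric_glaucoma = 2.5 / 100_000  # rounding between the 2.85 and 2.29 per 100,000 figures
keratoconus = 120 / 100_000         # Hofstetter's 1959 Indianapolis estimate

print(f"Pediatric glaucoma: {pediatric_glaucoma:.4%}")                 # 0.0025%
print(f"Keratoconus:        {keratoconus:.2%}")                        # 0.12%
print(f"Ratio:              {keratoconus / pediatric_glaucoma:.0f}x")  # 48x
```

Per screening hour, in other words, topography is far more likely to catch disease than tonometry in this age group.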
Using the evidence
Here is an example of how I used evidence from what my colleagues and I learned in a simple survey. The background is the impact of computer use on the tear film and dry eye symptoms.
When my father became presbyopic, near variable focus lenses (NVFL) or computer progressive addition lenses (PALs) became a necessity for his productivity. As the years went by, he and his engineers began working on two computer monitors that sat a bit farther back than the traditional single monitor. Widescreens also became the norm. This led me back to our survey.
We found that close to 8 percent of working adults use three or more computer monitors (n=220). This has changed my case history to ask patients how many computer monitors they use, what type, and how many hours of meetings they have per week. By knowing these numbers, we can prescribe better spectacle lens solutions that should include a customized anti-fatigue or NVFL coupled with a blue-blocking filter.
In the future, my wish list includes more natural history and prognosis studies because patients and colleagues are asking for them. If we as optometrists have done our job well, most patients can understand risk factors and pathophysiology when we show them anatomy and physiology via Google Images and YouTube or Vimeo videos.
What happens in the long term will then remain somewhat of a mystery. It’s been said that it takes 10 to 15 years to get evidence into practice. My hope is that we all continue to be active learners and readers and partake in evidence-based practice that includes:
• Best research evidence
• Clinical expertise
• Patient values and preferences.
Our patients deserve the latest and greatest as soon as the evidence is fairly strong. My request is that we not settle for being even three years behind.
1. Lim BX, Lim CH, Lim DK, Evans JR, Bunce C, Wormald R. Prophylactic non-steroidal anti-inflammatory drugs for the prevention of macular oedema after cataract surgery. Cochrane Database Syst Rev. 2016 Nov 1;11:CD006683.
2. Taylor RH, Ainsworth JR, Evans AR, Levin AV. The epidemiology of pediatric glaucoma: the Toronto experience. J AAPOS. 1999 Oct;3(5):308-15.
3. Aponte EP, Diehl N, Mohney BG. Incidence and clinical characteristics of childhood glaucoma: a population-based study. Arch Ophthalmol. 2010 Apr;128(4):478-82.