A study conducted by researchers from the Johns Hopkins Carey Business School, the Johns Hopkins Bloomberg School of Public Health, and Yale School of Medicine has suggested an association between publicly traded companies selling artificial intelligence (AI)-enabled medical devices (AIMDs) and the recalls of those products. Researchers attribute the finding to a lack of testing of the technology on real humans before it goes to market, and suggest it could reflect pressures within the industry to launch products faster.1,2
The research, published in JAMA Health Forum, examined 950 AIMDs approved by the US Food and Drug Administration as of November 2024.2
Recalls of these FDA-cleared AIMDs were concentrated shortly after clearance, with 43.4% of AIMD recalls occurring within the first 12 months of device clearance. This is approximately double the rate reported for all 510(k) devices, suggesting the process “may overlook early performance failures of AI technologies,” according to a news release.1
“We were stunned to find that nearly half of all AI device recalls happened in the very first year after approval,” said Tinglong Dai, corresponding author and Bernard T. Ferrari Professor of Business at the Johns Hopkins Carey Business School, in the release. “And the even bigger surprise? The recalls were heavily concentrated among tools with no reported clinical validation whatsoever. We just thought, ‘wow — if AI hasn’t been tested on people, then people become the test.’”
Although publicly traded companies produced just over half (53.2%) of AIMDs on the market, they were responsible for over 90% of recall events, the study found. Public company status was thus associated with a 5.9 times higher chance of a recall event.1,2
“In this cross-sectional study, recalls of FDA-cleared AIMDs were uncommon but were concentrated early after clearance and predominantly involved products lacking clinical validation and manufactured by publicly traded companies, suggesting that the 510(k) process may overlook early performance failures of AI technologies,” the study authors, led by Branden Lee, BS, of the Department of Orthopedic Surgery at Johns Hopkins Medicine, the Johns Hopkins University School of Medicine, and the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland, stated. “Requiring prospective evaluation or issuing time-limited clearances that lapse without confirmatory data may reduce these risks. Given that over 90% of recalled units were produced by public companies, heightened premarket clinical testing requirements and postmarket surveillance measures may improve identification and reduction of device errors, similar to risk-based strategies in pharmacovigilance. The association between public company status and higher recalls may reflect investor-driven pressure for faster launches, warranting further study.”
Study authors also noted that 510(k) clearance of a device does not require prospective human testing, meaning that many AIMDs enter the market with limited or no clinical evaluation. Of the recalled devices from privately held companies, 40% lacked clinical validation, while higher proportions of recalled devices from established public companies (77.7%) and smaller public companies (96.9%) lacked clinical validation.1,2
“The lopsided nature of these recalls should give every advocate for medical AI pause. Publicly traded companies, the big fish in this still-small pond, built just over half the devices but were responsible for nearly all the recalled units,” said Dai in the release.
Given the study’s results, the authors called for requiring human testing or clinical trials before a device is cleared for market, or for incentivizing companies to conduct ongoing clinical studies and to collect real-world performance data postmarket.2
“If these tools are going to scale, we’ve got to test them on real people,” said Dai in the release. “Either require proper human studies up front or give approvals that expire unless the evidence shows they actually work.”