AISLE’s AI analyzer found 38 CVEs in OpenEMR (open-source EHR software used by 100,000+ providers serving 200M+ patients) in Q1 2026, including two CVSS 10.0 SQL injections and a FHIR compartment bypass.
Key Takeaways
Two CVSS 10.0 SQL injections: both the Patient REST API _sort parameter and the Immunization module concatenated user input directly into SQL queries, enabling UNION SELECT data exfiltration, blind injection via SLEEP(), and RCE where the database user held FILE privileges.
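The two injection patterns above can be sketched in a few lines. OpenEMR is PHP, but the mechanics are language-agnostic; this minimal Python/sqlite3 sketch uses hypothetical table and function names to show a UNION SELECT leak through string concatenation, the parameterized fix, and an allowlist fix for a _sort-style parameter (ORDER BY columns cannot be bound as query parameters).

```python
import sqlite3

# Toy schema standing in for OpenEMR's tables; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patient (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO patient VALUES (?, ?)", [(1, "Alice"), (2, "Bob")])
conn.execute("CREATE TABLE secrets (v TEXT)")
conn.execute("INSERT INTO secrets VALUES ('ssn-123')")

def find_patients_vulnerable(name):
    # BUG: user input is concatenated straight into the SQL string.
    sql = f"SELECT id, name FROM patient WHERE name = '{name}'"
    return conn.execute(sql).fetchall()

def find_patients_safe(name):
    # Fix: bind the value as a parameter; the driver handles quoting.
    return conn.execute(
        "SELECT id, name FROM patient WHERE name = ?", (name,)
    ).fetchall()

ALLOWED_SORT = {"id", "name"}

def list_patients_sorted(sort):
    # ORDER BY identifiers cannot be bound as parameters, so the fix for
    # a _sort-style parameter is a strict allowlist of column names.
    if sort not in ALLOWED_SORT:
        raise ValueError(f"invalid sort column: {sort!r}")
    return conn.execute(f"SELECT id, name FROM patient ORDER BY {sort}").fetchall()

payload = "x' UNION SELECT 1, v FROM secrets --"
leaked = find_patients_vulnerable(payload)  # UNION smuggles the secrets row out
clean = find_patients_safe(payload)         # payload is treated as data: no rows
```

With the concatenating version, the payload rewrites the query into a UNION over the secrets table; the parameterized version returns nothing because the whole payload is matched as a literal name.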
The FHIR CareTeam endpoint leaked data for every patient even under a patient-scoped OAuth2 token, because FhirCareTeamService never declared IPatientCompartmentResourceService; the compartment-filtering code existed but was never invoked.
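The failure mode is a dispatch-by-interface pattern: the filter only runs for services that declare a marker interface, so forgetting the declaration silently disables scoping. This is a hypothetical Python rendering; only IPatientCompartmentResourceService comes from the source, and all other class and method names are illustrative.

```python
class IPatientCompartmentResourceService:
    """Marker interface: services whose results must be patient-scoped."""

    def filter_to_patient(self, records, patient_id):
        return [r for r in records if r["patient"] == patient_id]

# Toy data standing in for CareTeam resources.
RECORDS = [
    {"id": 1, "patient": "p1"},
    {"id": 2, "patient": "p2"},
]

class CareTeamService:
    # BUG (as in the source): the marker interface is never declared,
    # so the dispatcher below skips compartment filtering entirely.
    def search(self):
        return list(RECORDS)

class FixedCareTeamService(CareTeamService, IPatientCompartmentResourceService):
    # Fix: declaring the interface is all it takes; the filter already exists.
    pass

def handle_request(service, token_patient_id):
    records = service.search()
    # The filter code exists, but only runs for services that opt in --
    # exactly the check the buggy service never triggered.
    if isinstance(service, IPatientCompartmentResourceService):
        records = service.filter_to_patient(records, token_patient_id)
    return records
```

With the buggy service, a token scoped to patient p1 still receives every patient's records; the one-line interface declaration restores scoping without touching the filter itself.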
The largest category is IDOR and missing ACL checks (24+ CVEs); portal-to-clinical XSS crossed the patient-to-clinician trust boundary, enabling session hijack of admin or provider accounts.
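The IDOR pattern behind that largest category reduces to a missing object-level ownership check. A minimal sketch, with hypothetical names (the real checks live in OpenEMR's PHP ACL layer):

```python
# Toy store standing in for patient documents keyed by a guessable integer id.
DOCUMENTS = {
    101: {"owner": "p1", "body": "lab result"},
    102: {"owner": "p2", "body": "visit note"},
}

def get_document_vulnerable(doc_id, session_user):
    # IDOR: the id from the URL is trusted; ownership is never checked,
    # so any logged-in user can enumerate ids and read others' records.
    return DOCUMENTS[doc_id]

def get_document_safe(doc_id, session_user):
    # Fix: authorize the specific object against the session identity.
    doc = DOCUMENTS.get(doc_id)
    if doc is None or doc["owner"] != session_user:
        raise PermissionError("not authorized for this document")
    return doc
```

The vulnerable handler happily serves patient p2's note to a session belonging to p1; the fixed handler rejects the request before any data leaves the server.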
38 CVEs in one quarter vs. 23 from the 2018 Project Insecurity human audit; the bulk of the fixes shipped in OpenEMR 8.0.0 roughly four weeks after the first disclosure in January 2026.
AISLE PRO is now integrated into OpenEMR’s code review pipeline to catch new vulnerabilities at commit time, not post-release.
Hacker News Comment Review
Consensus: all 38 fall into OWASP basics (SQL injection, XSS, IDOR, path traversal), which commenters read as evidence that AI scanners reliably clear a low bar that under-resourced maintainers consistently miss, not that the bar was high.
Strong skepticism about target selection: OpenEMR has been documented as a security disaster since at least 2010, and at least one ex-core maintainer publicly called it irredeemable; commenters argue AISLE chose a soft target for a marketing narrative rather than a hard one.
The most unsettling thread: adversaries now have symmetric access to the same AI scanning capability, and closed-source medical software is probably equally vulnerable but unauditable, so disclosure advantages may erode faster than remediation cycles.
Notable Comments
@david_shaw: found similar vulnerabilities in OpenEMR in 2010 as a security engineer; argues the AISLE engagement demonstrates AI capability on a notoriously weak target, not on representative production software.
@zuzululu: frames AI-assisted CVE discovery as a threat-parity shift: “adversaries now have access to off the book inference” to scan widely-used OSS at scale before defenders do.
@muglug: built an OSS PHP SAST tool six years ago that could have caught many of these; argues the real gap is industry neglect of pre-incident security tooling, not tool capability.