Executive Hook: The heart‑attack risk hiding on routine chest CTs
Every day, hospitals generate a goldmine of risk signals they never use. Millions of routine chest CTs, ordered for trauma, pneumonia, or lung cancer screening, quietly capture coronary artery calcium (CAC), a powerful marker of future heart-attack risk. Startups like Bunkerhill Health, Nanox.AI, and HeartLung Technologies are now using AI to quantify CAC automatically, flag high-risk patients, and nudge timely prevention. The idea is simple: find risk where you're already looking. The execution is not. Done well, this can move the needle on cardiovascular events. Done poorly, it creates noise, clogs clinics, and inflates costs without improving outcomes.
Industry Context: From incidental finding to population-level prevention
Cardiovascular disease remains the costliest, and most preventable, killer. Traditional guidelines and risk scores are right only about half the time, leaving too many high-risk patients unrecognized and unprotected. Meanwhile, CAC is often visible on existing CTs but goes unreported when radiologists are focused on the primary indication. AI flips this from happenstance to system: consistent detection, structured scoring, and automated routing to the right next step.
Importantly, AI's signal is getting stronger across cardiology. As one line of research illustrates, "AI models have demonstrated significantly higher accuracy in predicting sudden cardiac arrest and recurrent heart attacks," with a Johns Hopkins model reporting 89% accuracy overall and 93% in high-risk age groups. While that's a different task than CAC scoring, it reinforces a broader point: machine learning is outperforming static heuristics on clinically meaningful endpoints.

The question for hospital leaders isn’t whether algorithms can see calcium. They can. It’s how to turn those pixels into fewer heart attacks—safely, equitably, and economically—without overwhelming cardiology clinics or creating perverse incentives.
Core Insight: Treat CAC-AI as a service line, not a radiology widget
The winning implementations I've seen treat AI-derived CAC like a cross-functional prevention program. Radiology detects; cardiology and primary care act; population health tracks; finance measures value. The technical lift of deploying an FDA-cleared model into PACS, VNA, or cloud matters. But business value comes from the workflow around it: patient notification, clinician action, closed-loop follow-up, and outcome measurement. As advocates put it, "Predictive AI lets care teams intervene before symptoms emerge, potentially preventing heart attacks and reducing hospitalizations." That promise is realized only when the hospital owns the end-to-end workflow.

Vendors are maturing fast. Beyond the CAC startups, companies like Cleerly Health offer plaque characterization from coronary CT angiography; academic groups at Radboud, Yale, and Columbia University Irving Medical Center are publishing independent validations. Your job is to separate “model accuracy” from “program impact.” Accurate scores that don’t change therapy—or that trigger low-value cascades—won’t pay off.
Common Misconceptions: What most organizations get wrong
- “If it’s accurate, outcomes will improve.” Accuracy is table stakes. Without patient engagement, statin initiation, and follow-up, you’ll detect risk without reducing events. A Danish population study of CAC screening showed no mortality benefit from broad, untargeted testing—AI doesn’t fix misaligned strategy.
- “This is a radiology problem.” It’s a service-line design problem. The biggest failure points are in primary care handoffs, benefit discussions, and therapy adherence.
- “Universal screening is the goal.” It isn’t. Focus where pretest probability and actionability are highest: lung CT programs, ED chest CTs in 45+ patients, and populations with low preventive care engagement.
- “Reimbursement will cover it.” Today, AI-derived CAC on non-gated CTs is rarely reimbursed as a separate service. The business case must rest on downstream value: appropriate statin starts, LDL reduction, fewer MIs and admissions, and quality metric gains.
- “Deploy and forget.” Models drift; clinical guidelines evolve. Without governance, monitoring, and periodic recalibration, performance degrades.
- “One vendor to rule them all.” Over-indexing on a single black-box platform risks lock-in. Demand open standards (DICOM SR, FHIR), data portability, and exit provisions.
Strategic Framework: A phased roadmap from pixels to prevention
- 1) Explore and align (3–6 months)
  - Define target populations: lung cancer screening cohorts, ED chest CTs, inpatients 45+, and health plan-attributed members with low preventive care use.
  - Set success metrics upfront: percent of eligible patients scored, high-risk detection rate, statin initiation within 30 days, LDL reduction at 90 days, avoided cardiology visits for low-risk patients, and MI admissions per 1,000.
  - Map workflows: where AI runs (on-prem PACS/VNA vs. cloud), who gets alerts, how patients are notified, and how orders are placed.
- 2) Vendor evaluation and contracting (6–8 weeks)
  - Compare clinical validation and external generalization, not just AUCs. Ask for institution-level pilot data and bias analyses by age, sex, and race.
  - Interoperability: DICOM ingestion; results as DICOM SR and FHIR Observation; EHR inbox routing; registry integration.
  - Total cost of ownership: licenses plus compute, storage, integration, and change management. Typical ranges to benchmark: coronary imaging AI $200K–$1M+; cardiac imaging AI $150K–$750K; ECG-based AI $100K–$500K.
  - Safety and governance: PHI handling, audit trails, model update cadence, human-in-the-loop policies, and a clear rollback plan.
  - Exit strategy: data portability, retention of model outputs, and migration assistance clauses.
- 3) Pilot with guardrails (6–12 months)
  - Start with one intake stream (e.g., lung screening) and 2–3 primary care clinics. Limit to patients 45–80 with shared decision-making protocols.
  - Use a tiered-threshold approach: auto-notify only for moderate or high CAC; suppress very low-risk results to prevent alert fatigue.
  - Embed clinical pathways: standing orders for lipid panels and statin starts; templated patient messages; pharmacist-led counseling; cardiology e-consults for edge cases.
  - Validate impact: measure PPV/NPV, time-to-therapy, adherence at 90 days, and near-term utilization shifts (appropriate cardiology referrals, avoided unnecessary testing).
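The tiered-threshold logic in the pilot phase can be sketched as a small routing function. The Agatston cut-points below (under 100 low, 100–399 moderate, 400 and above high) follow common convention, but the exact tiers, actions, and suppression rules are assumptions to calibrate with your cardiology and primary care teams:

```python
def route_cac_result(agatston_score: float) -> dict:
    """Map an AI-derived Agatston score to a notification tier.

    Cut-points and actions are illustrative placeholders; set them
    with clinical leadership before go-live.
    """
    if agatston_score == 0:
        tier, action = "zero", "suppress"       # no alert: avoid fatigue
    elif agatston_score < 100:
        tier, action = "low", "suppress"        # discrete EHR result only
    elif agatston_score < 400:
        tier, action = "moderate", "notify"     # prevention-team inbox
    else:
        tier, action = "high", "notify_urgent"  # 48-hour outreach target
    return {"tier": tier, "action": action}
```

Suppressing the zero and low tiers is the key alert-fatigue control: those results still land as discrete data, but no one is paged.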
- 4) Phased full deployment (12–24 months)
  - Expand to ED and inpatient chest CTs with closed-loop referrals and population health tracking.
  - Automate registries and dashboards: cohorting by CAC tier, care gaps, and outcomes; quarterly model performance and equity reviews.
  - Fine-tune incentives: align clinician comp and quality programs to preventive therapy initiation and adherence, not just referrals.
- 5) Innovate and integrate
  - Combine CAC with claims/labs/EHR features to create a composite risk score that outperforms each signal alone.
  - Explore step-up imaging only when management would change; avoid low-yield cascades.
  - Consider research partnerships with Johns Hopkins, Radboud, Yale, or Columbia University Irving Medical Center for independent validation and publications.
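A composite score of the kind described above can be as simple as a logistic blend of CAC tier and EHR features. The coefficients below are placeholders, not validated weights; in practice they would be fit on your own outcomes data (e.g., via logistic regression) and externally validated before any clinical use:

```python
import math

def composite_risk(cac_tier: int, ldl_mg_dl: float, age: int,
                   diabetic: bool) -> float:
    """Blend CAC with EHR features into a single 0-1 risk score.

    All coefficients are hypothetical placeholders for illustration.
    """
    z = (-6.0
         + 0.9 * cac_tier        # 0=zero, 1=low, 2=moderate, 3=high
         + 0.01 * ldl_mg_dl      # LDL cholesterol, mg/dL
         + 0.05 * age
         + 0.7 * diabetic)
    return 1 / (1 + math.exp(-z))  # logistic link keeps output in (0, 1)
```

The design point is that CAC enters as one feature among several, so a high-CAC, high-LDL diabetic scores far above a zero-CAC patient with a clean lipid panel, which is exactly the separation a single signal cannot provide.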
Implementation Playbook: What good looks like
- Data and integration
  - Where to run: co-locate inference with PACS/VNA for latency and cost control; mirror a de-identified stream to cloud for R&D.
  - Standards: DICOM in; DICOM SR + FHIR out. Results should land as discrete data in the EHR, not just a PDF.
  - Routing: results to a centralized prevention-team inbox, not to already stretched radiologists or ad hoc PCP workflows.
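Landing the score as discrete data typically means emitting something like a FHIR R4 Observation rather than a PDF. A minimal sketch, with a placeholder code (your vendor and EHR will agree on the actual LOINC/SNOMED coding and profile):

```python
import json

def cac_observation(patient_id: str, agatston: float, study_uid: str) -> str:
    """Build a minimal FHIR R4 Observation for an AI-derived CAC score.

    Illustrative only: the coding below is a placeholder, not a real
    LOINC code; use the codes and profile your integration specifies.
    """
    obs = {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [{
                "system": "http://loinc.org",
                "code": "TODO-agatston-loinc",  # placeholder, look up the real code
                "display": "Coronary artery calcium score",
            }]
        },
        "subject": {"reference": f"Patient/{patient_id}"},
        "valueQuantity": {"value": agatston, "unit": "Agatston units"},
        "derivedFrom": [{"display": f"CT study {study_uid}"}],
    }
    return json.dumps(obs)
```

Because the value arrives as a coded quantity rather than a PDF, registries, dashboards, and routing rules can all query it directly.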
- Clinician engagement
  - Co-design the thresholds and messages with cardiology and primary care. Provide quick-reference guides for "CAC 0," "moderate," and "high" scenarios.
  - Offer pharmacist-led initiation and titration for statins; use shared decision tools to address concerns.
- Governance and safety
  - Human-in-the-loop review for edge cases; require over-read sampling to monitor drift.
  - Equity guardrails: monitor performance and access by demographics; mitigate gaps with outreach and community partners.
  - Regulatory compliance: treat as Software as a Medical Device; document intended use, validation, and change control.
- Finance and measurement
  - Build the ROI model on preventable events and quality measures: increased appropriate statin use, LDL reductions, reduced MI admissions, fewer avoidable ED revisits.
  - Acknowledge today's reimbursement reality; avoid designing the program to chase low-value downstream procedures.
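The ROI framing can be made concrete with a back-of-envelope model. Every input below is an assumption to replace with your own CT volumes, detection rates, therapy uptake, actuarial event costs, and program costs:

```python
def annual_roi(scans_per_year: int, high_risk_rate: float,
               statin_start_rate: float,
               events_prevented_per_100_treated: float,
               cost_per_mi: float, program_cost: float) -> dict:
    """Rough prevention-program value model. All inputs are assumptions."""
    flagged = scans_per_year * high_risk_rate          # high-CAC patients found
    treated = flagged * statin_start_rate              # actually started on therapy
    events_avoided = treated * events_prevented_per_100_treated / 100
    savings = events_avoided * cost_per_mi             # avoided MI costs
    return {
        "patients_treated": round(treated),
        "events_avoided": round(events_avoided, 1),
        "net_value": round(savings - program_cost),
    }
```

With purely illustrative inputs of 20,000 scans/year, an 8% high-CAC rate, 50% statin initiation, 3 events prevented per 100 treated, $60K per avoided MI, and a $400K program cost, `annual_roi(20000, 0.08, 0.5, 3, 60000, 400000)` returns a net value of $1.04M. The structural point survives any specific numbers: value is a multiplicative chain, so a weak link (say, low statin initiation) collapses the whole product.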
A realistic patient story: Turning an incidental flag into prevention
A 58-year-old bus driver gets a chest CT for persistent cough. The AI flags high CAC and posts a discrete result to the EHR. A prevention nurse calls within 48 hours; a pharmacist initiates a statin after a brief tele-visit; a primary care follow-up confirms adherence and lifestyle goals. LDL drops 35% in 90 days. No invasive testing is ordered because symptoms are absent and management wouldn’t change. That’s value: one incidental finding, zero noise, measurable risk reduction.

Risk Management: Where programs fail—and how to avoid it
- Data quality and availability: Heterogeneous scanners and protocols degrade performance. Mitigation: site-specific calibration and QA sampling.
- Model degradation: Without monitoring and periodic retraining, performance drifts. Mitigation: quarterly audits, vendor SLAs on updates, local revalidation before go-live.
- Clinician resistance: Fear of overload or liability. Mitigation: clear ownership, templated pathways, and malpractice coverage language.
- Vendor lock-in: Proprietary formats and embedded viewers. Mitigation: contract for standards-based outputs, data escrow, and termination assistance.
- Ethical and regulatory hurdles: Incidental findings can widen disparities. Mitigation: proactive equity metrics, multilingual outreach, and community partnerships.
Monday-Morning Actions: Move from curiosity to commitment
- Inventory your chest CT volume by service line (ED, inpatient, lung screening) and estimate the addressable cohort 45–80 years old.
- Convene a 60-day task force (radiology, cardiology, primary care, IT, population health, compliance) to define target use cases and metrics.
- Issue an RFI to Bunkerhill Health, Nanox.AI, HeartLung Technologies, and at least one academic collaborator; require external validation and interoperability proofs.
- Design a pilot around lung screening patients with standardized follow-up orders, pharmacist support, and patient messaging templates.
- Stand up dashboards for CAC tiers, therapy initiation, LDL changes, referrals, and MI admissions; commit to publishing results internally within six months.
- Negotiate contracts with TCO transparency, equity reporting, and an exit clause with data portability and migration support.
AI-driven CAC on routine chest CTs won’t solve cardiovascular disease on its own. But as part of a disciplined prevention service line—with the right thresholds, workflows, and guardrails—it can shift resources from crisis response to risk reduction. The prize isn’t a prettier report; it’s fewer heart attacks.