Patient GenAI Isn’t the Threat. Uneducated AI Use Is.
We don’t need fewer patients using AI — we need smarter patients using AI, taught by the very systems that want better outcomes.
8:12 a.m., primary care. A patient hands over a crisp one-page summary their AI assistant helped draft: onset, duration, triggers, meds, and three precise questions. That isn’t “annoying” — that’s signal. The problem isn’t patients using GenAI. It’s patients using GenAI without an education layer — and that’s on us as leaders.
Patient AI-literacy is the new clinical safety standard. Without it, we enlarge inequity, amplify noise, and waste everyone’s time. With it, we raise comprehension, accelerate decisions, and improve adherence. Health literacy already predicts outcomes; AI-literacy is simply the 2025 upgrade.
The 5Ps: A Patient GenAI Competency Model (teach this everywhere)
• Purpose — Use GenAI for visit prep, translation, comprehension, and behavior support. Not for diagnosing, prescribing, or overriding clinicians.
• Provenance — Demand sources; prefer guidelines and peer-review; treat uncited claims as opinion.
• Prompts — Structure inputs like a clinician: onset, duration, triggers, severity (0–10), meds/allergies, relevant history, 3 questions (a structured sketch follows this list).
• Privacy — No PHI (names, MRN, addresses, images) in public tools; prefer the consented portal assistant.
• Partnership — Bring AI outputs as starting points, not verdicts. Clinicians make decisions; AI drafts and coaches.
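To make the Prompts step concrete, here is a minimal sketch of the structured pre-visit summary a consented portal assistant might collect. The field names are illustrative only (not a standard, and not FHIR); your informatics team would define the real schema.

```python
# A minimal sketch of the structured pre-visit summary behind the "Prompts"
# competency, assuming a consented portal assistant collects these fields.
# Field names are illustrative, not a standard.
from dataclasses import dataclass, field
from typing import List


@dataclass
class PreVisitSummary:
    onset: str                      # e.g. "3 days ago"
    duration: str                   # e.g. "intermittent, worse at night"
    triggers: List[str]             # e.g. ["exercise", "cold air"]
    severity_0_to_10: int           # patient-reported severity, 0-10
    meds_allergies: List[str]
    relevant_history: List[str]
    questions_for_clinician: List[str] = field(default_factory=list)

    def validate(self) -> None:
        """Reject out-of-range severity and oversized question lists."""
        if not 0 <= self.severity_0_to_10 <= 10:
            raise ValueError("Severity must be between 0 and 10.")
        if len(self.questions_for_clinician) > 3:
            raise ValueError("Keep it to 3 concise questions.")
```

Locking the structure this way is what turns a patient's free text into something a clinician can edit rather than reconstruct.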
AI-literacy isn’t replacing clinicians; it’s replacing chaos.
Why this matters: decades of evidence link lower health literacy to more hospitalizations, more ED use, and less preventive care; teach-back improves comprehension and adherence. Translating those principles to GenAI is overdue.
From Policing to Pedagogy
Banning or shaming patient AI use doesn’t stop it; it just concentrates benefits among the already empowered. Education, standardized prompts, and consented tools turn curiosity into clinical signal. In other words: stop policing; start teaching.
A Composite Vignette (equity lens)
A 62-year-old, limited-English-proficient patient uses the pre-visit template in her portal’s assistant to build a one-pager (in her language, with sources). Her visit shortens; teach-back shows she can restate the plan; she leaves with a 7-day checklist. Professional language access and teach-back are associated with lower readmissions, shorter stays, and better comprehension — GenAI simply makes those workflows available between visits.
Safety & Workflow Guardrails
• Emergency bypass: Chest pain, stroke signs, severe breathlessness → skip AI, call nurse line/911 (a toy screening sketch follows this list).
• Approved tools only: Use the in-portal assistant or a consented app (governed, logged, auditable). WHO and NASEM emphasize strong governance and documented risks, including hallucinations — so design for provenance and oversight.
• No PHI in public AI: No names, MRNs, addresses, images.
• Sources or it didn’t happen: Uncited answers = opinion; prefer guideline-linked summaries.
• Clinician is final: AI drafts; clinicians decide and sign.
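As an illustration only (not a clinical triage tool), here is a toy sketch of how the emergency-bypass and no-PHI rules might screen free text before it ever reaches a model. The phrase list and patterns are placeholders a real deployment would replace with governed, validated rules.

```python
# Toy sketch of the "emergency bypass" and "no PHI" guardrails, assuming a
# portal assistant screens free text before it reaches any model.
# Phrase lists and regex patterns are illustrative, not clinically validated.
import re

RED_FLAGS = ["chest pain", "can't breathe", "cannot breathe",
             "face drooping", "slurred speech", "severe bleeding"]

# Placeholder pattern for MRN-like identifiers; real scrubbing needs far more.
MRN_PATTERN = re.compile(r"\b(mrn[:\s]*\d{6,10})\b", re.IGNORECASE)


def screen_message(text: str) -> dict:
    lowered = text.lower()
    if any(flag in lowered for flag in RED_FLAGS):
        # Emergency bypass: skip AI entirely and route to a human/911 message.
        return {"route": "emergency",
                "message": "Possible emergency. Call 911 or your nurse line now."}
    # Basic PHI scrub before the text is sent to the consented assistant.
    scrubbed = MRN_PATTERN.sub("[MRN removed]", text)
    return {"route": "assistant", "text": scrubbed}
```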
Make It Real: Copy-Paste Patient Prompts
• Pre-visit (history builder)
“Summarize my symptoms using onset, duration, triggers, severity (0–10), meds/allergies, relevant history, and 3 concise questions for my clinician. Do not diagnose. If you explain anything, include links to reputable sources.”
• Post-visit (teach-back + adherence)
“Rewrite my care plan in plain language (≈6th-grade). Keep all numbers and medication names exact. Create a 7-day checklist with reminders, and list when to call my clinic vs when to go to urgent care.”
• Language support (precision translation)
“Translate these instructions into [language]. Keep medical terms and numbers exactly the same.”
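In a consented portal assistant, these templates can be locked server-side so patients only supply their own text. A minimal sketch, assuming an OpenAI-compatible chat API; the model name and wiring are placeholders, not a recommendation.

```python
# Minimal sketch of a "locked" post-visit template: the template lives
# server-side in the consented assistant; patients supply only their care
# plan text. Assumes an OpenAI-compatible chat API; model name is a placeholder.
from openai import OpenAI

LOCKED_POST_VISIT_TEMPLATE = (
    "Rewrite the patient's care plan in plain language (about 6th-grade). "
    "Keep all numbers and medication names exact. Create a 7-day checklist "
    "with reminders, and list when to call the clinic vs. urgent care. "
    "Do not diagnose or change the plan."
)

client = OpenAI()  # credentials configured by the health system, not the patient


def rewrite_care_plan(care_plan_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use your system's governed model
        messages=[
            {"role": "system", "content": LOCKED_POST_VISIT_TEMPLATE},
            {"role": "user", "content": care_plan_text},
        ],
    )
    return response.choices[0].message.content
```

The design point: the guardrails (plain language, exact numbers, no diagnosis) live in the template, not in the patient's memory.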
The 6-Week Health-System Playbook
• Publish (Week 1) — One-page GenAI for Patients + 5Ps on your portal and at check-in.
Metric: % of visits where the handout was offered.
• Embed (Week 2) — In-portal, consented assistant with locked pre-visit and post-visit templates.
Metric: patient utilization rate.
• Train (Week 3) — Front desk/MAs prompt patients to bring one page, not ten.
Metric: % of visits with structured summary uploaded.
• Standardize (Week 4) — AI summary → clinician edit → chart (provenance preserved).
Metric: edit acceptance rate.
• Safeguard (Week 5) — Red-flag bypass, PHI scrubbing, default language access.
Metric: bypass capture accuracy.
• Measure (Week 6) — Publish baseline and deltas quarterly:
time-to-triage, visit-length variance, after-hours inbox per clinician, teach-back score, 30-day adherence markers (med fill, PT completion, RPM compliance).
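What "publish baseline and deltas" could look like operationally, as a rough sketch: a quarterly roll-up compared against the first quarter as baseline. Column names and the file are hypothetical; real metric definitions belong to your quality and analytics teams.

```python
# Rough sketch of the Week 6 "baseline and deltas" report, assuming a visits
# extract with hypothetical column names; metric definitions are your
# quality team's call, not this script's.
import pandas as pd

visits = pd.read_csv("visits_extract.csv", parse_dates=["visit_date"])

# Quarter label for grouping, e.g. "2025Q1".
visits["quarter"] = visits["visit_date"].dt.to_period("Q").astype(str)

quarterly = visits.groupby("quarter").agg(
    median_time_to_triage_min=("time_to_triage_min", "median"),
    visit_length_variance=("visit_length_min", "var"),
    after_hours_msgs_per_clinician=("after_hours_msgs", "mean"),
    mean_teach_back_score=("teach_back_score", "mean"),
    med_fill_rate_30d=("med_filled_30d", "mean"),
)

# Deltas vs. the first (baseline) quarter, published alongside the raw numbers.
baseline = quarterly.iloc[0]
deltas = quarterly - baseline
print(quarterly.round(2))
print(deltas.round(2))
```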
Objections I Hear — And Why They Fail
• “This adds work.” Templated, structured inputs cut reconstruction time. Work moves upstream and becomes editing, not archaeology.
• “Liability risk.” Risk rises when confused patients guess. Education + consented tools + bypass rules lower risk and document diligence. WHO/NASEM both stress governance and transparency.
• “Misinformation will spread.” It already spreads. Provenance demands and locked templates contain and correct it.
The 90-Day Challenge
In three years, not teaching patient AI-literacy will read like not teaching patients how to use insulin pens. If you run a health system, publish your Patient GenAI Education Policy within 90 days — templates, guardrails, metrics included.
Your move. Are you teaching patient AI-literacy — or still policing it? If you’re doing it, drop one metric you track and link your policy. If not, what’s the real blocker — governance, tools, culture, or courage? Comment CHECKLIST for the one-page templates, tag your CIO/CMIO, and tell me: what would it take to ship a Patient GenAI Education Policy in the next 90 days?
