
When AI Meets Ethics

I built an AI ethics consultant to navigate a hospital's moral dilemma. Then I argued with its recommendations.

[Diagram: human judgment weighing a hospital dilemma alongside an AI ethics consultant]

The Scenario

A 450-bed Chicago hospital. 100,000 patients a year. One uncomfortable question:

"Should we limit appointments for uninsured patients so that insured patients—especially Medicare patients whose satisfaction scores affect our funding—can be seen faster?"

The hospital's access satisfaction score sat at 60%. Leadership wanted it higher. Their proposed solution? Restrict how many appointments uninsured, self-pay patients could book.

The Director of Patient Access pushed back. It felt wrong. But "feeling wrong" isn't an argument in a boardroom. I was asked to find one.

My Approach

Instead of just analyzing this case myself, I wanted to test a question that's becoming increasingly urgent:

Can AI help humans make better ethical decisions—or does it just automate our biases faster?

The Persona Design

I engineered an AI persona from scratch: Dr. Amina Solberg, a virtual healthcare ethics consultant built on Claude Opus 4.5.

PERSONA ARCHITECTURE

Identity: Dr. Amina Solberg
         AI Healthcare Ethics & Innovation Consultant

Expertise Layers:
├── Bioethics Foundation
│   ├── Autonomy (patient choice)
│   ├── Beneficence (promote well-being)
│   ├── Non-maleficence (avoid harm)
│   └── Justice (fair distribution)
│
├── Business Ethics Integration
│   ├── Stakeholder theory
│   ├── Corporate social responsibility
│   └── 7-step decision-making model
│
└── Healthcare Operations
    ├── Scheduling systems
    ├── Triage protocols
    └── AI/data ethics in clinical workflows

Communication Style:
- Step-by-step reasoning (transparent logic)
- Names principles explicitly
- Acknowledges complexity
- Never claims "one perfect answer"
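
The layered persona above can be expressed concretely as a system prompt. Here is a minimal sketch of how the spec might be flattened into one prompt string for an LLM call; the dictionary shape and the function name `build_persona_prompt` are my own illustration, not the project's actual code.

```python
# Hypothetical sketch: flattening the persona architecture into a single
# system-prompt string. Structure and names are illustrative.

PERSONA = {
    "identity": "Dr. Amina Solberg, AI Healthcare Ethics & Innovation Consultant",
    "expertise": {
        "Bioethics Foundation": [
            "Autonomy (patient choice)",
            "Beneficence (promote well-being)",
            "Non-maleficence (avoid harm)",
            "Justice (fair distribution)",
        ],
        "Business Ethics Integration": [
            "Stakeholder theory",
            "Corporate social responsibility",
            "7-step decision-making model",
        ],
        "Healthcare Operations": [
            "Scheduling systems",
            "Triage protocols",
            "AI/data ethics in clinical workflows",
        ],
    },
    "style": [
        "Reason step by step with transparent logic",
        "Name ethical principles explicitly",
        "Acknowledge complexity",
        "Never claim there is one perfect answer",
    ],
}

def build_persona_prompt(persona: dict) -> str:
    """Flatten the persona spec into one system-prompt string."""
    lines = [f"You are {persona['identity']}."]
    for layer, items in persona["expertise"].items():
        lines.append(f"{layer}: " + "; ".join(items) + ".")
    lines.append("Communication style: " + " ".join(s + "." for s in persona["style"]))
    return "\n".join(lines)

prompt = build_persona_prompt(PERSONA)
```

Keeping the persona as structured data rather than a hand-written paragraph makes each expertise layer easy to audit, swap, or version independently.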

The Consultation

What the AI Got Right

The hospital leadership framed this as: "uninsured patients vs. insured patients."

Dr. Solberg reframed it as: "Is the access problem actually caused by uninsured patients—or by systemic capacity issues?"

This matters. The proposed solution assumed uninsured patients were "crowding out" others. But what if the real problem was inefficient scheduling, high no-show rates, or insufficient clinic hours?

Instead of accepting the binary choice (restrict access vs. do nothing), the AI produced seven alternatives:

Option A: Implement the restriction (cap uninsured appointments)
Option B: Maintain status quo
Option C: Expand capacity (hours, providers, rooms)
Option D: Improve scheduling efficiency (no-show management, same-day reserves)
Option E: Create differentiated access pathways
Option F: Proactive insurance enrollment assistance
Option G: Hybrid approach combining D, C, and F

The AI recommended Option G—addressing the problem without discrimination.

For each bioethical principle, the AI mapped who would be affected:

Justice: Uninsured patients lose access based on ability to pay, not clinical need
Non-maleficence: Delayed care leads to disease progression, ER overcrowding
Autonomy: Restricting appointments removes patient choice
Beneficence: Leadership's goal helps some patients at the expense of others

Where I Disagreed with the AI

Dr. Solberg's analysis was structured and thorough—but it wasn't flawless.

Blind Spot #1: Coercion Risk in "Helpful" Solutions

The AI recommended proactive insurance enrollment assistance—helping uninsured patients sign up for Medicaid or marketplace plans. On the surface, this seems beneficial. But I pushed back:

"Offering enrollment assistance could unintentionally pressure patients into coverage decisions that don't align with their financial situation or values. The principle of autonomy means respecting patients' right to make their own choices—including the choice to remain uninsured."

Blind Spot #2: Feasibility Optimism

Dr. Solberg placed significant weight on capacity expansion—adding clinic hours, hiring providers, opening exam rooms. The problem? Hospitals can't expand overnight.

"Relying too heavily on expansion reflects optimism rather than a realistic, evidence-based step. The course materials warn against 'ignorance' in ethical reasoning—overlooking what's actually achievable."

My Final Recommendation

The Decision

Reject the restriction on uninsured patient appointments. Limiting access based on insurance status violates the principle of justice and risks harm, breaching non-maleficence.

The Action Plan

1. Immediate: Scheduling Audit
   Conduct a data-driven analysis to identify the actual causes of access delays; implement no-show management and standby lists.

2. Short-term: Clinical Triage System
   Ensure urgent cases are prioritized regardless of payer status, maintaining fairness across all patient groups.

3. Medium-term: Capacity Evaluation
   Evaluate expansion options based on audit findings; communicate transparently with leadership.

4. Ongoing: Quarterly Reviews
   Track satisfaction scores, wait times, and payer mix trends; adjust the approach based on outcomes.
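
The audit in step 1 could start with something as simple as measuring no-show rates per clinic, since high no-show rates are one of the candidate causes of access delays named earlier. A minimal sketch, with illustrative field names and made-up records:

```python
# Hypothetical sketch of the step-1 scheduling audit: given appointment
# records, compute no-show rates by clinic to locate where access delays
# actually originate. Data and field names are illustrative, not real.

from collections import defaultdict

def no_show_rates(appointments):
    """Return {clinic: fraction of appointments that were no-shows}."""
    totals = defaultdict(int)
    misses = defaultdict(int)
    for appt in appointments:
        totals[appt["clinic"]] += 1
        if appt["status"] == "no-show":
            misses[appt["clinic"]] += 1
    return {clinic: misses[clinic] / totals[clinic] for clinic in totals}

sample = [
    {"clinic": "primary care", "status": "completed"},
    {"clinic": "primary care", "status": "no-show"},
    {"clinic": "primary care", "status": "no-show"},
    {"clinic": "cardiology", "status": "completed"},
    {"clinic": "cardiology", "status": "completed"},
]

rates = no_show_rates(sample)
```

If a clinic's delays are driven by no-shows rather than by uninsured demand, standby lists and same-day reserves address the real cause, and the proposed restriction would not.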

The Bigger Picture

AI as Scaffold, Not Authority

Dr. Solberg helped me see the problem from multiple angles and generate options. But AI can't assess real-world feasibility or detect subtle coercion.

Human Role Isn't Going Away

The most valuable part wasn't the AI's output—it was my critique. Knowing when to agree, push back, and synthesize: that's the skill that matters.

Ethics is Operations

The best solutions aren't "more ethical" in a pure sense—they're ethically sound AND practically achievable.

This project was completed for LRMS1-UC: Business Ethics at NYU School of Professional Studies, December 2025.

Interested in AI ethics, healthcare technology, or persona engineering?

Let's Connect