Case Study · Grade: A

When AI Meets Ethics

I built an AI ethics consultant to navigate a hospital's moral dilemma. Then I argued with its recommendations.

The Scenario

A 450-bed Chicago hospital. 100,000 patients a year. One uncomfortable question:

"Should we limit appointments for uninsured patients so that insured patients—especially Medicare patients whose satisfaction scores affect our funding—can be seen faster?"

The hospital's access satisfaction score sat at 60%. Leadership wanted it higher. Their proposed solution? Restrict how many appointments uninsured, self-pay patients could book.

The Director of Patient Access pushed back. It felt wrong. But "feeling wrong" isn't an argument in a boardroom. I was asked to find one.

Approach

Instead of just analyzing this case myself, I wanted to test a question that's becoming increasingly urgent: can AI help humans make better ethical decisions—or does it just automate our biases faster?

I engineered an AI persona from scratch: Dr. Amina Solberg, a virtual healthcare ethics consultant built on Claude Opus 4.5.

PERSONA ARCHITECTURE

Identity: Dr. Amina Solberg
AI Healthcare Ethics & Innovation Consultant

Expertise Layers:
├── Bioethics Foundation
│   ├── Autonomy (patient choice)
│   ├── Beneficence (promote well-being)
│   ├── Non-maleficence (avoid harm)
│   └── Justice (fair distribution)
├── Business Ethics Integration
│   ├── Stakeholder theory
│   ├── Corporate social responsibility
│   └── 7-step decision-making model
└── Healthcare Operations
    ├── Scheduling systems
    ├── Triage protocols
    └── AI/data ethics in clinical workflows

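The expertise layers above can be flattened into a system prompt in code. A minimal sketch follows; the layer names come from the architecture tree, but the prompt wording and the `build_system_prompt` helper are hypothetical, not the actual prompt used:

```python
# Hypothetical sketch: assembling the Dr. Solberg persona as a system prompt.
# The expertise layers mirror the architecture tree above; wording is illustrative.

EXPERTISE_LAYERS = {
    "Bioethics Foundation": [
        "Autonomy (patient choice)",
        "Beneficence (promote well-being)",
        "Non-maleficence (avoid harm)",
        "Justice (fair distribution)",
    ],
    "Business Ethics Integration": [
        "Stakeholder theory",
        "Corporate social responsibility",
        "7-step decision-making model",
    ],
    "Healthcare Operations": [
        "Scheduling systems",
        "Triage protocols",
        "AI/data ethics in clinical workflows",
    ],
}

def build_system_prompt(layers: dict[str, list[str]]) -> str:
    """Flatten the expertise tree into a single persona system prompt."""
    lines = [
        "You are Dr. Amina Solberg, an AI Healthcare Ethics & Innovation Consultant.",
        "Ground every recommendation in the following expertise:",
    ]
    for layer, topics in layers.items():
        lines.append(f"- {layer}: " + "; ".join(topics))
    lines.append("Always surface alternatives before recommending a single option.")
    return "\n".join(lines)

prompt = build_system_prompt(EXPERTISE_LAYERS)
```

In practice, this string would be passed as the system prompt of a chat-completion call; the exact API shape and model identifier depend on the deployment.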
What the AI Got Right

The hospital leadership framed this as: "uninsured patients vs. insured patients."

Dr. Solberg reframed it as: "Is the access problem actually caused by uninsured patients—or by systemic capacity issues?"

This matters. The proposed solution assumed uninsured patients were "crowding out" others. But what if the real problem was inefficient scheduling, high no-show rates, or insufficient clinic hours?

Instead of accepting the binary choice, the AI produced seven alternatives:

A. Implement the restriction (cap uninsured appointments)
B. Maintain status quo
C. Expand capacity (hours, providers, rooms)
D. Improve scheduling efficiency (no-show management)
E. Create differentiated access pathways
F. Proactive insurance enrollment assistance
G. Hybrid approach combining D, C, and F

The AI recommended Option G: addressing the access problem without discriminating by insurance status. Its assessment of the proposed restriction against the four bioethics principles explains why:

Justice: Uninsured patients lose access based on ability to pay, not clinical need

Non-maleficence: Delayed care leads to disease progression, ER overcrowding

Autonomy: Restricting appointments removes patient choice

Beneficence: Leadership's goal helps some patients at the expense of others

Where I Disagreed

Blind Spot #1: Coercion Risk

The AI recommended proactive insurance enrollment assistance—helping uninsured patients sign up for Medicaid or marketplace plans. On the surface, this seems beneficial. But I pushed back:

"Offering enrollment assistance could unintentionally pressure patients into coverage decisions that don't align with their financial situation or values."

Blind Spot #2: Feasibility Optimism

Dr. Solberg placed significant weight on capacity expansion. The problem? Hospitals can't expand overnight.

"Relying too heavily on expansion reflects optimism rather than a realistic, evidence-based step."

Recommendation

Reject the restriction on uninsured patient appointments. Limiting access based on insurance status violates justice and risks violating non-maleficence through delayed care.

01 Immediate: Scheduling Audit
Identify actual causes of access delays; implement no-show management and standby lists.

02 Short-term: Clinical Triage
Prioritize urgent cases regardless of payer status.

03 Medium-term: Capacity Evaluation
Evaluate expansion based on audit findings.

04 Ongoing: Quarterly Reviews
Track satisfaction scores, wait times, and payer mix trends.

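The quarterly review step could be operationalized as a simple quarter-over-quarter check on the three metrics named above. This is a hypothetical sketch; the field names and the 10% payer-mix threshold are assumptions for illustration, not figures from the case:

```python
# Hypothetical quarterly review check on the metrics named in step 04:
# access satisfaction, average wait time, and payer mix. Thresholds illustrative.

from dataclasses import dataclass

@dataclass
class Quarter:
    satisfaction: float     # % of patients satisfied with access
    avg_wait_days: float    # mean days to next available appointment
    uninsured_share: float  # fraction of visits that are self-pay/uninsured

def review(prev: Quarter, curr: Quarter) -> list[str]:
    """Flag concerning quarter-over-quarter shifts."""
    flags = []
    if curr.satisfaction < prev.satisfaction:
        flags.append("satisfaction declined")
    if curr.avg_wait_days > prev.avg_wait_days:
        flags.append("waits lengthened")
    # A falling uninsured share may signal de facto access restriction.
    if curr.uninsured_share < prev.uninsured_share * 0.9:
        flags.append("uninsured share dropped >10%")
    return flags

q1 = Quarter(satisfaction=60.0, avg_wait_days=21.0, uninsured_share=0.12)
q2 = Quarter(satisfaction=63.0, avg_wait_days=18.0, uninsured_share=0.10)
print(review(q1, q2))  # only the payer-mix flag fires: 0.10 < 0.12 * 0.9
```

The payer-mix check matters most here: satisfaction and wait times can improve precisely because uninsured patients are being quietly squeezed out, which is the outcome the recommendation rejects.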
Takeaway

AI as Scaffold, Not Authority

Dr. Solberg helped me see the problem from multiple angles and generate options. But AI can't assess real-world feasibility or detect subtle coercion.

The Human Role Isn't Going Away

The most valuable part wasn't the AI's output—it was my critique. Knowing when to agree, push back, and synthesize: that's the skill that matters.

Ethics is Operations

The best solutions aren't "more ethical" in a pure sense—they're ethically sound AND practically achievable.

Completed for LRMS1-UC: Business Ethics at NYU School of Professional Studies, December 2025.
