When AI Meets Ethics
I built an AI ethics consultant to navigate a hospital's moral dilemma. Then I argued with its recommendations.
A 450-bed Chicago hospital. 100,000 patients a year. One uncomfortable question:
"Should we limit appointments for uninsured patients so that insured patients—especially Medicare patients whose satisfaction scores affect our funding—can be seen faster?"
The hospital's access satisfaction score sat at 60%. Leadership wanted it higher. Their proposed solution? Restrict how many appointments uninsured, self-pay patients could book.
The Director of Patient Access pushed back. It felt wrong. But "feeling wrong" isn't an argument in a boardroom. I was asked to find one.
Instead of just analyzing this case myself, I wanted to test a question that's becoming increasingly urgent: can AI help humans make better ethical decisions—or does it just automate our biases faster?
I engineered an AI persona from scratch: Dr. Amina Solberg, a virtual healthcare ethics consultant built on Claude Opus 4.5.
PERSONA ARCHITECTURE

Identity: Dr. Amina Solberg
AI Healthcare Ethics & Innovation Consultant

Expertise Layers:
├── Bioethics Foundation
│   ├── Autonomy (patient choice)
│   ├── Beneficence (promote well-being)
│   ├── Non-maleficence (avoid harm)
│   └── Justice (fair distribution)
│
├── Business Ethics Integration
│   ├── Stakeholder theory
│   ├── Corporate social responsibility
│   └── 7-step decision-making model
│
└── Healthcare Operations
    ├── Scheduling systems
    ├── Triage protocols
    └── AI/data ethics in clinical workflows
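For readers who want to see what "engineering a persona" looks like in practice, here is a minimal sketch using the Anthropic Python SDK. The system prompt below is my paraphrase of the expertise layers above, not the actual prompt behind Dr. Solberg, and the model identifier string is an assumption for illustration.

```python
# Minimal sketch: instantiating an ethics-consultant persona as a system prompt.
# The prompt text and model string are illustrative assumptions, not the real ones.
import anthropic

SOLBERG_SYSTEM_PROMPT = """You are Dr. Amina Solberg, an AI healthcare ethics
and innovation consultant. Ground every recommendation in the four bioethics
principles (autonomy, beneficence, non-maleficence, justice), stakeholder
theory, and a 7-step ethical decision-making model. Reframe binary dilemmas,
surface systemic causes, and list concrete alternatives before recommending one."""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-5",  # assumed model identifier for Claude Opus 4.5
    max_tokens=2048,
    system=SOLBERG_SYSTEM_PROMPT,
    messages=[
        {
            "role": "user",
            "content": (
                "A 450-bed hospital wants to restrict appointment slots for "
                "uninsured, self-pay patients to raise its 60% access "
                "satisfaction score. Analyze the proposal and propose alternatives."
            ),
        }
    ],
)

print(response.content[0].text)
```

Putting the principles and the decision-making model in the system prompt, rather than in each question, is what keeps the persona's framing consistent across the whole consultation.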
The hospital leadership framed this as: "uninsured patients vs. insured patients."
Dr. Solberg reframed it as: "Is the access problem actually caused by uninsured patients—or by systemic capacity issues?"
This matters. The proposed solution assumed uninsured patients were "crowding out" others. But what if the real problem was inefficient scheduling, high no-show rates, or insufficient clinic hours?
Instead of accepting the binary choice, the AI produced seven alternatives, labeled Options A through G.
The AI recommended Option G: addressing the access problem without discriminating by insurance status. Its reasoning mapped the proposed restriction onto the four bioethics principles:
Justice: Uninsured patients lose access based on ability to pay, not clinical need
Non-maleficence: Delayed care leads to disease progression, ER overcrowding
Autonomy: Restricting appointments removes patient choice
Beneficence: Leadership's goal helps some patients at the expense of others
Blind Spot #1: Coercion Risk
The AI recommended proactive insurance enrollment assistance—helping uninsured patients sign up for Medicaid or marketplace plans. On the surface, this seems beneficial. But I pushed back:
"Offering enrollment assistance could unintentionally pressure patients into coverage decisions that don't align with their financial situation or values."
Blind Spot #2: Feasibility Optimism
Dr. Solberg placed significant weight on capacity expansion. The problem? Hospitals can't expand overnight.
"Relying too heavily on expansion reflects optimism rather than a realistic, evidence-based step."
The final recommendation: reject the restriction on uninsured patient appointments. Limiting access based on insurance status violates justice and, by delaying care, non-maleficence.
AI as Scaffold, Not Authority
Dr. Solberg helped me see the problem from multiple angles and generate options. But AI can't assess real-world feasibility or detect subtle coercion.
The Human Role Isn't Going Away
The most valuable part wasn't the AI's output—it was my critique. Knowing when to agree, push back, and synthesize: that's the skill that matters.
Ethics is Operations
The best solutions aren't "more ethical" in a pure sense—they're ethically sound AND practically achievable.
Completed for LRMS1-UC: Business Ethics at NYU School of Professional Studies, December 2025.