Strengthening Mentorship & Retention
Leadership wanted a mentoring program to fix retention. The data pointed to something they weren't ready to hear — and saying it required as much strategy as finding it.
ORBC1-UC: Organizational Behavior · NYU · March 2026
I accepted a six-month assignment as an organizational behavior consultant at Global Growers, a midsize creative agency. Leadership had a clear ask: help them build a mentoring program for new hires. They believed it would improve retention.
Before recommending anything, I gathered feedback from three groups: current leaders, current employees, and former employees.

Leaders
- Some provide short-term cross-functional projects
- Some encourage employees to use the company's online learning program at their own pace
- All review internal promotion candidates quarterly, but have not found suitable internal candidates

Employees
- Employees do not consistently discuss career development with managers
- Employees feel they work in silos
- Leaders support conference attendance, but employees pay all costs themselves

Former employees
- They "did not find opportunities for growth"
- Managers "prioritized completing tasks instead of developing talent"
- Leaders "frequently required that employees work on weekends" — burnout was widespread
Leadership believed a new mentoring program would fix sentiment and retention. But the data told a more complicated story.
My central argument: mentoring alone won't work. The deeper problem is a weak development system — not a lack of employee ambition.
"Global Growers’ proposed mentoring program could improve retention, but it will not succeed if leadership treats mentoring as a stand-alone solution."
I recommended three structural changes grounded in Bauer & Erdogan's organizational behavior framework.
The assignment required using Gemini to refine messaging for an audience that might resist criticism. The question wasn't whether the analysis was right — it was how to deliver it without making leadership shut down.
How do you tell leadership their development system is broken — without them hearing "you failed"?
This is where AI's role mirrors the organizational problem itself: both require navigating the gap between what needs to be said and what people are ready to hear.
Two prompts. Three versions. A progression from direct criticism to strategic framing.
Version 1 (original draft): "Will not succeed if leadership treats mentoring as a stand-alone solution."
Clear and evidence-based, but could trigger defensiveness in leadership.

Version 2 (after Prompt 1): "We have an opportunity to integrate mentoring into a more cohesive development framework."
Diplomatically effective but lost its edge — reads like a consulting deck, not a recommendation.

Version 3 (after Prompt 2): "Could improve retention, but its impact will be limited unless it is supported by a more coordinated employee development strategy."
Acknowledges leadership's intent while redirecting toward a broader strategy. Evidence and citations intact.
Prompt 1: Tone refinement
"I am preparing a one-page recommendation for a leadership team that may not be fully open to criticism. Please review the draft below and help me improve the messaging for an audience that could be defensive or resistant to change."
"Identify any wording that sounds too harsh, overly critical, or accusatory. Revise the tone so it sounds professional, constructive, and persuasive. Strengthen the message without weakening the recommendations. Preserve the core evidence and textbook citations."
Gemini identified four "red flags" and rewrote using "Partnership Language." Polished — but it went too far. The revision read more like a consulting deck than a direct recommendation.
Prompt 2: Pushing back
"Please revise the recommendation again so it sounds diplomatic and leadership friendly, but still specific and evidence based. Keep the core concerns about inconsistent development, siloed work, weak internal advancement, and burnout. Do not make the message vague or overly soft. Keep the direct quotes and APA-style citations."
This produced a version that struck the right balance: it framed problems as systemic rather than personal, used the textbook citations as neutral third-party authority, and still named the specific issues.
"Global Growers’ proposed mentoring program could improve retention, but its impact will be limited unless it is supported by a more coordinated employee development strategy."
The key shift: instead of saying leadership's approach "will not succeed," the revision acknowledges their intent while redirecting toward a broader strategy. The evidence and citations remain intact, but the framing changed from criticism to optimization.
System, not people
Instead of "leaders required weekends," the revision discusses "workload expectations" — focusing on the process rather than blaming individuals.
Citations as neutral authority
Bauer & Erdogan citations serve as expert backup, making recommendations feel like established best practices rather than personal criticism.
Specific, not soft
The revision still names the problems: inconsistent development, silos, burnout, weak advancement. It just frames them as solvable business challenges.
AI Over-Softens by Default
Gemini's first instinct was to eliminate friction entirely — turning direct statements into diplomatic platitudes. Useful for a first pass, but the human's job is to calibrate. Knowing when to push back on the AI is the skill.
Two Prompts Are Better Than One
The first prompt gave Gemini the wrong constraints ("help me soften this"). The second prompt corrected course ("stay specific and evidence-based"). Iterative prompting produced a significantly better result than a single attempt.
AI as Revision Partner, Not Author
The original analysis was mine — the argument, the evidence, the recommendations. Gemini's role was to pressure-test the tone for a specific audience. That's where AI adds the most value: refining human work, not replacing it.
Completed for ORBC1-UC: Organizational Behavior at NYU School of Professional Studies, March 2026.