Treat AI Like a New Hire — How to Be a Better Boss to Both
Let me start by saying that I came to technology early and eagerly.
My father, Dr. Alberto José Jiménez, was a Cuban immigrant, an IBM engineer, and one of the earliest personal computer game creators in the country. I grew up in a household where the Radio Shack TRS-80 on the home desk wasn’t a mystery — it was a playground. I was writing programs in BASIC before most of my peers knew what a cursor was. When the internet arrived, I had already spent years dialing into BBSes — I was already there. When social media emerged, I was an early adopter. When mobile changed everything, I adapted. New technology has never frightened me. If anything, I’ve always felt a pull toward it — that particular excitement of being at the edge of something that hasn’t fully formed yet.
So when ChatGPT arrived, I didn’t approach it with caution. I approached it the way I approach most new tools: with genuine curiosity and a certain confidence that I’d figure it out. And I did. What followed was genuinely useful — I could draft faster, think through problems from new angles, and generate options I wouldn’t have reached on my own. For someone who runs two businesses, facilitates complex human conversations for a living, and is perpetually working on seventeen things at once, the productivity gains were real. I was impressed.
And then I started paying attention.
The more I read — about who built these systems, how they were built, and what got sacrificed along the way — the more I felt a familiar discomfort. The kind that shows up when values and behavior aren’t quite aligned. Karen Hao’s *Empire of AI* laid it out with unflinching clarity: the race to build AI at scale has come at enormous cost to workers in the Global South, to the environment, and to the democratic oversight we’d need to govern it responsibly. Dr. Joy Buolamwini’s *Unmasking AI* brought it closer to home — showing how bias gets encoded into systems that then make consequential decisions about real people’s lives. Hiring. Housing. Healthcare. Criminal justice. The systems we trust with the highest-stakes moments are built on data that reflects the world’s inequities right back at us.
I couldn’t unknow any of that. And I couldn’t keep using tools uncritically when I knew what I knew.
Here’s what made this moment different from every other technology transition I’ve navigated: being an early adopter wasn’t enough. Being fast and capable with a tool doesn’t matter if the tool itself is causing harm. My father taught me that technology is a form of creative power — and power, as any good leader knows, comes with responsibility.
So I made a deliberate shift. I moved my primary AI work to Claude, built by Anthropic — a company whose stated mission centers on AI safety and whose model is designed with Constitutional AI principles, meaning it’s trained to be helpful, harmless, and honest. Was it a perfect solution? No. But it was a values-aligned one. And for someone who has spent years helping organizations close the gap between stated values and actual behavior, that distinction matters enormously.
Managing AI Like a Better Boss
Here’s what I’ve learned since making that shift: the way you work with AI reveals a lot about the way you manage.
I treat Claude the way I’d treat a brilliant new team member — one who is extraordinarily capable, deeply knowledgeable, and also new to a particular context, goal, client, culture, and set of values. That means I don’t just issue commands and accept output. I bring the same behaviors I’d bring to any strong working relationship.
The five management behaviors that define effective, ethical leadership translate directly to working well with AI:
1. Clarity of context and intention.
Strong managers don’t leave their team guessing about purpose. They explain the “why” behind the work, not just the “what.” With AI, this means giving rich context — who this is for, what it needs to accomplish, what tone serves it, what constraints matter. The more we bring to the prompt, the more the output reflects our actual vision and goals rather than a generic approximation of them.
2. Active oversight, not passive consumption.
Ethical leaders don’t delegate and disappear. They stay engaged, ask questions, and review what comes back with a critical eye. I read everything AI produces before it goes anywhere near a colleague or client. I fact-check. I rewrite. I push back when something feels off. The output is a starting point, not a finished product — and I’m the one accountable for the final results.
3. Honest feedback loops.
Good managers tell their team when something isn’t working. They don’t accept mediocre output to avoid friction. I do the same with AI — I redirect, refine, and ask again. “That’s not quite right — here’s what I actually need” is a complete sentence, and it works just as well in a Claude conversation as it does in a one-on-one.
4. Values as a filter, not an afterthought.
The strongest leaders I know don’t treat ethics as a compliance exercise. Their values show up in how they make decisions, not just in what they say they believe. Choosing which AI tools to use, and how to use them, is itself an ethical act. A team’s values can also be an effective filter for the content and tone of the work. Every tool, every sentence we adopt reflects something about what we’re willing and able to support.
5. Investment in the relationship over time.
Skilled managers build relationships that get better with context and history. The more a team member understands the vision and goals, the more useful they become. Working with AI is no different — the more I bring real work with specific challenges, the more the collaboration sharpens. It’s a relationship worth tending.
What Humans Bring That AI Cannot
All of this brings me to something my colleague John Honingford wrote about recently — a framework called EPOCH that he encountered through MIT Sloan researcher Isabella Loaiza that stopped me mid-scroll.
EPOCH: five durable human skills that don’t compete with AI but complement it.
Empathy
Presence and persuasion
Ownership
Creativity
Humility
John frames these not as “soft skills” — a phrase that has always undersold the hardest work humans do — but as human-power skills. The capabilities that keep us relevant, valuable, and irreplaceable in a workplace that AI is reshaping in real time.
As a coach and facilitator, I’d argue these five skills are also the foundation of everything I do. They’re what make a room feel safe enough for real conversation. They’re what turn a workshop from information delivery into actual transformation.
Empathy is how I hold someone’s experience without collapsing into it or rushing past it. Persuasion — real persuasion, not manipulation — is how I help leaders see a new way forward when they’re stuck in old patterns. Ownership is what I model when I take responsibility for a session that didn’t land the way I intended. Creativity is how I design learning experiences that meet people where they are, not where I wish they were. And humility is the practice underneath all of it — knowing I don’t have all the answers, staying curious, and letting the work teach me.
These skills aren't abstract ideals — they show up in real work, sometimes the hard way. Recently I was designing an e-learning program for a client and leaned into AI-generated images and AI voiceovers to move faster. The client loved the concept in preview. Then I ran a pilot session with a diverse group of staff members — real humans with real eyes and real ears — and the feedback was swift and unanimous. They hated the images. They found errors I had missed entirely: an AI had cheerfully dropped the Chrysler Building into a San Francisco skyline, apparently unbothered by geography. The voiceovers felt hollow and strange. Nobody could name exactly why, but everyone could feel it. I spent the next several hours replacing AI-generated images with actual photographs and re-recording the narration in my own voice. The session landed completely differently. The lesson was clarifying: when people can tell, and it becomes a distraction, it isn't a good use of AI — full stop.

Every one of the EPOCH skills was implicated in that moment. Empathy told me to listen to what the room was actually experiencing. Ownership meant not defending the shortcut I had taken. Creativity meant finding a better solution rather than a faster one. Humility meant admitting that moving quickly had cost the work its integrity. And persuasion — real persuasion — requires a human voice that people can feel, not one assembled from data points. AI is a powerful collaborator. It is not a replacement for the judgment that knows when to use it and when to put it down.
AI can help me draft, research, structure, and synthesize. It cannot sit with someone in their uncertainty and know when to speak and when to stay quiet. It cannot read the room. It cannot care.
This remains ours.
The conversation about AI is often framed as a binary — threat or salvation, replace or be replaced. I’ve found a third way: use it with intention, manage it with the same ethical clarity you’d bring to any leadership relationship, and keep sharpening the human skills that no model can replicate.
The books are worth reading. The frameworks are worth learning. And the question John ends with is worth sitting with:
How are you using and building your EPOCH skills?
I’m curious what’s true for you.
*For more on the EPOCH framework, read John Honingford’s full post on LinkedIn.*
Tres Jiménez is a coach, facilitator, and consultant working at the intersection of leadership, organizational culture, and human development.