
At the end of 2024, we made seven predictions about the year ahead in AI. Now, halfway through the year, one has come convincingly true: the rise of digital AI workers, or what some are calling AI teammates.
These are not just smarter software tools or ambient copilots tucked into the corners of your browser. They are full-fledged digital colleagues, with names and job titles, voices and avatars, Slack log-ins and performance metrics. Alice is an outbound sales rep from 11x. Olivia from Paradox recruits candidates for clients. Conversica offers entire teams of customer service agents. Ardent’s own portfolio includes several companies offering AI employees: e:cue has Cue, an embedded AI data expert for executives, and Drillbit has Mason, an AI phone receptionist for home services businesses. What once belonged to the realm of speculative fiction is now quietly embedded in company org charts.
As the number of AI teammates grows, and with them a cottage industry of articles, panels, and white papers, we want to offer our own view on the questions emerging from this moment. What exactly is an AI teammate? How do they fit into workplaces designed for, and in many cases still beholden to, humans? What becomes of the manager, the IT lead, the teammate on the other end of the Zoom call?
In this series, we explore those questions through the lifecycle of Emma, an AI customer support agent at a fictional airline. Emma is an AI teammate, but like her human colleagues, she is hired, onboarded, trained, evaluated, and supported by a team. By following her journey, we hope to offer a grounded and human look at a trend that’s captivating startups, enterprises, and everyday observers alike:
- Part 1 lays the foundation for what AI teammates like Emma are.
- Part 2 explores how AI teammates are hired.
- Part 3 discusses onboarding, training, and promotions.
- Part 4 delves into human-AI management.
- Part 5 concludes and covers a few edge cases that we are tracking.
A quick note on terminology
Throughout this series, we use the term “AI teammates” to refer to digital AI workers. We considered alternatives — “virtual workers,” “digital workers,” and a handful of others — but found each came with baggage. “Virtual workers,” for instance, evokes the millions of remote employees logging on from their homes in the thick of the pandemic. “Digital workers” often refers, in corporate usage, to human employees working in tech-adjacent roles. The field remains too new for consensus, but for now, “AI teammate” feels closest to what these systems are becoming.
What is an AI teammate? How does it differ from a co-pilot or an agentic workflow?
Before we can understand what an AI teammate does, we need to clarify what, exactly, one is. In the modern workplace, AI exists on a continuum, ranging from passive assistant to autonomous actor.
At one end sit the co-pilots: helpful, unobtrusive, and firmly under human control. These are the tools embedded in everyday software, suggesting email replies in Outlook or summarizing meeting notes in Google Docs. They nudge productivity upward, but the human remains firmly in the driver’s seat.
A step further along the spectrum are agentic workflows — systems in which AI agents follow a defined sequence of actions to complete a specific task. These workflows might allow for modest branching logic or decision trees, but at heart they remain structured, linear, and bounded. Think of them as flowcharts with a little improvisation at the edges.
AI teammates, however, occupy a different category altogether. They are not merely executing a script or selecting from a menu of preset actions. They operate in ambiguity. They process varied inputs, make decisions in real time, adjust strategies on the fly, and choose their own tools. Their behavior resembles that of a human colleague — albeit one still finding their footing — rather than that of a machine.
Take Emma, our fictional airline customer support agent. If a traveler contacts the airline after their flight has been canceled, Emma’s role — and level of autonomy — depends on her modality.
- As a co-pilot, Emma is a behind-the-scenes assistant, suggesting responses to a human agent fielding a chat inquiry.
- As an agentic workflow, she might act more independently. Upon receiving the customer’s email, she could verify the cancellation, rebook the traveler on the next available flight, and issue a hotel voucher. Efficient, yes — but still constrained by rigid logic.
- As an AI teammate, Emma would go further. She might handle a phone call directly, initiate the rebooking and hotel arrangements, and — crucially — tailor her response based on the traveler’s loyalty status, preferences, or travel history. She may even modulate her tone, style, and personality to better suit the customer’s emotional state. Emma is not merely executing a plan; she’s making her own.
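The distinction above can be made concrete with a minimal sketch. This is purely illustrative, not code from any real product: the function names, request fields, and logic are all hypothetical. The point is the structural difference — a workflow runs the same preset steps for every request, while a teammate-style agent composes its own plan from context.

```python
# Hypothetical sketch: fixed agentic workflow vs. an "AI teammate" that
# plans dynamically. All names and fields here are invented for illustration.

def agentic_workflow(request: dict) -> list[str]:
    """Fixed sequence: the same steps run for every canceled-flight request."""
    steps = ["verify_cancellation", "rebook_next_flight", "issue_hotel_voucher"]
    return [f"executed: {step}" for step in steps]

def ai_teammate(request: dict) -> dict:
    """Chooses actions and tone from context rather than a preset script."""
    plan = []
    if request.get("flight_canceled"):
        plan.append("rebook_next_flight")
    # Tailor the response to the traveler, not just the event.
    if request.get("loyalty_tier") == "gold":
        plan.append("offer_lounge_access")
    if request.get("stranded_overnight"):
        plan.append("issue_hotel_voucher")
    tone = "empathetic" if request.get("customer_upset") else "neutral"
    return {"actions": plan, "tone": tone}

# The workflow ignores everything but the trigger; the teammate adapts.
print(agentic_workflow({"flight_canceled": True}))
print(ai_teammate({"flight_canceled": True, "loyalty_tier": "gold",
                   "stranded_overnight": True, "customer_upset": True}))
```

In practice the gap is wider still — a real teammate-style system would select tools and revise its plan mid-conversation — but even this toy version shows why the two belong in different categories.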
To be sure, the boundaries among these categories are increasingly porous. Co-pilots are acquiring agentic traits, agentic workflows are gaining complexity, and many so-called “AI teammates” today are little more than sophisticated wrappers around agentic workflows.
Still, design matters. An AI teammate's capabilities, freedom to act, and degree of personality shape how it fits into a workplace and relates to its human peers. That, in turn, is the subject of our next section.
What identity and design choices need to be made for AI teammates?
If you’ve ever named your Roomba or thanked Siri for directions, you’ve experienced the instinct to anthropomorphize technology — to grant personality to the inanimate. With AI teammates, that impulse becomes more than a quirk of psychology. It becomes a design decision.
Designing an AI teammate involves a series of practical, even bureaucratic, choices. Does the AI have a name? A voice? A face? Will it attend meetings, speak aloud, send emails from its own address? Or will it live silently in the backend, operating invisibly through centralized systems? These decisions, seemingly cosmetic, shape how human coworkers and customers interact with the AI day to day, and whether they treat it as colleague, contractor, or code.
Consider Emma again. One version of her is largely anonymous: she has no name or avatar, no email signature or presence in the company directory. Her work — processing refunds, rebookings, and updates — happens behind the scenes. Customers receive messages from a generic customer service account. Few, if any, know she exists.
Another version of Emma is more visible, and more humanlike. She has a name and profile photo, her own voice and phone line. She joins team meetings. Her status updates sound familiar: “I’ve resolved 80% of refund requests from last night’s cancellation.” Her coworkers speak about her the way they would a junior colleague: “Let’s have Emma handle it.”
How an AI teammate is designed ultimately determines how it is perceived and how it is treated. A teammate built to resemble a tool will be managed like software: monitored, patched, and replaced when outdated. But one designed to resemble a peer may be granted something closer to trust. And in time, perhaps, responsibility.

