August 14, 2025

AI Teammates, Part 3: Onboarding, Training, and Promotion

This is Part 3 of our AI Teammates series, where we explore various topics in the lifecycle of an AI teammate through the lens of Emma, an AI customer support teammate at a fictional airline. Part 1 — covering foundational concepts and design choices for AI teammates — can be found here. Part 2 (hiring AI teammates) can be found here.

by Ardent


How do you “onboard” an AI teammate? What access and permissions does it have?

Once you’ve decided to bring on an AI teammate like Emma, the next step is onboarding. For human employees, this might mean getting a company laptop and setting up logins. For Emma, it’s less about office tours and more about defining her digital identity, configuring her access, and integrating her into existing workflows. Increasingly, IT is the new HR for AI teammates — responsible for getting them up and running securely and effectively.

As we discussed earlier, one of the first steps is deciding what kind of “person” Emma will be. Let’s say the airline wants her to have a full identity: an avatar and headshot, a synthetic voice, a phone number, an email address, and a Slack handle. If Emma is built in-house, the company’s IT function will be the one to provision these elements, just as HR and facilities might do for a new human hire.

Next comes access. Emma won’t be effective unless she’s plugged into the right systems. For a customer support role, she’ll need to access the airline’s ticketing platform, passenger itinerary database, refund system, and other core tools. She’ll also need communication channels like Slack, email, and a phone line so she can engage with both customers and colleagues.

But access isn’t binary — it’s about permissioning. Just like a junior employee, Emma should have the ability to do her job without being able to inadvertently cause harm. She should be able to rebook a passenger and issue a refund, but not bump another customer from a flight or access sensitive HR systems. Setting these permissions correctly is essential not only for security, but for trust — both among human coworkers and the customers she interacts with.
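In code, this kind of scoped, deny-by-default permissioning might look something like the following sketch. The role and action names here are purely illustrative, not taken from any real airline system:

```python
# Minimal sketch of scoped permissions for an AI teammate.
# Role and action names are hypothetical examples.

ROLE_PERMISSIONS = {
    "support_agent": {
        "rebook_passenger",
        "issue_refund",
        "read_itinerary",
    },
    # Deliberately absent: "bump_passenger", "read_hr_records"
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: an action is permitted only if explicitly granted."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("support_agent", "issue_refund"))    # True
print(is_allowed("support_agent", "bump_passenger"))  # False
```

The key design choice is the deny-by-default posture: anything not explicitly granted to Emma's role is off-limits, which keeps a misbehaving or manipulated agent inside a known blast radius.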

How do you “train” an AI teammate?

After Emma has been onboarded and granted access to the tools and systems she needs, the next step is training her to actually do the job. For a human customer support agent, this might involve shadowing a colleague, studying company policies, and learning how to manage tricky customers. For AI teammates, it’s not so different — just more technical under the hood.

Training an AI teammate starts by feeding the underlying model examples of what “good work” looks like. In Emma’s case, that might include transcripts from past support calls, documentation of standard resolution workflows, escalation paths, and internal FAQs. She needs to learn the airline’s approach to common issues like delays, cancellations, rebookings, accommodations, and lost luggage. And just as importantly, she needs to learn how to communicate with customers in a tone that reflects the airline’s brand.

If a customer calls about a cancelled flight, Emma should be able to diagnose the issue, recommend a solution, and deliver the message with appropriate empathy and professionalism. Just like a new hire, she may stumble early on, but she can improve with targeted feedback and coaching.

That feedback loop is crucial, and it doesn’t stop after the first few days. Training AI teammates is an ongoing process. Take Ada, an AI customer service company: their team regularly holds “coaching sessions” for their AI agents, reviewing transcripts and updating configuration files based on metrics like Automated Resolution Rate and CSAT (Customer Satisfaction Score). Our fictional airline might do the same for Emma. If CSAT scores dip for weather-related cancellations, the team might review Emma’s recent calls and tweak her responses to better handle those conversations.
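A minimal version of that monitoring step can be sketched in a few lines: aggregate CSAT by conversation category and flag any category whose average dips below a target, so the team knows which transcripts to review. The data, categories, and threshold below are invented for the example:

```python
# Illustrative coaching-loop sketch: flag conversation categories whose
# average CSAT falls below a threshold, prompting transcript review.
# All data and the 4.0 threshold are hypothetical.

from collections import defaultdict

def categories_needing_review(tickets, csat_threshold=4.0):
    scores_by_category = defaultdict(list)
    for ticket in tickets:
        scores_by_category[ticket["category"]].append(ticket["csat"])
    return sorted(
        category
        for category, scores in scores_by_category.items()
        if sum(scores) / len(scores) < csat_threshold
    )

tickets = [
    {"category": "weather_cancellation", "csat": 3.2},
    {"category": "weather_cancellation", "csat": 3.8},
    {"category": "lost_luggage", "csat": 4.6},
]
print(categories_needing_review(tickets))  # ['weather_cancellation']
```

In a real deployment this check would run on a schedule, and a flagged category would trigger a human coaching session rather than an automatic configuration change.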

We’re also seeing cutting-edge approaches that go beyond traditional training. aiXplain, for example, uses an “Evolver” agent that spawns hundreds or even thousands of slightly different versions of the same AI teammate. These versions are run through a battery of tasks, and the best performers are selected and refined further. Imagine training an army of Emmas — each with slightly different strategies for resolving a particular issue — and deploying the one that proves most capable with real customers.
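The select-and-refine loop can be illustrated with a toy sketch, loosely inspired by the Evolver idea above: spawn variants by jittering a configuration parameter, score each against a task battery, and keep the top performers. The single tunable parameter and the scoring function are entirely hypothetical stand-ins:

```python
# Toy sketch of evolutionary selection over agent variants.
# The "temperature" parameter and scoring function are hypothetical
# stand-ins for a real agent configuration and task battery.

import random

def spawn_variants(base_temperature, n=100, rng=None):
    """Create n variants by jittering one configuration parameter."""
    rng = rng or random.Random(0)
    return [round(base_temperature + rng.uniform(-0.2, 0.2), 3) for _ in range(n)]

def score(variant):
    """Stand-in for running a variant through a battery of tasks;
    here we pretend 0.7 is the ideal setting."""
    return -abs(variant - 0.7)

def select_best(variants, keep=5):
    """Keep the top performers for further refinement."""
    return sorted(variants, key=score, reverse=True)[:keep]

variants = spawn_variants(0.5)
survivors = select_best(variants)
```

A production system would of course vary far more than one parameter and score against real task outcomes, but the generate-evaluate-select structure is the same.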

How do AI teammates get “promoted”?

Let’s say Emma is thriving in her role. She’s fast, accurate, and consistently receives strong CSAT scores. Even though she’s an AI teammate, her performance might still warrant what we’d call a promotion. But for Emma, moving up doesn’t mean a title change or a bigger paycheck. Her version of career progression looks more like an expanded perimeter: handling more complex tasks, managing a higher volume of inquiries, or gaining access to additional tools.

In practice, that might mean Emma graduates from processing routine refund requests to managing time-sensitive, high-stakes rebooking scenarios during major weather disruptions. Or perhaps the airline decides to give her supervisory duties — monitoring a pod of simpler AI teammates, reviewing their output, and flagging anomalies for human review. Just like a human employee stepping into a new role, Emma needs additional training to take on new responsibilities. And just like humans, her prior experiences shape how she approaches the work.

Sometimes, promotion for an AI teammate takes a more technical form. If Emma has consistently struggled with understanding tone in customer conversations, the airline might replace her underlying model with one that handles sentiment detection more effectively. Or, after a successful pilot phase, they might upgrade her to a more advanced multimodal model with real-time speech capabilities and broader knowledge access. In both cases — whether it’s expanded duties or a model swap — the logic remains the same: performance leads to new opportunities.