
TL;DR
- Ardent engaged with over 50 leaders from Fortune 1000 companies and top financial institutions on the risks of GenAI adoption and how to think about governance.
- About 49% of employees already use GenAI tools like ChatGPT, highlighting the need for governance. Without sanctioned adoption, employees will turn to unsanctioned usage, given the value of these tools.
- Risks include privacy, security, accuracy, reputational damage, regulatory compliance, and ethical use.
- There are varied approaches to adoption and governance: ‘fast followers’ are cautious and measured, while ‘pioneers’ embrace rapid adoption.
- Companies can approach governance across three dimensions: people, processes, and technology.
- On the people front, employee training, access controls, and investing in AI/ML talent are crucial for mitigating risks.
- When it comes to processes, companies need to consider use cases, documentation, and approvals.
- Lastly, technology considerations include choosing LLM providers and working with governance partners, with fast followers waiting for established frameworks and pioneers developing their own solutions.
- Effective GenAI adoption and governance depend on company culture, innovation approaches, leadership, and existing governance protocols.
Overview.
The conversation around Large Language Models (LLMs) and their impact on business and workforce dynamics has reached a fever pitch in the past year. The ease of use, wide-ranging applications, and increasing adoption rates of generative AI (GenAI) technologies have placed them at the forefront of corporate strategies and boardroom discussions. As a result, companies are actively evaluating and implementing GenAI.
This rapid adoption has its challenges.
Companies are grappling with significant risks such as potential leaks of intellectual property, the propagation of inaccurate information due to model errors, and legal repercussions, including fines for violating employment-practices laws, such as EEOC regulations, when using GenAI in hiring. Companies need a governance, risk, and compliance framework to manage this. Doing nothing is not an option given the immense value of GenAI and its high utility to employees, who are likely to adopt these tools regardless of official policies.
Generative AI Offers Significant Benefits to the Enterprise
We previously wrote about the immense value of GenAI to the enterprise. Early examples include document synthesis, code generation support, workflow automation, customer service, and personalized marketing strategies.
Employee Unsanctioned Use
Salesforce estimates that about 49% of people have already used GenAI tools, with over a third using them daily. Employees increasingly rely on tools like ChatGPT for tasks such as drafting copy, emails, and code, and for creating marketing images. The growing usage of GenAI underlines the importance for businesses of implementing governance plans to mitigate the associated risks. With 52% of individuals reporting an increase in their use of GenAI compared to when they first started, employees will use these tools regardless of company policy.
Risk of Generative AI Adoption
We have learned through our conversations with business leaders that their concerns center on the following:
- Privacy and Data Leakage: Leaders are concerned that integrating Generative AI might expose sensitive corporate data, risking privacy breaches and unauthorized data leakage.
- Security Risk (prompt injection): There is apprehension about the potential for prompt injection attacks, where malicious inputs could manipulate AI responses, posing significant security risks.
- Validation and Prevention of Hallucination: Ensuring the accuracy of AI-generated content and preventing ‘hallucinations’ or false information generation remains a key concern for maintaining information integrity.
- Reputational Risk: Misusing or misinterpreting AI-generated content could lead to reputational damage, a worry for companies aiming to maintain public trust.
- Regulatory Compliance: Adhering to evolving regulatory frameworks around AI usage poses a challenge, with leaders needing to ensure compliance to avoid legal repercussions.
- Ethical Usage: There is a growing emphasis on the ethical use of Generative AI, ensuring it aligns with company values and societal norms, avoiding biases and unfair outcomes.
These concerns are even more acutely felt by businesses in regulated industries.
Governance and Risk Management Framework
In the fast-evolving realm of GenAI, the pivotal question for enterprises is no longer if, but how, they will integrate it across the organization. For many, the answer hinges on their current posture in the AI adoption landscape. Are they ‘pioneers’, eagerly embracing the new tech wave, or cautious ‘fast followers’, preferring to observe before leaping?
Fast followers typically exhibit a prudent ‘wait and see’ strategy. They adhere to traditional operational frameworks, introducing GenAI tools at a measured pace and on a longer timeline than their counterparts. This approach allows for a more comprehensive understanding of the technology’s implications before full-scale deployment. While they are certainly more risk-averse, fast followers are not avoiding adoption; they understand the value and take careful steps to implement it.
In contrast, organizations that embrace GenAI are pushing boundaries and fast-tracking adoption. These pioneers benefit from robust leadership support and a culture that prizes innovation. They have already laid the groundwork with established protocols for emerging technologies and have broadly adopted AI. This proactive and rapid adoption strategy positions them to harness the potential of GenAI to drive outcomes internally and externally.
Ardent has observed companies managing GenAI governance along three dimensions: people, processes, and technology. The approach to GenAI integration varies significantly across these companies, mirroring their corporate cultures and strategic imperatives.

People.
A company’s workforce is its first defense in mitigating the risks associated with Generative AI adoption. This approach encompasses three key strategies: training, access, and expertise. Fast followers prioritize comprehensive training, equipping employees to navigate LLM-based tools effectively. However, pioneers take training further, ensuring continuous, real-time instruction that embeds GenAI practices into the everyday work culture.
Regarding access to GenAI tools, fast followers tend to restrict usage to specific teams by function, such as developers and management, to maintain control and mitigate risks. In contrast, pioneers are more inclined to allow enterprise-wide access, reflecting their broader and more confident approach to GenAI integration.
Finally, the investment in talent is where the paths of fast followers and pioneers diverge significantly. Fast followers generally focus on upskilling their existing workforce and supplementing their capabilities with consultants. Conversely, pioneers are more likely to invest heavily in hiring in-house AI/ML experts. This approach brings specialized knowledge into the organization and signals a long-term commitment to integrating GenAI at the core of its business operations.
Process.
Integrating GenAI requires a robust process and policy framework. Key considerations include understanding the use cases, documentation plan, and approvals process. Should initial access be limited for testing in a sandbox environment? What documentation is needed, and who will be responsible for its procurement and management?

Use Cases: As businesses navigate the GenAI landscape, identifying the proper use cases is multifaceted. Let’s investigate the key considerations shaping how companies deploy GenAI tools.
- Proximity to Customer: The role of GenAI in the customer-facing elements of a product is a critical consideration. Fast followers are typically conservative, applying GenAI primarily in back-office operations like software development and HR. In contrast, pioneers are more versatile, integrating GenAI in customer-facing scenarios, underscoring a broader application spectrum.
- Automation Adoption Confidence: The degree to which companies are willing to let GenAI operate autonomously is another dividing line. Fast followers favor ‘co-pilot’ models, which augment employee efficiency without overhauling existing workflows. Pioneers, however, are pushing the envelope by using GenAI not only as a co-pilot but also for full process automation in certain areas, signifying a more ambitious approach to automation.
- Applicability Across the Organization: The scope of GenAI tool deployment within a company also varies. Fast followers start with tightly defined use cases chosen by leadership, while pioneers adopt a more expansive strategy. They deploy general-purpose tools across the organization, allowing employees to determine the most beneficial applications for LLMs.
- Function: Finally, whether the GenAI use cases are revenue-driving is a key factor. Fast followers lean towards deploying GenAI in lower-risk areas, such as coding co-pilots. In contrast, pioneers are open to leveraging GenAI in high-stakes areas like customer support and sales teams, where the technology can drive significant cost savings or enhance revenue.
Documentation: The importance of meticulously documenting the inputs and outputs of LLM tools cannot be overstated, and the practice is employed by companies across the spectrum of risk tolerance. It serves two crucial functions: first, it prepares comprehensive audit material for regulatory compliance; second, it is a cornerstone of effective risk management, ensuring that GenAI use is transparent and accountable. This thorough record-keeping is integral to understanding and controlling the impact of GenAI within an organization.
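The record-keeping described above can be sketched as a thin wrapper around any model call. This is a minimal illustration, not a reference to any specific product: the file name, field names, and stand-in model function are all assumptions for the sake of the example.

```python
import json
import time
from pathlib import Path

# Hypothetical append-only audit log; real deployments would use a
# tamper-evident store with retention and access controls.
AUDIT_LOG = Path("genai_audit.jsonl")

def logged_completion(model_fn, prompt, user_id):
    """Call an LLM (any callable) and record the full exchange for audit."""
    response = model_fn(prompt)
    record = {
        "timestamp": time.time(),
        "user": user_id,
        "prompt": prompt,
        "response": response,
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return response

# Usage with a stand-in model function:
fake_model = lambda p: f"echo: {p}"
logged_completion(fake_model, "Summarize Q3 results", user_id="analyst-42")
```

Wrapping the model call, rather than instrumenting each application separately, gives one choke point where every prompt and response is captured regardless of which team is calling the model.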
Approvals: Approaches to granting approvals for GenAI projects also exhibit significant variation. Fast followers typically prefer the safety of sandbox environments, where they can experiment and learn in a controlled setting before rolling out GenAI more broadly within their organization. This method allows for a gradual, informed integration of GenAI, minimizing potential risks. In contrast, pioneers adopt a more dynamic approach. They are inclined to approve pilots based on specific use cases, facilitating a more immediate and direct integration of GenAI into their business operations.
Technology.
Crafting a robust technical governance plan is a critical step on many leaders’ minds. Here’s how businesses are charting their course through this challenging terrain.
- Selecting a Large Language Model Provider: The preference is clear for fast followers: established players like OpenAI, backed by their strong brand and Microsoft’s partnership, present trust and reliability. Conversely, pioneers are casting a wider net. They’re not just relying on OpenAI but exploring open-source options and specialized models. Some are even daring to develop their own LLMs.
- Choosing Partners for Governance Controls: Once the LLM provider is selected, the next challenge is picking the right partners to deploy governance controls. Analysis of over 30 startups in this space reveals that the tools available can be categorized into a few types:
- Audit Tools: These software solutions maintain logs of models’ inputs and outputs for regulatory and monitoring purposes. They help understand how employees use GenAI and ensure compliance with the protocol.
- Block Tools: These tools are designed to prevent the input of restricted information (such as PII) into GenAI systems and to stop LLMs from generating inappropriate outputs.
- Control Tools: Functioning akin to firewalls, these programs restrict certain actions. For example, they can prevent employees without clearance from accessing ChatGPT on company servers.
- Model Monitoring Tools: These tools help leaders assess how well a model performs for specific use cases. This evaluation is crucial in understanding whether a GenAI model aligns with the intended business objectives and delivers the expected outcomes. They also track a model’s performance as time progresses, which is critical to identifying any shifts or ‘model drift’ — a scenario where the model’s accuracy degrades or its predictions become less reliable over time.
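At its simplest, a ‘block tool’ of the kind described above screens prompts against known PII patterns before they ever reach an LLM. The sketch below is a rough illustration under that assumption; the pattern names and function are hypothetical, and production systems use far more sophisticated detectors than a pair of regexes.

```python
import re

# Illustrative patterns only; real PII detection covers many more categories
# (names, addresses, account numbers) and uses dedicated classifiers.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(prompt):
    """Return (allowed, findings): block the prompt if any PII pattern matches."""
    findings = [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]
    return (len(findings) == 0, findings)

print(screen_prompt("Summarize this contract"))      # (True, [])
print(screen_prompt("Customer SSN is 123-45-6789"))  # (False, ['ssn'])
```

A control tool operates one layer earlier (who may call the model at all), and an audit tool one layer later (what was said); a block tool like this sits in between, inspecting the content of each exchange.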
Fast followers, prioritizing clarity and structured guidelines, are inclined to wait for larger, established companies to offer more comprehensive governance frameworks. In this regard, services like Microsoft’s Azure and OpenAI are stepping up to meet the demand, which is likely appealing to this group.
On the other hand, pioneers are taking the initiative. Unwilling to wait, they create their own governance protocols, often collaborating with innovative startups and consultants. This proactive stance facilitates rapid, bespoke implementation of governance tools tailored to their specific needs in the GenAI landscape.
Conclusion.
As businesses consider the multifaceted aspects of GenAI adoption, taking a step back and evaluating the organization’s unique context is imperative. A company’s pre-existing culture and established control mechanisms will significantly influence the adoption trajectory. Factors such as a company’s position on the innovation curve, dedicated teams for exploring new technologies, and encouragement for employee-led innovation play a pivotal role. Leadership’s stance towards GenAI, particularly from the C-Suite and Board, is critical for effective and secure implementation. This top-down approach is vital in setting the right policies and controls. Lastly, a review of existing governance protocols is essential. How a company has historically integrated new technologies and the clarity of its process for approving and adopting new use cases will significantly influence the path to successful GenAI adoption.

