Building Your AI Governance Blueprint: A Guide to Ethical AI

Dec 15, 2023

AI is at the top of everyone’s mind lately, and for good reason. It’s everywhere! This isn’t just science fiction anymore — it’s happening now.

This is why it’s crucial to think about how your organization will use AI ethically and effectively in the coming days, weeks, months, and even years. A well-thought-out AI governance framework and strategy is critical to using AI to your best advantage.

We find ourselves in a unique position because there are no foundational best practices or years of studies to refer back to. We are the case study, and we have the opportunity to create the best practice approach from our own experiences. It is emerging quickly, and we are all ‘novices’ with the chance to become experts in a new technological landscape. So, with that said, read on for our blueprint to guide your ethical AI implementation process.


As stated in our previous posts, AI is everywhere, and if you’re not on board yet, it’s time to get organized. It’s safe to assert that in 2023 and beyond, no legitimate organizational strategy excludes AI or AI considerations. AI is going to impact every area of strategy and organizational performance. The reality of AI is no longer an ‘if’ but a ‘when,’ and implementing AI into your organization is no longer an ‘if’ but a ‘how.’

It doesn’t matter if you’re in the public sector, private sector, or non-profit space. AI will revolutionize how we work!

AI is already significantly impacting the six key pillars of organizational strategy (big, bold vision, winning strategy, aligned teams, quarterly rhythm, compelling communication, and strong culture). Strategists and leaders must leverage this as they focus on the adoption of AI in their organizations to accelerate outcomes, create value, improve efficiency, and stimulate growth. It is also vital for leaders to seriously consider how they will use AI to empower their organizational transformation.

Pro Tip:

Our job as strategy leaders is to prompt future thinking and conversation. Rather than asking, “What is AI going to do for us?” instead, ask, “How and when will AI change our organizations, and what does that mean?”

Why Do AI Initiatives Usually Fail?

Unfortunately, while many people may recognize the importance of incorporating AI into their processes and workflows, they may not understand how to implement AI correctly. We can fall into many pitfalls within our AI journey that can be costly. So, what are some common ways AI initiatives fail?

4 Reasons Your AI Initiative Is Failing

  1. There is a lack of clarity about the value it brings to your organization.
  2. Algorithms and generative AI are tricky (and expensive to develop).
  3. AI can create bias.
  4. People don’t trust the product.

Root Causes of AI Failure + 3 Solutions

Root Cause: A Lack of Understanding

These outlined reasons come down to a lack of understanding of AI technology. People may view it as a cool trick or gimmick to have in their arsenal but lack a deeper understanding of how the technology functions or how it’s made. For example, people love ChatGPT but don’t really understand how it works or what information it was trained on. Taking its responses at face value doesn’t account for the bias in the data used to create it; without critical evaluation, those biases will continue to be perpetuated.

Solution: Understanding the technology behind artificial intelligence, from how it’s created to how it can best be used within your organization, is the best way to combat these common pitfalls. Encourage your team to test and study the AI tools you want to use, and provide opportunities for learning and training.

Root Cause: Lack of Strategic Vision

One of the top reasons AI projects fail, whether you’re using new AI tools or creating your own, is that you’re not building something that makes a difference to your customers, your employees, or your overall organizational goals. AI projects that aren’t aligned with your strategic direction and priorities won’t be adopted or create value for your organization.

Solution: Start with a vision of how AI will transform your organization.

Root Cause: Lack of AI Governance

AI projects that aren’t bound by governance, ethical standards, and AI policies will create risk, open your organization up to bias, and breed mistrust of your AI tools and processes. It’s not enough to just introduce the technology and resources to your team and show them how to use it. It’s pivotal to have a clear and defined ethical use of AI in your organization, including how you protect your data, what you will and won’t use AI for, and how you manage your AI tools and data. All of these need to be defined before any AI development projects begin!

Solution: Create clear AI governance and ethics policies. You need to teach your team how to use it responsibly and weave ethical AI implementation into the fabric of your strategy.

Defining the Governance and Ethics of AI in your Organization

We define AI governance and ethical AI as the organization’s ability to direct, manage, and monitor AI activities across its people, systems, and processes. In practice, this is about establishing your distinct AI values as an organization. This can range from guiding principles, AI code of ethics, or specific policies. The purpose is to centralize an organization’s efforts on AI, so we’re avoiding shotgun approaches that end up failing or not bringing value to the organization.

What does ethical AI usage look like?

AI values/guiding principles/code of ethics and specific policies

We recommend creating guiding principles very similar to your organizational strategy’s guiding principles. It’s also imperative to consider how AI will impact your organizational strategy.

The people factor of AI

In practice, the second piece of ethical AI considers the people side of the equation. This means creating structure, focus, and a center of excellence across the people who are driving, learning, and sharing the adoption of AI within an organization, whether they’re actively engaged in an AI project or not.

Ultimately, it is about ensuring transparency and visibility within the organization so that people know where the organization is going and how they’ll fit into the new direction with the technology.

AI ethics will differ per industry

AI ethics will differ from industry to industry. Journalists, writers, marketers, and those in more creative fields must focus on the originality and accuracy of content produced by AI, as well as being aware of potential copyright or plagiarism risks. The healthcare industry, on the other hand, may have to focus on entirely different risks, such as sensitive data usage or potential breaches and privacy violations.

It’s up to you and your organization to weigh the risks of AI and determine what guardrails to put in place to ensure those risks don’t come to fruition.

Why does the ethics of artificial intelligence matter?

There are several reasons why AI governance is important to consider within your strategy. First, it allows for accountability and transparency within your organization. It allows everyone to clearly understand who is responsible for the outcomes of AI systems and provides a basis for legal and ethical accountability if something goes wrong. As such, it fosters trust within the organization from the employees to clients and stakeholders.

Additionally, AI systems can inadvertently perpetuate biases present in the data they are trained on. Ethical guidelines emphasize the importance of fairness and equity in AI systems and encourage measures to mitigate bias.

Guiding Principles for AI Usage

You need a clear, responsible approach to AI governance and ethics. Even if you don’t know it, AI is organically creeping into your team’s workflows and processes. While the efficiency gains and transformation with this beneficial technology are real, it also presents a need for clear organization-wide agreements on AI’s responsible and ethical use.

How might you go about this? Try creating at least 3-5 guiding principles that your organization and everyone within it will live by as they implement these new tools into your organizational strategy.

Pro Tip:

Proofs of Concept Groups can help vet systems, identify risk, develop prototypes, and test, test, test!

5 Key Questions to Guide Your AI Ethics Framework

If you’re not sure where to start, here are some key questions and themes to consider for your ethics framework:

  • Data protection: If you are using AI, how are you protecting your data? Your customer’s data? What is your policy on the use of data?
  • Privacy Compliance: How are you complying with GDPR, CCPA, and HIPAA? How are you preventing legal issues in your organization under these regulations?
  • Security: How will you handle incident responses, systems failures, and breaches when integrating AI into your organization?
  • Bottom-Up Innovation: How will you enable bottom-up innovation and ensure your team uses AI responsibly?
  • Bias: How do you ensure fairness and mitigate bias in AI systems to prevent discrimination against groups, ethnicities, genders, or demographics?
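To make the bias question concrete, a simple starting point many teams use is to compare outcome rates across groups. The sketch below is a hypothetical, minimal bias audit using the "demographic parity" idea; the group names and decisions are illustrative, not from any real system, and a real audit would use larger samples and several fairness metrics.

```python
# Hypothetical sketch of a basic bias audit: compare positive-outcome
# rates across groups to measure the demographic parity gap.
# All data and group names below are illustrative assumptions.

def approval_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in approval rate between any two groups."""
    rates = {g: approval_rate(o) for g, o in outcomes_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative model decisions (1 = approved, 0 = denied) per group.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3 of 8 approved
}

gap, rates = demographic_parity_gap(decisions)
print(f"Approval rates: {rates}")
print(f"Parity gap: {gap:.3f}")  # a large gap flags potential bias to investigate
```

A gap near zero suggests similar treatment across groups; a large gap is a signal to investigate the training data and decision logic, not proof of discrimination on its own.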

Examples of Guiding Principles from Google:

  • Be socially beneficial, taking into account a broad range of social and economic factors.
  • Avoid creating or reinforcing unfair bias, acknowledging the complexity of distinguishing fair from unfair biases.
  • Be built and tested for safety, with strong safety and security practices to avoid unintended results.
  • Be accountable to people, providing opportunities for feedback, explanations, and appeal.
  • Incorporate privacy design principles.
  • Uphold high standards of scientific excellence, promoting open inquiry and collaboration.
  • Be made available for uses that accord with these principles, limiting potentially harmful or abusive applications.

Pro Tip:

Other common considerations for guiding principles when developing your AI ethics include accountability, data quality and security, equity, and transparency.

A Beginner’s AI Governance and Ethics Framework

Step 1: Define your purpose and scope.

Outline Purpose: Clearly state the objectives of your AI governance and ethics policy.

Scope: Specify which AI systems and applications the policy covers, including those you currently use.

Identify Regulations: Outline any regulations you need to comply with, such as GDPR, CCPA, and AI ethics guidelines.

Stakeholders: Who needs to be involved in this conversation – executive team, VP/managers, AI admins, IT, tool SMEs?

Conduct a Risk Assessment: What are your risks with AI?

Step 2: Set AI guiding principles and policies.

What guardrails will we put in place to guide our decision-making, risk management strategies, mitigation, and approach to using AI responsibly?

What is “off-limits” for AI?

What will we not do? In other words, what will AI not do in our organization?

Possible Policies

  • Data – Manage your proprietary and customer data.
  • Impact on Jobs – Stance on displacement.
  • Discrimination – Controls to mitigate bias.
  • Inaccuracy, Plagiarism – Addressing misappropriation.
  • Remediation – Risk mitigation and proactive risk response.

Step 3: Establish AI governance structure and standard processes.

Create Ownership and Committees: Who oversees and manages your policy’s implementation, communication, and ongoing regulation?

Communication: How will you approach communicating these policies and changes to your team?

Reviews: Integrate governance checks and reviews into your organization’s quarterly strategy review cadence.

Step 4: Roll out AI tools and company training.

3rd Party Tools: Research, learn, and share the tools that comply with your AI principles.

Build Tools & Systems: Determine areas of internal tool development and the impact of AI. Then, begin integrating your tools and systems.

Employee Training: Train all employees on ethical AI principles and success criteria.

Innovation & Collaboration: Create a structured place for learning and sharing.

Key Takeaways

Ultimately, bringing AI into your strategy is not an “if” decision; it is a “when” decision. It will impact your organization at some point, in one way or another. It is the new reality. The best way to use it effectively is not just as “bolt-on” automation, but as a way to elevate and transform your organization for the future.

Before that transformation can happen, it is essential to have clear AI governance and ethics in place. We must inspire the right behavior and create the structures for teams to learn, share, and adapt organically. So, use this framework. Have a clear strategy and blueprint to move forward with your AI initiatives so that your organization may more readily adapt and evolve as the technology inevitably will.

AI is evolving

The landscape of AI ethics continually evolves alongside technological advances and societal shifts. This dynamic nature demands ongoing evaluation and adaptation to ensure ethical considerations keep pace with developments. It’s vital to recognize that ethical questions surrounding AI are deeply influenced by cultural, legal, and technological advancements, necessitating thoughtful consideration. As AI technologies progress, these ethical questions will continue to evolve and require continuous dialogue and exploration to navigate the complex and ever-changing terrain of AI ethics.

