Thursday, 13 June 2024

How to create an AI policy framework for your business

Discover how to create an effective AI policy framework that mitigates risks and maximises opportunities in the era of generative AI adoption.

A clear and effective AI policy framework is essential for all organisations using – or thinking about using – AI and generative AI tools. There are many aspects such a framework must cover. In this blog, we’ll take a look at how to approach the development of an AI policy framework for your organisation.

In an August 2023 survey, McKinsey found that generative AI was already on company boards’ agendas. More than 40 percent of the companies surveyed were planning to increase their investment in AI because of advances in generative AI technologies. Since then, we’ve had the launch of Microsoft Copilot and new developments in AI tools from Adobe, Google and AWS – further boosting AI adoption.


The need to manage the risks associated with AI

Yet McKinsey has also found that organisations aren’t preparing well enough to manage the associated risks. Fewer than half of its survey respondents said their organisation is mitigating even the most relevant risk: inaccuracy.

We know that AI can “hallucinate”, presenting plausible but false information with complete confidence, which makes inaccuracies all the harder to identify. And the risks don’t stop there: AI frequently runs into issues around bias, copyright infringement, information governance, cyber security and a lack of transparency.

To identify, manage and mitigate these risks – and to be transparent about how it does so – every organisation considering or using AI tools must develop an effective AI policy framework.


The starting point for your AI policy development

When it comes to AI regulation, the big tech companies have continually argued that a one-size-fits-all approach could unintentionally stifle innovation and slow down the adoption of AI. They have lobbied for any regulation to be tailored to specific contexts and uses. This, of course, would make policy much harder to create, slow down regulation and make it far more likely to be retrospective, as governments and regulators wait for use cases to emerge. Nevertheless, their arguments have enjoyed a certain amount of traction.

It is the EU that has led the way in its regulatory approach to AI. The EU AI Act is the first comprehensive legal framework on AI in the world. It aims to foster trustworthy AI in the EU and beyond by ensuring that AI systems respect fundamental rights, safety and ethical principles, and by addressing the risks of very powerful models.

This regulation details strict obligations, including:

•    Adequate risk assessment and mitigation systems

•    High-quality datasets feeding into the system to minimise risk and discriminatory outcomes

•    Logging of activity to ensure traceability of results (see the sketch after this list)

•    Detailed documentation on the system and its purpose

•    Appropriate human oversight measures to minimise risk

•    A high level of robustness, security and accuracy.
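What might the logging obligation look like in practice? Below is a minimal sketch of an audit trail for AI tool usage, written in Python purely for illustration. Every name here – the log file, the function, the fields – is a hypothetical assumption for the example, not something the Act prescribes.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# A minimal audit trail for AI tool usage: each prompt/response pair is
# written as one JSON line with a unique trace ID, so any individual
# result can be traced back to its origin later.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")


def log_ai_interaction(user: str, tool: str, prompt: str, response: str) -> str:
    """Record one AI interaction and return its trace ID."""
    trace_id = str(uuid.uuid4())
    logging.info(json.dumps({
        "trace_id": trace_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
        "response": response,
    }))
    return trace_id


# Example: log a (hypothetical) Copilot interaction.
log_ai_interaction(
    user="jane.smith",
    tool="Microsoft Copilot",
    prompt="Summarise the Q2 sales report",
    response="Q2 sales rose 8 percent...",
)
```

The point is simply that every AI-generated result can be traced back to who produced it, with which tool and from what prompt.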

While the UK is outside the scope of this regulation (although UK organisations offering AI systems into the EU market may still fall within its reach), it offers a good starting point for understanding the risks associated with AI use and how to mitigate them.

The UK Government favours a more regulator-led approach to implementing AI regulation. It deems this a more pro-innovation approach, focused on five principles:

•    Safety, security and robustness

•    Appropriate transparency and explainability

•    Fairness

•    Accountability and governance

•    Contestability and redress

There is some overlap here, of course. However, the focus on contestability and redress adds another layer of policy development which your AI policy framework should address.


How to develop your own AI policy framework

The starting point for any policy framework project is to establish a steering group of board members and relevant stakeholders to lead the policy development process. 

In the context of AI policy framework development, this should include data governance, legal, cyber security, HR and user representation. Board members, steering group members and other stakeholders should be educated in essential AI concepts, including cyber security, algorithmic bias and data privacy concerns.

Once you have established a steering group, you can outline the policy objectives which will shape the development of your policy framework. 

At the very least, your policy framework should cover:

•    Ethical principles, e.g. fairness, transparency, accountability, human wellbeing.

•    Legal and regulatory compliance, including data protection and privacy laws as well as industry-specific regulation. 

•    Identification of risk, e.g. bias, cyber security, data security, copyright infringement, hallucination, inaccuracy, etc.

•    Use cases, with associated risk analysis, guidelines and best practices (a worked sketch of a use-case register follows this list).

•    Clear documentation to aid transparency and responsible practices.

•    Accountability and governance, with clearly defined roles and responsibilities and governance mechanisms which cover the entire AI lifecycle.

•    Monitoring and evaluation, regularly scheduled, including “human in the loop” processes, performance reviews, assessment of adherence to ethical standards.

•    The establishment of clear mechanisms for reporting and feedback, both from inside and outside the organisation.

•    Integration of your AI policy framework with the other policy frameworks and processes in your business, e.g. around ISO or data governance processes.

•    Internal communication of key concepts, use cases, risk management and reporting tools, support and responsibilities.

•    External communication to customers and other stakeholders to explain how you are using AI, how it affects your decision making and the service they receive, how you are protecting their data and the rights and routes for contestability and redress.

•    How you will regularly update your AI policy framework to accommodate user feedback, performance reviews, technological developments, regulatory and legal changes, changing cyber-security threats and more.
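To make the use-case, accountability and monitoring items above more tangible, here is one way a simple AI use-case register could be captured. This is a minimal, hypothetical sketch in Python; every class, field and review interval is an assumption made for illustration, not a prescribed standard.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum


class Risk(Enum):
    """Coarse risk tiers, loosely echoing the EU AI Act's risk-based approach."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3


@dataclass
class AIUseCase:
    """One entry in a (hypothetical) organisational AI use-case register."""
    name: str
    tool: str                    # e.g. "Microsoft Copilot"
    owner: str                   # the accountable role or person
    risk: Risk
    risks_identified: list[str]  # e.g. bias, inaccuracy, data leakage
    human_in_the_loop: bool      # is output reviewed before it is used?
    last_reviewed: date
    review_interval_days: int = 90


def reviews_due(register: list[AIUseCase], today: date) -> list[AIUseCase]:
    """Return the use cases whose scheduled monitoring review is overdue."""
    return [
        uc for uc in register
        if today - uc.last_reviewed > timedelta(days=uc.review_interval_days)
    ]


register = [
    AIUseCase(
        name="First-pass drafting of marketing copy",
        tool="Microsoft Copilot",
        owner="Head of Marketing",
        risk=Risk.LIMITED,
        risks_identified=["inaccuracy", "copyright infringement"],
        human_in_the_loop=True,
        last_reviewed=date(2024, 1, 15),
    ),
]

for uc in reviews_due(register, date.today()):
    print(f"Review overdue: {uc.name} (owner: {uc.owner})")
```

In practice, a register like this might live in a governance tool or even a spreadsheet. The format matters far less than having a single, owned register that is reviewed on a schedule.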

You can lean on existing guidance – and on the support of your IT service delivery partners – when developing this framework. For example:

•    ISO/IEC 42001, the AI management system standard, which details how to continuously improve and iterate responsible AI processes.

•    ISO/IEC 22989, which establishes terminology and key concepts in the field of AI.

•    Microsoft’s guidance on developing responsible AI practices focuses on the principles of accountability, transparency, fairness, reliability and safety, privacy and security, and inclusiveness.

•    The UK Information Commissioner’s Office (ICO) has published guidance on AI explainability, which is useful to consider.

•    The UK National Cyber Security Centre (NCSC) has also published guidance on understanding the risks – and benefits – of using AI tools.

There are plenty of resources available to draw on. There is no need to start from a blank slate. The important thing to do is to get started. Every organisation needs to develop and communicate this type of policy framework as a matter of urgency in order to mitigate the risks of AI. 

The potential for “shadow AI” to disrupt and harm your organisation is far greater than the “shadow IT” risk we grappled with during the early adoption of smartphones, when policy similarly lagged behind adoption.

However, the opportunities are also huge. Just as the corporate world was redrawn during the “dot com” Internet boom of the early 2000s, the impact of AI and, especially, generative AI is likely to redraw the corporate map, with new winners and old losers. The ability to innovate and adapt effectively will depend on how ably and responsibly you adopt these new technologies. Developing an effective AI policy framework lays the groundwork for that success.


What now?

Is your business ready to explore the new possibilities created by gen AI? Would you like support putting the necessary governance, IT and data infrastructure and AI tools in place?

The Grant McGregor team can assist.

Call us: 0808 164 4142

Message us: https://www.grantmcgregor.co.uk/contact-us

Further reading

You can find additional technology, cyber-security and AI topics on our blog:

•    Generative AI: Are we at a new generational moment in tech?

•    AI’s new role in cyber security

•    How to pick the right IT support company for your business

•    What is Microsoft Copilot? And is it something my business needs?

•    The ethics of AI and the problem of bias