Measurement is critical, but success is defined by non-traditional KPIs.
The success of AI can’t be measured solely by traditional KPIs such as revenue generation or efficiency gains, yet organizations often fall back on these familiar benchmarks. In 30% of companies, there are no active KPIs for Responsible AI at all. Without established technical methods to measure and mitigate AI risks, organizations can’t be confident that a system is fair. To our previous point, specialist expertise is required to define and measure the responsible use and algorithmic impact of data, models and outcomes, for example algorithmic fairness.
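As a concrete illustration of what such a measurement could look like, the sketch below computes one commonly used fairness metric, the demographic parity difference (the gap in positive-prediction rates across groups). It is a minimal example only; the function, data and review threshold are illustrative assumptions, not a method prescribed by the survey.

```python
# Minimal sketch (illustrative only) of one possible Responsible AI KPI:
# the demographic parity difference. Data and threshold are made up.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates across groups."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

preds = [1, 0, 1, 1, 0, 1, 0, 0]                    # model decisions
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]   # protected attribute
gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.50 in this toy data
if gap > 0.2:  # tolerance agreed with governance stakeholders (assumed)
    print("Flag model for fairness review")
```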
Responsible AI is cross-functional, but typically lives in a silo.
Most respondents (56%) report that responsibility for AI compliance rests solely with the Chief Data Officer (CDO) or equivalent, and only 4% say they have a cross-functional team in place. Buy-in and support from across the C-suite will establish priorities for the rest of the organization.
Risk management frameworks are a requirement for all AI, but they aren’t one-size-fits-all.
Only about half (47%) of the surveyed organizations have developed an AI risk management framework. What’s more, we learned that 70% of organizations have yet to implement the ongoing monitoring and controls required to mitigate AI risks. AI integrity cannot be judged at a single point in time; it requires ongoing oversight.
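One widely used form of ongoing oversight is statistical drift monitoring of model inputs or scores. The sketch below is a minimal illustration of such a control, not a prescribed one: it computes the population stability index (PSI) between a reference score distribution and the scores currently observed; the data and the alert threshold are assumptions.

```python
# Minimal sketch of an ongoing-monitoring control: population stability
# index (PSI) between reference and current model scores. Illustrative only;
# bin count, data and alert threshold are assumptions.
import numpy as np

def population_stability_index(reference, current, bins=10):
    """PSI between a reference sample and a current sample of scores."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Avoid division by zero in sparsely populated bins.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(0)
reference_scores = rng.beta(2, 5, size=5_000)  # scores at deployment time
current_scores = rng.beta(3, 4, size=5_000)    # scores observed this period
psi = population_stability_index(reference_scores, current_scores)
print(f"PSI: {psi:.3f}")  # values above ~0.2 are often treated as drift
```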
There is power in the AI ecosystem, but you’re only as strong as your weakest partner.
AI regulation will require companies to think about their entire AI value chain (with a focus on high-risk systems), not just the elements that are proprietary to them. Some 39% of respondents cite collaborations with partners as one of their greatest internal challenges to regulatory compliance, and only 12% have included Responsible AI competency requirements in supplier agreements with third-party providers.
Culture is key, but talent is scarce.
Survey respondents reported a lack of talent familiar with the details of AI regulation, with 27% citing this as one of their top three concerns. More than half (55.4%) do not yet have specific roles for Responsible AI embedded across the organization. Organizations must consider how to attract or develop the specialist skills required for Responsible AI roles, keeping in mind that teams responsible for AI systems should also reflect a diversity of geography, backgrounds and ‘lived experience’.
While there’s no set way to proceed, it’s important to take a proactive approach to building Responsible AI readiness to overcome or avoid the barriers above.