It is not just organizations based in the EU that need to pay attention. The regulation will apply to any provider that develops or implements AI systems in the EU, or whose AI systems produce outputs that are used within the EU, so it will affect many organizations based elsewhere. Moreover, the regulation, which is expected to come into force in 2023, is likely to bear similarities to rules currently being drawn up by other government authorities throughout the world.²

2. European Commission, 2021, “Regulatory Framework Proposal on Artificial Intelligence”; Algorithm Watch, 2021, “European Council and Commission in Agreement to Narrow the Scope of the AI Act.”
Given the impending wave of new regulations, as well as the potential financial and reputational damage resulting from noncompliance, organizations urgently need to adopt measures that enable them to comply with the requirements of the emerging EU regulation. A comprehensive RAI program, based on BCG’s Responsible AI Leader Blueprint, will allow them to act in accordance with the proposed EU AI Act and to adapt to other regulations that will inevitably follow (such as the Algorithmic Accountability Act of 2022 in the US).³

3. US Congress, 2022, “Algorithmic Accountability Act of 2022.”
An RAI program will also position them to mitigate nonregulatory risks and capture the associated benefits of AI. BCG’s pragmatic and comprehensive framework comprises five integrated components: RAI strategy, governance, processes, technology and tools, and culture.
Putting in place a comprehensive program to implement and operationalize RAI throughout an organization takes time, but significant progress can be made with a few basic steps to secure early wins and build confidence in the organization. To position themselves to begin building this framework, organizations should take four key actions: (1) establish responsible AI as a strategic priority with senior-leadership support, (2) set up and empower RAI leadership, (3) foster RAI awareness and culture throughout the organization, and (4) conduct an AI risk assessment.
The Call for Responsible AI Intensifies
The development and adoption of AI have been expanding rapidly, enabling organizations in many industries to transform their capabilities. According to joint research by BCG and the Massachusetts Institute of Technology Sloan Management Review (MIT SMR), global investment in AI reached $58 billion in 2021.
The COVID-19 pandemic contributed greatly to the increased focus on AI, as organizations reacted to the digitization of working arrangements and consumer behavior engendered by the crisis. New applications and developments flourished, accelerating the adoption of AI in sectors such as health care.
While AI has great potential, it also raises concerns related to accountability, transparency, fairness, equity, safety, security, and privacy. An RAI program is one way for organizations to systematically address these challenges. BCG defines RAI as developing and operating artificial intelligence systems that align with organizational values and widely accepted standards of right and wrong, while achieving transformative business impact.
Organizations that implement RAI programs effectively can derive many benefits. RAI differentiates the brand, strengthens customer acquisition and retention, and improves competitive positioning, thereby leading to higher long-term profitability. It also assists with workforce recruitment and retention, as increasingly socially conscious employees want to work for organizations they can trust and believe in. Moreover, it gives rise to a culture of innovation that can be sustained over time. In general, investing in the development of a mature RAI program leads both to fewer AI failures and to more success in the scaling of AI, which delivers long-term sustainable business value for the whole organization.
Despite the clear potential and burgeoning investment in this field, however, many organizations have struggled to deploy or scale RAI in practice. According to a BCG survey, 85% of organizations with AI solutions have defined RAI principles to shape product development. With few exceptions, though, the good intentions have yet to be translated into rigorous practical outcomes. Only a small fraction (20%) of organizations have fully implemented these principles. Those that fall behind face significant long-term risks.
The widespread failure to implement RAI limits the overall impact of AI, preventing organizations from capturing its full business potential and thus limiting their capacity for growth. Moreover, major risks for organizations, customers, and society at large go unaddressed.
AI is likely to be one of the global developments with the greatest impact during the coming decades. Consequently, there is a widespread and growing expectation within society that AI products should be built in a responsible and ethical way. When the EU’s General Data Protection Regulation (GDPR) took effect in 2018, it generated an awareness about privacy requirements throughout Europe and across the world. Soon, consumers began to demand an ecosystem-wide change with stronger privacy protections in consumer products and services. This development laid the foundations for RAI through requirements such as the right to explanation.
Against the backdrop of such expectations, several troubling consequences of AI use have hit the headlines in recent years, posing reputational risks for the organizations involved. For example, certain algorithms have produced discriminatory outcomes owing to underlying biases in their input data.
In one case, an algorithm used by Amazon as a recruitment tool was shown to be biased against women because the AI system had been trained on applications submitted mostly by male candidates over a ten-year period; as a result, the algorithm taught itself that male applicants were preferable. When these issues were discovered, Amazon stopped using the AI solution, according to a 2018 report by Reuters.
In a case described by Axios in 2020, students in the UK were unable to take the usual A-level examinations because of the pandemic, so an algorithm was used to award scores instead. The algorithm was found to be biased toward students from wealthier schools, and the results had to be scrapped.
The Advent of Comprehensive AI Regulation
To date, more than 60 countries, as well as some international organizations, have approved voluntary principles and standards to guide the development and use of AI. Two prominent examples are the OECD Principles on Artificial Intelligence and the draft Risk Management Framework from the National Institute of Standards and Technology in the US. (See Exhibit 2.) Formal laws have not yet been forthcoming, however. Governments worldwide are now seeking to rectify this situation, driven at least in part by highly publicized lapses of AI systems and the harms they can create for citizens and society.