The era of AI regulation is upon us. Executives often equate regulation with constraint, and for good reason: noncompliance can carry significant consequences, as Meta's $1.3 billion fine for violating EU–US data transfer rules shows. But the reality is more nuanced. Government officials, industry leaders, and even some LLM developers increasingly recognize AI regulation as an urgent need, and all three groups have expressed concern about the evolution of generative AI and the safety of future AI tools.
Regulators around the world are already hard at work. The EU AI Act is entering its final stage of negotiation; many believe this could become the global standard for AI regulation, much like GDPR has for data protection. In the US, the Federal Trade Commission has pledged to enforce the core principles of fairness, equality, and justice, and Chinese regulators have submitted a proposal to manage generative AI chatbots.
At this pivotal moment, companies pursuing AI transformation must have a deep understanding of current and emerging regulations, so they can ensure full compliance and engage productively with regulators through data-driven dialogue. Such collaboration is an opportunity to develop regulations that are mutually beneficial and technologically feasible—that provide effective safeguards while leaving room for companies to experiment and innovate.
Executives can start with four actions: creating a unified regulatory framework, managing contradictory regulations among countries, participating in sandboxes and incentive programs, and developing and sharing internal expertise with regulators.
The EU currently leads the world in the breadth and depth of digital and data regulation, creating complexities for companies adopting AI. The EU AI Act won’t exist in a vacuum—it will become part of a portfolio of regulations that includes the forthcoming Data Act, the Data Governance Act, the Digital Services Act, and the Digital Markets Act, among many others. These laws overlap and reference one another.
To navigate this vast regulatory landscape and capture the significant opportunities this body of legislation provides, companies need to look holistically at the EU regulatory frameworks, identify commonalities and (compatible) differences, and then create an overarching framework they can follow. In other words, they should treat the frameworks as an integrated stack with a unified list of processes and governance structures that covers all regulations. When a company wants to roll out a new AI-driven product or service, it can review this list, using automation where possible, instead of manually evaluating thousands of articles; the sketch below illustrates the idea.
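To make the integrated stack concrete, here is a minimal Python sketch of a unified, machine-reviewable checklist. Everything in it is a hypothetical simplification: the requirement IDs, the boolean product attributes, and the pass/fail rules stand in for legal analysis that is far more involved in practice.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class ProductProfile:
    """Attributes of a proposed AI product that determine which requirements apply."""
    uses_personal_data: bool
    has_lawful_basis: bool
    cloud_hosted: bool
    transfer_mechanism_in_place: bool
    high_risk_use_case: bool
    conformity_assessment_done: bool

@dataclass(frozen=True)
class Requirement:
    """One entry in the unified checklist, tagged with the regulations it covers."""
    id: str
    description: str
    regulations: frozenset[str]
    passes: Callable[[ProductProfile], bool]  # automated check, where one is possible

# Hypothetical unified checklist; a real one would hold hundreds of entries
# distilled from GDPR, the EU AI Act, the Data Act, and the rest of the stack.
UNIFIED_CHECKLIST = [
    Requirement("dp-01", "Personal data is processed under a lawful basis",
                frozenset({"GDPR"}),
                lambda p: not p.uses_personal_data or p.has_lawful_basis),
    Requirement("xfer-01", "Cloud-hosted data meets EU data transfer rules",
                frozenset({"GDPR"}),
                lambda p: not p.cloud_hosted or p.transfer_mechanism_in_place),
    Requirement("ai-01", "High-risk AI systems pass a conformity assessment",
                frozenset({"EU AI Act"}),
                lambda p: not p.high_risk_use_case or p.conformity_assessment_done),
]

def review(product: ProductProfile) -> list[Requirement]:
    """Return every requirement the product still fails, across all regulations at once."""
    return [req for req in UNIFIED_CHECKLIST if not req.passes(product)]

if __name__ == "__main__":
    proposal = ProductProfile(
        uses_personal_data=True, has_lawful_basis=True,
        cloud_hosted=True, transfer_mechanism_in_place=False,
        high_risk_use_case=True, conformity_assessment_done=False,
    )
    for gap in review(proposal):
        print(f"[{'/'.join(sorted(gap.regulations))}] {gap.id}: {gap.description}")
```

The design point is that each requirement is tagged with the regulations it covers, so one automated review spans the whole stack at once, and a new regulation becomes a set of new entries rather than a separate compliance process.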
Consider an EU-based bank that wants to build a generative AI solution to help relationship managers or financial advisors minimize the time spent on data entry. The solution could generate a relationship summary before each interaction, along with real-time next-best actions and talking points based on the customer's situation. The first and most obvious regulatory framework involved here is GDPR, which requires strict protection of both the bank agent's and the customer's personal information. If the data for this application is stored in the cloud, the bank will also need to comply with all data transfer laws.
The EU AI Act could take governance of this solution a step further. It would likely be classified as a “high risk” use case, given its potential for bias and discrimination in advising on clients’ creditworthiness. As a result, the bank would need to conduct a conformity assessment, meet a list of requirements to ensure the system is safe, and provide implementation information to a public database for transparency. Most important, the bank would need to receive a certification from the EU before launching the technology.
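For illustration, the Act's tiered structure can be sketched as a toy triage function. The tiers below reflect the Act's actual risk categories, but the boolean flags and the classification logic are drastic simplifications of what is, in reality, a detailed legal analysis against enumerated practices and annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (e.g., social scoring)"
    HIGH = "conformity assessment, safety requirements, public-database registration"
    LIMITED = "transparency obligations (e.g., disclosing that users face an AI)"
    MINIMAL = "no additional obligations"

def classify(use_case: dict) -> RiskTier:
    """Toy triage of a use case into the EU AI Act's risk tiers.

    The real Act enumerates specific prohibited practices and high-risk
    categories; these flags are hypothetical stand-ins for that analysis.
    """
    if use_case.get("social_scoring_or_manipulation"):
        return RiskTier.UNACCEPTABLE
    if use_case.get("affects_creditworthiness"):  # a listed high-risk area
        return RiskTier.HIGH
    if use_case.get("interacts_with_humans"):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# The bank's advisor solution touches creditworthiness, so it lands in HIGH:
tier = classify({"affects_creditworthiness": True, "interacts_with_humans": True})
print(f"{tier.name}: {tier.value}")
```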
Creating an overarching framework will require a significant upfront investment but will yield considerable time and cost savings in the long term. Besides managing existing regulations more effectively, companies will be prepared for future developments: when a new regulation is released or a new technology emerges, they can match the updated requirements against the existing unified framework and determine how to integrate them to ensure compliance.
Global companies often face an additional challenge: they need to monitor and comply with regulations in every country where they operate. For example, Italy famously banned ChatGPT on GDPR grounds, even though other countries subject to GDPR did not follow suit, leaving companies to develop bespoke policies.
Moreover, regulations from different regions often present conflicting or incompatible requirements. Consider a regulatory framework in one region that requires companies to retain customer data for a specified period before it can be deleted, while a framework in another region requires companies to let customers delete their data at any time. Compliance with one means noncompliance with the other, and the situation gets complicated quickly for companies with a global footprint.
These contradictions can create a variety of legal and ethical challenges. For example, the EU Digital Services Act requires strict moderation of online content and places liability for shared content with online platform companies. In contrast, Section 230 of the US Communications Decency Act provides platform companies with broad immunity from liability, allowing them to moderate content freely.
To manage this complexity, companies should add branches to their unified framework for the areas where regional regulations conflict. However, because even regional laws can reach beyond their home jurisdictions, all branches should be managed through a centralized committee under a Chief AI Ethics Officer. For example, China's Personal Information Protection Law applies to any company handling personal data related to selling goods or services to people in China, no matter where the company is located.
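A minimal sketch of such branching, with entirely hypothetical regions and values: the baseline policy encodes the shared framework, and each branch records only the local overrides, so the central committee can see every conflict in one place.

```python
# Baseline policy from the unified framework (values are illustrative only).
BASELINE = {
    "data_retention_days": 365,          # default retention window
    "user_initiated_deletion": True,     # honor deletion requests by default
}

# Regional "branches" hold only the points where local rules diverge.
REGIONAL_BRANCHES = {
    # Hypothetical region that mandates retention for a fixed period first.
    "region_a": {"user_initiated_deletion": False, "data_retention_days": 730},
    # Hypothetical region with a right to erasure at any time.
    "region_b": {"user_initiated_deletion": True},
}

def policy_for(region: str) -> dict:
    """Resolve the effective policy: baseline plus the regional branch, if any.

    A central committee owns both layers, so conflicts stay visible in one
    place rather than buried in per-country policy documents.
    """
    return {**BASELINE, **REGIONAL_BRANCHES.get(region, {})}

print(policy_for("region_a"))  # retention-first regime
print(policy_for("region_b"))  # deletion-on-demand regime
```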
Regulators in the UK, Norway, and France, among other countries, have been piloting regulatory sandbox programs to promote innovation within their data protection regimes (such as GDPR). Within these government-established digital spaces, companies can experiment in a "safe" environment and gain access to unique data.
Similar to these data protection sandboxes, the EU AI Act will create regulatory sandboxes for AI, enabling businesses to learn about the new rules and how to comply with them. Companies in the midst of AI transformation should start exploring these opportunities now to boost innovation and ensure compliance.
Companies can also experiment, innovate, and stimulate growth by collaborating with governments through incentive programs. For example, a consortium of banks in India worked with the government to create the Unified Payments Interface, a standardized approach to digital payments. The government provided financial incentives to the banks, such as reduced transaction fees, and the central bank established frameworks that promoted interoperability. As a result of the vibrant tech ecosystem that developed, India now accounts for nearly 40% of all digital payment transactions in the world, and global fintechs have adopted India's platform to increase their reach.
In the EU, regulators amended the Payment Services Directive to enhance security measures and open the payments market to third-party payment service providers. With the entry of these new players, innovation increased across the payments ecosystem, driving double-digit growth rates.
In a recent BCG survey of 600 industry incumbents in six countries, 61% reported that necessary AI regulation is lacking. Many incumbents have invested significantly in researching ethical AI guardrails on their own and have set up review boards to ensure their AI products and strategies follow ethical AI principles.
Moving forward, companies should accelerate this work and share their technical expertise, practical insights, and risk assessments with regulators. AI is a technologically complex and rapidly evolving field, and companies developing and implementing AI solutions have unique insight into the nuances and limitations of the technology. They also have a practical understanding of the costs and benefits of various implementation strategies. This expertise is invaluable for regulators as they try to strike a balance between promoting innovation and addressing societal concerns.
For example, when UK regulators published a white paper in March 2023 outlining principles to guide the innovative and safe use of AI, they invited companies, individual users, and academics to share their perspectives through a government-hosted platform. Regulators plan to consider and incorporate this feedback as they roll out tools and resources, such as risk assessment templates, in the coming months. The goal? To implement a balanced, future-proof, and pro-innovation regulatory framework.
Executives should consider new and upcoming AI policies as a forcing function. The evolving environment can prompt companies to create a more unified, cohesive approach to regulatory frameworks, to experiment and innovate in sandboxes, and to engage in a proactive and productive dialogue with regulators to ensure that regulations are well matched to the technology they oversee. For CEOs who choose to embrace it, today’s flurry of regulatory activity represents a substantial opportunity.