BOSTON—The artificial intelligence (AI) landscape has changed dramatically over the past year with the swift adoption of generative AI (GenAI), making it more difficult for organizations to use the technology responsibly and putting pressure on Responsible AI (RAI) programs to keep pace with continuous advances. While more than half (53%) of organizations rely exclusively on third-party AI tools, having no internally designed or developed AI of their own, 55% of all AI-related failures stem from third-party AI tools, according to new research by MIT Sloan Management Review (MIT SMR) and Boston Consulting Group (BCG).
The report, titled “Building Robust RAI Programs as Third-Party AI Tools Proliferate,” is based on a global survey of 1,240 respondents, representing organizations reporting at least $100 million in annual revenues, across 59 industries and 87 countries.
“The AI landscape, both from a technological and regulatory perspective, has changed so dramatically since we published our report last year,” says Elizabeth M. Renieris, MIT SMR guest editor and coauthor of the report. “In fact, with the sudden and rapid adoption of generative AI tools, AI has become dinner table conversation. And yet, many of the fundamentals remain the same. This year, our research reaffirms the urgent need for organizations to be responsible by investing in and scaling their RAI programs to address growing uses and risks of AI.”
Both Leaders and Non-Leaders Need to Step Up
The share of RAI Leaders in our survey sample has increased from 16% to 29% year over year. Despite this progress, 71% of organizations remain Non-Leaders. With significant risks emerging from third-party AI tools, it's time for most organizations to double down on their RAI efforts.
Widespread Reliance on Third-Party AI
The vast majority (78%) of organizations surveyed are highly reliant on third-party AI, exposing them to a host of risks, including reputational damage, the loss of customer trust, financial loss, regulatory penalties, compliance challenges, and litigation. Still, one-fifth of organizations that use third-party AI tools fail to evaluate those tools' risks at all.
Employing a wide variety of methods to evaluate third-party tools is an effective strategy for mitigating risk. Organizations that employ seven different methods are more than twice as likely to uncover lapses as those that use only three (51% vs. 24%). These methods include contractual language mandating adherence to RAI principles, vendor pre-certification and audits, internal product-level reviews, and adherence to relevant regulatory requirements and industry standards.
A Rapidly Evolving Regulatory Landscape
The regulatory landscape is evolving almost as rapidly as AI itself, with many new AI-specific regulations taking effect on a rolling basis. About half (51%) of the organizations surveyed report being subject to non-AI-specific regulations that nevertheless apply to their use of AI, including a high proportion of organizations in the financial services, insurance, healthcare, and public sectors. Organizations subject to such regulations account for 13% more RAI Leaders than those not subject to them. They also report detecting fewer AI failures than their counterparts that face no such regulatory pressures (32% vs. 38%).
CEO Engagement Is Key in Affirming an Organization’s Commitment to RAI
CEOs play a key role in both affirming an organization's commitment to RAI and sustaining the necessary investments in it. Organizations with a CEO who takes a hands-on role in RAI efforts (such as by engaging in RAI-related hiring decisions or product-level discussions, or by setting performance targets tied to RAI) report 58% more business benefits than do organizations with a less hands-on CEO, regardless of their Leader status. Furthermore, organizations with a CEO who is directly involved in RAI are more likely to invest in RAI than are organizations with a hands-off CEO (39% vs. 22%).
Five Recommendations for a Dramatically Changing AI Landscape
The report outlines five recommendations for organizations as they navigate the rapid adoption of AI and its associated risks.
“Now is the time for organizations to double down and invest in a robust RAI program,” says Steven Mills, chief AI ethics officer at BCG and coauthor of the report. “While it may feel as though the technology is outpacing your RAI program’s capabilities, the solution is to increase your commitment to RAI, not pull back. Organizations need to put leadership and resources behind their efforts to deliver business value and manage the risks.”
Download the publication here.
Media Contacts:
Eric Gregoire:
+1 617 850 3783
gregoire.eric@bcg.com
Tess Woods:
+1 617 942 0336
Tess@TessWoodsPR.com
At MIT Sloan Management Review (MIT SMR), we explore how leadership and management are transforming in a disruptive world. We help thoughtful leaders capture the exciting opportunities—and face down the challenges—created as technological, societal, and environmental forces reshape how organizations operate, compete, and create value.
MIT SMR’s Big Ideas Initiatives develop innovative, original research on the issues transforming our fast-changing business environment. We conduct global surveys and in-depth interviews with frontline leaders working at a range of companies, from Silicon Valley startups to multinational organizations, to deepen our understanding of changing paradigms and their influence on how people work and lead.
BCG partners with leaders in business and society to solve strategic challenges and capture growth opportunities. BCG was founded in 1963 as a pioneer in strategy consulting. Today, through close collaboration with clients, we pursue a transformational approach that aims to benefit all stakeholders, helping organizations build capabilities, achieve sustainable competitive advantage, and contribute to society.
BCG's global and diverse teams support clients with deep expertise in industries and management topics, along with a range of insights that challenge the status quo and drive corporate transformation. We deliver solutions through leading-edge management consulting, technology and design, and digital ventures. Through BCG's distinctive model of collaboration, from top management to the front lines, we help organizations generate significant impact and contribute to building a better society.
In Japan, BCG opened its Tokyo office in 1966 as the firm's second office worldwide, followed by offices in Nagoya in 2003, Osaka and Kyoto in 2020, and Fukuoka in 2022.