Generative AI
Generative artificial intelligence is a form of AI that uses deep learning models, such as large language models and GANs, to create content. Learn how it can disrupt or benefit businesses.
Concerns about the potential risks of artificial intelligence have been discussed for years. But recent articles about generative AI tools like ChatGPT have set off a corporate firestorm in a matter of weeks. What’s changed?
Jeanne: The huge change is that tools like ChatGPT are democratizing AI. This represents a fundamental shift in how these tools are used and the impact they can have. Before, AI was generally created by a highly skilled team of people whom you hired to build proprietary models using large-scale computing power and huge data sets. Or it may have been something you bought from a very specialized vendor. AI felt more controlled and was applied more in reaction to a known challenge or opportunity. It was used for very specific, narrowly defined applications. Now you have proactive AI: machine learning that can create original content. And the tools are available to everyone. For a CEO, this can be incredibly exciting. Generative AI has the potential to dramatically accelerate innovation and completely change work by eliminating many of the rote, tedious tasks people do every day.
But it can also be terrifying.
How so?
Jeanne: There are all the risks and misuses we're already familiar with: bias, violation of privacy, misinformation, impersonation, and theft of intellectual property, to name just a few. But now more executives are becoming aware of the risks within their own organizations as employees experiment with generative AI. We've heard of people taking very confidential company information, uploading it into an external site like ChatGPT, and producing a PowerPoint deck. Somebody could upload notes of the conversation we're having now into an open tool and have it spit out an article before we're finished. Or employees could be relying on information an AI bot convincingly presents as fact, leading them to make bad decisions or do harm.
Tad: I can be even sharper. Shadow AI—development that is happening around the organization that you may not know about—was always a challenge. But you generally didn’t worry about it happening in ten minutes. AI typically was built along a predictable path, similar in some ways to how software development works. CEOs hopefully knew about it because it was a serious investment.
Now all the formidable barriers to development have tumbled. You no longer need specialized talent. You no longer need proprietary data, computing power, and a complex interface. I’d argue, with these new publicly available tools, it’s likely no longer even possible to know or catalog everything that is going on with AI experimentation and development across your entire organization.
How are corporate leaders reacting?
Tad: Right now, executives are having their eyes opened. They’re seeing articles about what generative AI could possibly do and saying, “I need to learn much more.” So they’re setting up task forces to understand both the immense potential of AI and how the threats may apply to them. But I don’t think many understand the depth of this potential revolution, how fast it is moving, or the implications. To be honest, I’m not sure anyone does yet. There is a lot of rampant speculation that may be either too ambitious or not ambitious enough.
How well prepared are companies to mitigate these risks? Don’t most already have responsible AI (RAI) programs?
Jeanne: It varies. The small subset of companies for which AI is core to the offering, say an online marketplace that provides very personalized suggestions, is further along. Their executives understand AI's power, since it's central to their business. So they are hopefully well grounded in ethical AI. Then there's everybody else. If they use AI at all, it's for very specific use cases. So they're less familiar with RAI, and the risks are much higher. In a recent survey we conducted with MIT Sloan Management Review, more than 80% of global respondents agreed that RAI should be a top management priority. But only around half have actually put some kind of program in place. And fewer than 20% said they have a fully implemented RAI program.
Tad: My guess is that even that 20% has done this only for their known AI. Those with a handle on all the AI underway inside their companies are probably fewer, especially as you define AI more broadly. We recently spoke with the head of enterprise risk for a client. He felt very comfortable that they had good RAI around their corporate-sponsored lighthouse AI development projects. What worries him is all the AI he doesn’t know about. With ChatGPT and generative AI, that kind of activity has just exploded.
What’s required for a successful RAI program?
Jeanne: First off, you don't need to start erecting new walls. The whole point of RAI is to harness the power of AI without causing harm or leading to unintended consequences. A lot of this is about taking existing risk management tools and applying them to new technologies. It starts with being very clear on your basic ethical principles and defining guardrails. For instance, your company may have a "no-fly zone" for uses of AI that fall outside core corporate values. Next, you need the right governance. Have a person at the executive level whose full-time job is to ensure that responsible AI principles are applied as you deploy these capabilities. This person needs to be accountable, visible, and properly resourced, not somebody five levels down who's doing this as a side gig. You also need lots of education, so that people throughout your organization understand the guardrails, and the reasons for them, whenever they use AI. Then you need the right tools and processes for the nitty-gritty of preventing risk, such as code libraries, testing, and quality control, that ensure your AI is working as intended.
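To make that last point concrete: the nitty-gritty tooling can start small. Below is a minimal sketch, in Python, of the kind of pre-submission guardrail a company might run before any text is sent to an external generative AI service. The marker list, the check_before_external_ai function, and the logger name are hypothetical illustrations, not a reference implementation or anything the speakers prescribe.

import logging
import re

# Hypothetical markers that indicate confidential material.
# A real program would maintain this list centrally and test it regularly.
CONFIDENTIAL_MARKERS = [
    r"\bconfidential\b",
    r"\binternal only\b",
    r"\bdo not distribute\b",
]

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("rai.guardrail")

def check_before_external_ai(text: str) -> bool:
    """Return True if the text may be sent to an external AI service.

    Blocks and logs any text that matches a confidentiality marker,
    so the RAI owner has a record of near misses.
    """
    for pattern in CONFIDENTIAL_MARKERS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            audit_log.warning("Blocked external AI request: matched %r", pattern)
            return False
    return True

# Example: this draft would be blocked before reaching a third-party tool.
draft = "CONFIDENTIAL - Q3 acquisition shortlist and deal terms."
if check_before_external_ai(draft):
    print("OK to send to the external service.")
else:
    print("Blocked: route this request through the approved internal channel.")

A pattern list this crude is only a starting point, but it illustrates the broader idea: much of responsible AI reuses familiar software practices, such as testing, logging, and code review, rather than demanding entirely new infrastructure.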
How has generative AI changed the way companies should approach responsible AI?
Tad: Until recently, many companies probably focused heavily on AI governance. They counted on their risk and legal organizations to catch programs in development before they unveiled something to the world that might cause damage. With generative AI, that won't be enough. You need responsible AI on steroids. RAI needs to be built into the culture and fabric of your organization, from the low-level staffer who just left a meeting with notes to summarize, to the head of R&D who is trying to use AI to revolutionize product development. You also have to move fast. In this space, that's now measured in weeks. If you haven't already sent a message to all your employees on the appropriate use of third-party generative AI services, for example, you're very late and at risk. And if you have no programs at all to ensure that AI is used responsibly, you're really in trouble.
Aren’t governments working on regulations that will soon address all this?
Jeanne: Legislation is coming at the regional, national, and even city level. But rather than wait for it, companies should get ahead of it. Good values and principles alone aren't enough; if your company states them but does nothing, you're still likely to run into these ethical issues when you use AI. The greater attention paid to purpose and ESG also makes responsible AI hard to ignore. And if industry can implement RAI and self-regulate in certain ways, it can relieve some of the pressure for truly draconian regulation.
Tad: Jeanne’s right. There’s a risk of massive backlash. If AI is rolled out irresponsibly and results in all these negative use cases—or just failures of quality control—legislatures will react. They could take this amazing tool for innovation and shut it down. That’s why it’s so important to get RAI right quickly. This is a time for industry to lead. Indeed, it has an ethical imperative to do so.
ABOUT BOSTON CONSULTING GROUP
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we work closely with clients to embrace a transformational approach aimed at benefiting all stakeholders—empowering organizations to grow, build sustainable competitive advantage, and drive positive societal impact.
Our diverse, global teams bring deep industry and functional expertise and a range of perspectives that question the status quo and spark change. BCG delivers solutions through leading-edge management consulting, technology and design, and corporate and digital ventures. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, fueled by the goal of helping our clients thrive and enabling them to make the world a better place.
© Boston Consulting Group 2024. All rights reserved.
For information or permission to reprint, please contact BCG at permissions@bcg.com. To find the latest BCG content and register to receive e-alerts on this topic or others, please visit bcg.com. Follow Boston Consulting Group on Facebook and X (formerly Twitter).