" "

AI is inherently about learning. That’s what large language models such as ChatGPT and Bard do, after all: they scan data sets to identify patterns and continuously improve their capabilities.

It’s no surprise, then, that education is an area where some of AI’s greatest potential lies—as well as some of our most profound questions about the technology. What will become of traditional education now that generative AI can answer almost any question in a matter of seconds? Will certain skills that once felt essential to the development of human knowledge become irrelevant? And as the range of advanced tasks AI systems can perform rapidly expands, what do we as humans have to offer?

The uncertainties abound. How, then, can we assuage our fears about LLM-assisted learning, expand employment opportunities in an AI-powered world, and ensure responsible AI implementation? More learning.


A Potential Boon for the Classroom

Concerns over generative AI in education often focus on the potential for cheating—how easy it would be, say, for a student to ask ChatGPT to write a ten-page essay on Hamlet. But however well founded those concerns may be, they overlook the enormous potential GenAI holds for improving learning at all levels. GenAI-powered tutoring chatbots can hyperpersonalize lessons for each student, adjusting the format, degree of difficulty, and pace of instruction based on the individual and offering on-demand tutoring and feedback. This can be particularly useful in reaching students who fall outside the mean in terms of learning progress, or who might not respond to traditional teaching methods.

Teachers, meanwhile, can use GenAI to enhance and accelerate the course-planning process—developing a syllabus, curating a list of relevant resources, and writing detailed lesson plans with near-instant speed. GenAI can also help teachers generate performance reports for students across multiple classes and prepare personalized recommendations on next steps. And it can assist with nonteaching activities, such as creating schedules, coordinating with colleagues, or composing emails to parents—freeing up time to focus on student engagement and support.

Colleges can use GenAI to hyperpersonalize their recruitment strategies; a prospective student with an interest in sustainability programs, for example, could receive personalized outreach content that highlights the college’s environmental science programs, campus recycling and composting activities, and related advocacy groups. Such outreach can continue throughout a student’s journey, with GenAI-powered chatbots guiding them through the registration process, offering course recommendations, and—once coursework begins—identifying at-risk students for early intervention.

From K-12 to graduate school, educational institutions will need to change their approach to assessing student progress. Research and information synthesis will give way to critical thinking and practical problem solving; creativity, leadership, ethical decision making, and new ways of interpreting data will take precedence. These higher-order skills—combined with an ongoing emphasis on responsible use of AI tools—will help prepare students for a workplace undergoing its own GenAI revolution.

A Lifetime of Learning

In the age of generative AI, learning won’t end with graduation. According to the WEF’s 2023 Future of Jobs report, 23% of current jobs are expected to change by 2027, with 69 million new jobs created and 83 million eliminated; meanwhile, 44% of workers’ core skills are expected to change in the next five years. And 81% of organizations surveyed plan on investing in learning and on-the-job training programs. It’s no wonder that “curiosity and lifelong learning” and “resilience, flexibility, and agility” are among the top ten most important skills organizations will be looking for in the coming years, according to the WEF.

Employees won’t merely need to be upskilled; they’ll need to be reskilled, and on a massive scale. Some employees will have to change their occupations entirely. As a recent report from the Digital Data Design Institute at Harvard’s Digital Reskilling Lab and the BCG Henderson Institute shows, reskilling is a strategic imperative for companies—both as a part of their employee value proposition and as a means of accessing the talent necessary to maintain competitive advantage.

Personalized Learning at Scale

With the coming of generative AI, employee upskilling and reskilling programs have become a strategic imperative. Fortunately, GenAI—for all the disruption it creates—can also be a powerful tool for organizational learning.

BCG U has created a prototype generative-learning platform, called BCGenie, that uses generative AI models to deliver a personalized, adaptive learning experience at scale. Integrating existing technologies, proprietary LLMs, and client-specific content, BCGenie can help organizations maximize ROI in upskilling and reskilling programs by providing employees with self-paced learning and hyperpersonalized, up-to-date content—all on demand and with materials tailored to an individual’s preferred style of learning.


Learning when to use GenAI is just as essential as learning how to use it. In a recent experiment conducted in partnership with Harvard Business School, MIT Sloan School of Management, the Wharton School at the University of Pennsylvania, and the University of Warwick, BCG found that the use of GenAI improved performance by 18% on tasks for which the technology was well-suited; for tasks it had not yet mastered, however, performance declined by an average of 23%.

And while nearly all subjects saw a boost in performance when using GenAI for creative ideation, our experiment found that the relatively uniform output of the technology can reduce a group’s diversity of thought by 41%. What’s more, roughly 70% of survey participants expressed concern that, over time, extensive use of GenAI may negatively impact their creative abilities. Organizations will need to be mindful of GenAI’s tendency to produce homogeneous results, as well as the potential effects of the technology on their employees’ professional identities.


No Shortage of Other Risks

As students, educational institutions, workers, and organizations learn how to use GenAI and other AI tools, they’ll need to keep a host of additional challenges in mind. Among them:

Plagiarism isn’t the only peril. We will, of course, need to build new tools to help teachers identify GenAI-generated content and ensure that proper sources are cited; at the same time, teachers themselves will need to determine which uses of GenAI are permissible and which constitute cheating. But GenAI presents other educational risks, too. Its powerful analytic capabilities could unintentionally create a “predictive ceiling” for some students, limiting their upward trajectory. And on a larger scale, access remains a challenge: schools with greater resources are more likely to adopt GenAI tools quickly, which could widen the gap with schools that face funding shortages.

Children are especially vulnerable. Just as AI holds extraordinary potential for advancing education, it presents an equally profound risk of compounding the dangers children already face online, including bullying, inappropriate content, and digital addiction. Companies must build safety and privacy protections into any AI-powered software intended for use by children.

Bias is an ever-present concern. AI systems are only as reliable as the data on which they are trained; AI outputs can therefore reflect the biases inherent in the underlying data. Some facial recognition programs, for example, have had difficulty recognizing the faces of people of certain ethnic backgrounds because the data used to train the AI included few or no images of people from those backgrounds. As GenAI systems become more prevalent and powerful, humans will need to be taught to recognize such biases and correct them.

Machines are not infallible. GenAI has shown a well-publicized tendency to produce misinformation. Even so, humans may be reluctant to question the outputs GenAI and other AI systems produce, especially as the technology grows more sophisticated. Regardless of how impressive these systems become, humans must remain in the loop, ensuring that all AI tools are deployed responsibly and that their outputs are accurate.



As we continue to work with and learn from AI, new risks will surely emerge—as will new questions about what the technology means for the future of human intelligence. We’ll need to keep asking those questions, and applying our empathy and judgment as we pursue the answers. If we do so, AI’s expanding capabilities can create new opportunities to expand our own knowledge and creativity.
