Managing Director & Senior Partner, Director of the BCG Henderson Institute
San Francisco - Bay Area
Related Expertise: Digital/Technology/Data, Global Business, AI
Kai-Fu Lee is the founder and CEO of Sinovation Ventures, a Chinese technology venture investment firm. He was named one of Time magazine’s 100 most influential people in the world in 2013. Before founding Sinovation Ventures, he was president of Google China and previously held executive positions at Microsoft, SGI, and Apple.
While in Vancouver attending the TED conference, Lee sat down with Martin Reeves, director of the BCG Henderson Institute, to talk about the impact of artificial intelligence on companies, industries, and nations. Drawing from his new book AI Superpowers: China, Silicon Valley, and the New World Order—which will be released in September 2018—he discussed the case for the regulation of AI applications, how AI affects company and national competitiveness, and how CEOs might be underestimating the effect of AI on the future of work. A transcript of the conversation follows.
We hear all sorts of extreme predictions about the possibilities for AI. How do you think AI will reshape companies? How will it change what a company is and does?
AI is one of those technologies that's applicable everywhere. For most companies that have data, it can immediately bring cost savings or ways to create efficiency and make more money. A bank can use it to improve customer targeting, acquisition, and conversion, lower the default rate, and detect credit fraud. The applications are endless. The requirement is that the company has to have a lot of data. That's the “big data AI” that is currently pervasive.
Going forward, I think the proliferation of sensors will become an incredibly important source for new data that didn't exist before. This, in turn, will generate a lot of brand-new applications. Imagine what we can do in education if schoolroom activity can be captured. Or in airport security, if we can detect potential terrorists among regular travelers.
Finally, there will be autonomous AI, which will power robots and autonomous vehicles. It will turn existing products, companies, and business models completely upside down, requiring companies to rebuild themselves from the ground up.
So you are saying that big data AI is already with us but sensor-based AI and autonomous AI aren't yet? If so, when will the last two technologies come of age?
They are already arriving. Amazon Echo is an example of that. In China, four computer vision companies lead the world in the implementation of perception-based AI. It's used for things like security: recognizing and verifying identity at airports and on your phone. It's used in applications like Snapchat. Anything with a camera, or a multitude of cameras, can use it, and faces are just the beginning, because once you recognize faces you can begin to understand movement and intention, moving on to full-scene understanding. That's the ultimate goal.
Autonomous vehicles are also at the trial stage at this point, but AI works better with lots of data. With the gathering of more data, whether by Waymo or the Chinese companies, the technology is progressing very fast. There will be some low-hanging fruit: applications that will arrive quite soon. For example, autonomous trucks on highways will be deployed faster than passenger cars on city roads, and there will be many other non-road examples, such as airport parking, that don't require a full understanding of traffic. The beauty of this is that you're gathering data with these early applications, improving the technology as you go along, so you don't have to do it all at once.
Speaking of autonomous vehicles, what does the fatal Uber accident tell us about trust in algorithms?
I think the Uber accident tells us two things. One is that Uber seems to have been unprepared for launch. I think we need to be watchful to ensure that technology companies are fully ready when they deploy—no one is forcing you to launch on city roads at night. You could launch on easier roads, on highways, during the day, to accumulate experience and data. So we need to have stepwise adoption.
But the second thing is, even if Uber were perfectly ready, there would still be accidents. I think when humans look at single incidents and point the finger at an entire technology, that is not fair. We need to benchmark the accident rates of autonomous vehicles against those of human drivers and make sure we can sleep at night knowing that, on average at least, the technology performs better than humans.
One of the interesting things about AI is that it fits perfectly what economists call a "trust good"—a good that a normal person cannot reasonably be expected to understand and judge directly. Even an expert may not know why a particular decision has been reached by AI. So it raises the problem: how do you know if you can trust the AI, given that it will often encounter situations it hasn't specifically been tested for? It's interesting that during the recent Facebook controversy, Facebook effectively said, "Maybe we should be regulated." What's your view of regulation and other measures that could help us trust AI for critical applications?
I think regulation is clearly needed. But it should be on an application-specific basis. You can't regulate a technology in a vacuum. So AI for autonomous vehicles should absolutely be regulated, but not AI in general. We will need to apply domain-specific expertise in each area, both to regulate effectively and to avoid expecting AI to do what it cannot do.
A related point is that anthropomorphism here is dangerous: we should not expect AI to “explain” everything as humans do—it will not always be possible. I do agree we should do our best to have explainable AI, but we can't expect AI to give reasons like humans. We should remind ourselves that human reasons are not always good, accurate, or truthful either!
It’s sometimes suggested that airline safety is a good model for AI: air travel used to be less safe, but now it is one of the safest modes of transport, due to regulation, accident investigation, iterative learning, and improvement of the technologies. Do you think it's a good precedent?
Yes, that would be my favorite, too. Because it is ultimately about safety, and secondarily about efficiency. Safety first, efficiency second. Safety is easy to measure—it's basically the number of injuries.
In this way, we can have statistically safer AI, but again we should not expect that all AI decisions make sense to humans. If AI is twice as safe as humans but can't explain 10% of its actions in a human-like manner, I hope we have the collective wisdom to accept that and not demand that machines do all the explaining that humans do.
You talked about new functionalities and the improved efficiency of business models employing AI. But, as with any information technology, we must ask not only about functionality but also about competitive advantage. Does AI change the nature of competitive advantage, or does it merely raise efficiency equally for all players?
If you look at it at a micro level, you used to do credit card fraud detection without AI; now you do it with AI. You used to do customer service without AI; now you do it with AI. So within each silo it will have incremental benefits. But I think the biggest benefit will come when you rethink the entire business model. That may or may not be possible in every domain. But take autonomous vehicles as an example—it's not a new button on your Tesla, it is really a whole new model of transportation. So the total benefit of AI cannot be reached until we rethink transportation as a whole.
And will that benefit be available to all companies in an industry, or will it be a winner-takes-all phenomenon?
It will tend to have a winner-take-all effect.
Because of data?
Yes, because of data. But not only data—there will be rules and regulations in different countries that will also influence which companies will succeed. There will also be first-mover advantages, just like in the internet business. It will differ by industry. Uber will not dominate the world, for example, because geographies can be relatively separate with respect to transportation. Google and Facebook, on the other hand, will tend to be dominant, just because of the purely digital nature of their businesses. Sometimes the entire industry will be reshaped, and new players will win. In the autonomous-vehicle industry, for example, I think we'll end up with electric cars and a sharing economy—people will not buy cars anymore. Cars will be available on demand. The cost will be low. Safety will be high. There will be little pollution. And people will migrate to the new model. Society may eventually outlaw people driving on the roads because they’re not safe anymore!
A number of national leaders have said that AI is not just a new technology, it's the technology that will determine the competitiveness of nations. Do you agree, and how will AI shape the competitiveness of nations?
I agree and disagree. On the one hand, I feel AI is a very collaborative area. AI academics are happy to publish in real time and even put up open-source code for others to use. So I see a huge amount of international collaboration. Scientists in the community demand repeatability from AI, which is wonderful because it forces people to be open with their code.
But it’s also true that AI will cause some economies to move forward more than others. The US has Google, Facebook, and Microsoft, which gives it a huge advantage, and China has its own set of leading companies. So it's hard to say there is not fierce competition. But we shouldn’t be too alarmist about this. AI is largely an enabler, with some military applications, as compared to, say, nuclear technology, where military applications are more dominant.
I do think there will be only a few countries that do extremely well with AI, because two things will drive competitiveness. First, the powerful countries will get more powerful, because they have more data. So the Googles and the Baidus will have a natural advantage themselves, and therefore so will the countries to which they pay taxes. I think this phenomenon will continue—the strong getting stronger, even more so than ever before in history. This is because oligopoly used to be driven by brand, product, user loyalty, competitive behavior, and the like. But now it's additionally reinforced by the virtuous cycle of more data driving better products and algorithms, which makes it harder and harder for newcomers to build a strong position. So the countries that already have the giants will have a sustained leadership position.
The second factor is that the countries that have structurally more data will have a natural advantage. Our national data environments are very different: people in the US use Facebook; people in China use WeChat. The data are not aggregated between the two user groups, so whichever countries have more people, more data, and more lenient use of data—I'm not saying it's entirely a good thing, but it's the way it is—will have an advantage. From all of these considerations, it’s clear that the US and China will be massively advantaged.
I was just about to ask you about the geopolitical implications. Not just with AI, but with digital technologies in general, a handful of American companies are dominant in most geographies, and the main exception is China, where you have your own set of powerful companies. And this is already creating tensions around data privacy, taxation, and national competitiveness. Now if we add AI to the mix, how does history play out at the levels of technology, economics, and politics?
I'm not an expert in politics, but the economics will drive us to a world of American giants versus Chinese giants. I think it would be naïve to assume the Chinese companies are content with just being in China. Certainly China is a huge market, but no self-respecting company would be content without global ambition. So we should fully expect, from a purely economic standpoint, that American companies will need to face a new world. In the old world, Wintel dominated—I mean the whole world, there was no exception. Then American internet startups dominated most of the world—Facebook, Google, and, in the early days, Yahoo. And it might seem reasonable for American companies to assume that the world is theirs to take. But that has all changed dramatically, because the Chinese technologies, entrepreneurs, and products are arguably as good as, and in many cases perhaps better than, the American equivalents.
So you see our digital future being more of a multipolar world?
Yes, but there will be natural patterns and inclinations. I think the US will naturally continue its hegemony in the US, in English-speaking countries, and in Western Europe. There will be little chance for China in those countries. China will make strong inroads into Southeast Asia, because it's closer in culture and demographics, with a rising generation of young people who have lower incomes but lots of time and who grew up with mobile technology. Southeast Asia, Islamic countries, Africa—those will be China's strongholds. South America is the unknown. We'll see. And that's pretty much the world.
American companies are so used to having the whole world, and I think many of them still think that way. Because they feel that way, they imagine they can take their time with their global expansion plans: go after the rich countries first, make the most money, and then take time to develop the Middle East and Southeast Asia. For Facebook in Indonesia and India, there was still time, and there was no Chinese competitor. But the whole game has now changed in this respect.
Last question. For any new technology there are speculations, misunderstandings, and exaggerations. When you hear CEOs talking about AI, what it can do, and what they should do, what do you think are their most dangerous misconceptions?
Most are innocuous and will naturally correct over time. I think the most dangerous misconception is that the human-AI combination will play out symbiotically and harmoniously over time. Namely: "Oh, it's a great tool, we'll give it to that department. And if we need 10% less workforce in that department, we'll train them to go to another department." AI is an engine that will continue to improve and beat humans at all routine tasks. So there is no tomorrow for people who are doing routine tasks. I think CEOs need to be aware that, based on existing job categories and tasks, they're going to be looking at a significant reduction in workforce. In some cases it might come tomorrow; in some cases, five or ten years from now.
CEOs tend to build on their past experience. When typewriters came, we laid off some people and retrained others. When calculators or computers came, we had some rotation, we retrained, there were some layoffs, 5% to 10%, but we took care of that. But this time, it's going to be a much larger number. And the giant tech companies are saying, (a) this is symbiotic, we make you better at your work, and (b) you should think about retraining your workforce and everything will be okay. Yes, I believe in lifelong learning, but you can't retrain someone who is in telesales to be a director of PR, because the latter is the job that will remain. In general, empathetic and complex jobs are safest from disruption.
Yes, in the past, you could shuffle people around. When you no longer needed telephone switchboard operators, you could teach them to be customer service reps, and so on. But now, the great majority of routine tasks will be replaced, and the tools will actually replace people. It's not always going to be a symbiotic outcome. When the critical competency threshold of AI is reached, the job is done. Keeping your 5,000 customer service reps and using them symbiotically with your AI tools to gain 0.1% in customer satisfaction is not a reasonable economic proposition. Replacing 80% of them and using 20% as points of escalation is clearly the way to go.
So thinking ahead, you need to plan for what to do with these displaced people. And in terms of retraining and planning for this kind of sudden change in workforce, I think CEOs and CHROs are not ready for that at all.
But let me end on a brighter note. Being liberated from routine tasks could be a wonderful thing for mankind. We need not only to change work at a technical level but to entirely reconceive the nature of work itself.
Kai-Fu Lee, thank you very much for sharing your ideas with us on the impact of AI.
The BCG Henderson Institute is Boston Consulting Group’s strategy think tank, dedicated to exploring and developing valuable new insights from business, technology, and science by embracing the powerful technology of ideas. The Institute engages leaders in provocative discussion and experimentation to expand the boundaries of business theory and practice and to translate innovative ideas from within and beyond business. For more ideas and inspiration from the Institute, please visit Featured Insights.