Every day, in ways both small and large, more businesses are using AI to change the way they engage with customers and employees. But AI also faces mounting public skepticism. Rapid technological change and opacity surrounding how and why algorithms arrive at their recommendations feed this mistrust. So do well-publicized AI failures, such as the targeting of minority communities in social services fraud investigations or the assignment of different credit limits for men and women of similar financial backgrounds.
Companies can address this trust gap in 2023 by adopting responsible AI. They must become transparent about when and how products leverage AI, how algorithms influence business decisions, and the steps they’re taking to mitigate bias, privacy violations, and other risks. Companies that navigate these challenges successfully can win the loyalty of customers.
So far, however, few organizations have gotten their arms around the task. In a recent global survey of more than 1,000 executives by BCG and MIT Sloan Management Review, an overwhelming majority—84%—said that responsible AI should be a top management priority. Yet just 16% of their companies have mature programs for achieving that goal.
There’s no technical silver bullet to address AI’s diverse, complex risks. Sure, there are best practices. In the US, for example, the White House’s recent Blueprint for an AI Bill of Rights mentions disparity assessments, privacy by design, and ensuring that data is representative and robust.
But achieving responsible AI requires more. It entails a holistic approach that addresses the full product life cycle—spanning data collection, risk assessment, testing and evaluation, training for end users, and more. It demands teams of individuals with different backgrounds, life experiences, and expertise. And it requires cooperation across the organization at all levels, from junior data scientists to the chief risk officer. This is the path to developing and operating systems that integrate human empathy, creativity, and care to ensure AI adheres to ethical imperatives while also achieving transformative business impact.
As a simple illustration of why it’s critical to combine human expertise and judgment with machine learning, consider an algorithm that effectively predicts the onset of a dangerous bacterial infection common in hospital settings. The upside is tremendous: The hospital can allocate physicians and nurses to minimize patient complications and prioritize urgent-care cases. But patient populations vary by geography, so predictors learned at one hospital may not hold at another, and prioritizing the wrong cases for urgent care could leave critically ill patients neglected. Data scientists can address these challenges by collaborating with doctors and nurses at specific hospitals. Involving these health care professionals helps ensure that algorithms are trained on representative datasets and that end users can interpret and act on the system’s outputs.
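One concrete way a cross-functional team might act on this is to check, before deployment, whether patient data at a new hospital actually resembles the data the model was trained on. The sketch below is a minimal illustration of such a check, not a production safeguard: the function name, the significance threshold, and the assumption that each site’s patient features arrive as pandas DataFrames are all hypothetical. Any flagged feature is a prompt for a conversation with clinicians, not an automatic verdict.

```python
import pandas as pd
from scipy.stats import ks_2samp


def flag_distribution_shift(train_df: pd.DataFrame,
                            deploy_df: pd.DataFrame,
                            alpha: float = 0.05) -> list[str]:
    """Flag numeric patient features whose distributions differ
    significantly between the training hospital and a new site."""
    shifted = []
    for col in train_df.select_dtypes(include="number").columns:
        if col not in deploy_df.columns:
            continue  # feature not collected at the new site
        # Two-sample Kolmogorov-Smirnov test: a small p-value suggests
        # the two hospitals' patients differ on this feature.
        _, p_value = ks_2samp(train_df[col].dropna(),
                              deploy_df[col].dropna())
        if p_value < alpha:
            shifted.append(col)
    return shifted


# Hypothetical usage: features such as age or white-cell counts that
# shift between sites warrant review with clinicians before the model
# is trusted to prioritize urgent-care cases at the new hospital.
```

A check like this doesn’t replace clinical judgment; it gives data scientists and health care professionals a shared, specific starting point for deciding whether the model needs retraining before it influences care.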
Teams that pair engineers with end users who have domain expertise can also better interpret the recommendations of AI models. A retail strategy executive working with data scientists, for example, can reconcile an algorithm’s recommendations for new store locations based on historic profitability with the company’s desire to engage new customer segments.
Such an approach can also deliver better products. Take cameras, which have long failed to deliver high-quality images for people with darker skin tones. Google did a nice job of addressing this problem by bringing cinematographers and videographers with experience filming people of color into the product development process. Their feedback contributed to a more inclusive smartphone that appeals to a broader customer base.
As many organizations and customers have discovered, AI done wrong can cause considerable harm. But the opportunities presented by responsible AI extend beyond risk mitigation. Adopting it will be an important step toward strengthening trust between companies and customers, and building the foundation for more valuable relationships.