""


AI Brings Science to the Art of Policymaking

By Jaykumar Patel, Martin Manetti, Matthew Mendelsohn, Steven Mills, Frank Felden, Lars Littig, and Michelle Rocha

Governments have started to rely on artificial intelligence (AI) to deliver services and improve operations, but its use in shaping policy is just beginning. The foundations of policymaking—specifically, the ability to sense patterns of need, develop evidence-based programs, forecast outcomes, and analyze effectiveness—fall squarely in AI’s sweet spot.

AI will not replace policymakers, but it can enable a comprehensive, faster, and more rigorous approach to policymaking in the short run. More broadly, AI can deliver on the promise of a government of the future that is more responsive and leaves no one behind. As AI enters the mainstream, these are tall but achievable aspirations for public policymaking.

AI is not a risk-free option. Its algorithms—the engines that generate intelligence out of raw data—can reinforce existing discriminatory practices. And its tools, such as facial recognition, can violate privacy protections. The solution to these shortcomings is to abide by the principles of what we call “responsible AI,” such as accountability, transparency, and fairness, rather than to abandon a capability with such potential. (See the sidebar “The Principles of Responsible AI in the Public Sector.”)

The Principles of Responsible AI in the Public Sector

In the recent past, several well-publicized lapses have illustrated the unintentional harm that can befall individuals or society when government AI systems are not designed, built, or implemented in a responsible manner. Notably, after college entrance exams were canceled because of the pandemic, the UK government used an algorithm that assigned grades based in part on schools’ past performance. The system downgraded the results of nearly 40% of students and led to accusations that it was biased against test takers from disadvantaged socioeconomic backgrounds.

The general problem is that AI engines generate insights based on historical data that may have built-in bias. A housing or criminal justice policy built from an AI engine that has been fed data based on past discriminatory practices will unwittingly carry them forward.

Responsible AI is meant to help policymakers be thoughtful about risks, rights, ethics, and buy-in from both civil servants and citizens. It is particularly critical for the public sector to follow value-based principles when applying AI to policies that have significant impact on citizens’ lives. BCG’s responsible AI principles provide a comprehensive approach to protecting against these shortcomings. (See the exhibit below.)


These principles are generally applicable, but their relevance varies across the policymaking stages. During the identification stage, for example, the principles of social and environmental impact and of human plus AI are especially important. Keeping humans in the loop serves as a sanity check, improving the overall quality of policies, their cost-benefit calculations, and their societal value. During implementation, the principle of fairness and equity ensures that policymakers avoid perpetuating unconscious bias or discriminatory practices.

Why Policymaking Is So Complex

By its nature, policymaking is lengthy, political, and bureaucratic. Processes meant to ensure public input are often most accessible to lobbyists and others who put private interests above public ones. And it’s time-consuming to marshal evidence in support of proposed policies or to prove their effectiveness. The policies themselves tend to be built around topics such as health or education that are narrower than the broad socioeconomic problems they are meant to address. Inefficiency, overlap, and contradiction abound.

Balancing competing political forces during policymaking will never be tidy. But data is the raw material and AI the tool that can allow policymakers to generate more effective, targeted, and cost-conscious policies that actually improve people’s lives.

AI and the Policymaking Cycle

Although governments want to use evidence to inform their policy decisions, that evidence is often incomplete, partially understood, or poorly integrated into decision making. AI can enable both understanding and integration. Policymaking is not a single activity but a cyclical five-stage process of identification, formulation, adoption, implementation, and evaluation. (See the exhibit.) At each stage, AI can help policymakers generate more value and impact.

Identification. AI tools can rapidly synthesize large amounts of data and detect patterns. This capability is especially useful during a crisis, such as the current coronavirus pandemic, an environmental disaster, or food shortages. Machine learning can generate insights in near real time, allowing public-sector leaders and policymakers to take swift action.

In Australia, the Victoria State Government’s “syndromic surveillance” program tracks reported symptoms and patient characteristics in hospitals. Within its first four months of use, the tool helped state officials identify six public health concerns. Such early-warning tools are particularly valuable in the COVID-19 era.
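The pattern-detection step at the heart of a surveillance program like this can be illustrated with a minimal sketch: flag days when reported symptom counts deviate sharply from a recent rolling baseline. The counts, window size, and threshold below are all hypothetical, and a production system would use far richer models.

```python
from statistics import mean, stdev

def flag_anomalies(daily_counts, window=7, threshold=3.0):
    """Flag days whose count exceeds the rolling baseline by `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (daily_counts[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical daily counts of a reported symptom; day 10 spikes sharply.
counts = [12, 14, 11, 13, 12, 15, 13, 14, 12, 13, 42, 13]
print(flag_anomalies(counts))  # the day-10 spike is flagged for review
```

The point of the sketch is the division of labor: the algorithm surfaces unusual patterns in near real time, while public health officials decide whether a flagged day is a genuine concern.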

Formulation. Governments routinely try to forecast the costs, benefits, and outcomes of policy options. AI can turbocharge this analysis by providing speedy insights on much smaller subsets of populations and geographic regions.

In Quebec, economic development specialists are leveraging AI tools to develop a more-nuanced understanding of economic, labor, and education differences among subregions. By analyzing government, private-sector, third-party, and social media data, the government can localize and fine-tune economic development plans faster and more affordably than previously possible. Meanwhile, a Middle Eastern government has applied pattern-sensing tools to global trade data to improve the country’s balance of payments and establish more-advantageous trade policies.
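In stripped-down form, that kind of subregional analysis amounts to clustering regions on a few indicators so that development plans can be tailored per cluster rather than one-size-fits-all. The sketch below groups made-up subregions by unemployment rate and median income using plain k-means; the data, features, and initial centroids are all illustrative assumptions.

```python
def kmeans(points, centroids, iters=10):
    """Plain k-means: assign each point to its nearest centroid, then recompute centroids."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
            clusters[nearest].append(p)
        centroids = [tuple(sum(vals) / len(vals) for vals in zip(*c)) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical subregions described by (unemployment rate %, median income $k).
subregions = [(4.0, 62.0), (4.5, 60.0), (9.0, 41.0), (8.5, 43.0)]
centroids, clusters = kmeans(subregions, centroids=[(4.0, 62.0), (9.0, 41.0)])
print(centroids)  # one average profile per cluster of similar subregions
```

Each resulting centroid is an average economic profile, and a distinct policy package can then be formulated per cluster.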

Adoption. AI can play an important role in this stage, which is historically political. A legislative body passes a law; a regulatory agency issues a new rule. Armed with insights generated using AI during the prior stages, regulators and lawmakers will be better equipped to make more-informed decisions. They will have a firmer understanding of the issues, allowing them to better forecast a policy’s potential impact.

Implementation. A policy is only as good as its implementation. AI tools can help get policies implemented more efficiently through automation and near real-time analysis of feedback from the field. In New Orleans, for example, an emergency services agency wanted to create a data-driven policy to improve ambulance response times. The city relied on AI to position ambulances closest to where they were most needed. It took special care to design the algorithms to account for historical practices that had left poor neighborhoods with slower service.
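A toy version of that placement problem can be framed as a p-median optimization: choose station sites that minimize call-weighted response distance, where the weights can be adjusted upward for historically underserved neighborhoods. The coordinates, call weights, and candidate sites below are invented for illustration, and the brute-force search only works at this tiny scale.

```python
from itertools import combinations

def place_stations(calls, candidates, k):
    """Choose k station sites minimizing total call-weighted Manhattan distance (p-median)."""
    def cost(stations):
        return sum(w * min(abs(cx - sx) + abs(cy - sy) for sx, sy in stations)
                   for (cx, cy), w in calls)
    return min(combinations(candidates, k), key=cost)

# Hypothetical call demand on a city grid; weights upweight an underserved neighborhood.
calls = [((1, 1), 5), ((2, 1), 5), ((8, 8), 3), ((9, 8), 3)]
sites = [(1, 1), (5, 5), (9, 8)]
print(place_stations(calls, sites, k=2))  # picks one site near each demand cluster
```

Raising a neighborhood's call weights pulls stations toward it, which is one concrete way an algorithm can be designed to correct, rather than reproduce, historical service gaps.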

Evaluation. Policies are also only as good as the post-implementation tweaks that fix what is not working in the field. AI tools can speed up this assessment by identifying where a policy may be falling short or vulnerable to fraud. In the UK, AI is helping government officials estimate the impact of a carbon tax on emissions and overall business productivity. Making that assessment is difficult because of the absence of a “counterfactual”: knowledge of what would have happened without the tax. But AI simulations help optimize tax rates to both curb emissions and maintain productivity.
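In miniature, that kind of simulation is a search over candidate tax rates against a model of how emissions and productivity respond. The response curves and trade-off below are purely stylized assumptions, not calibrated to any real economy; a real exercise would fit these curves to data.

```python
def simulate(rate):
    """Stylized responses: emissions fall linearly with the tax; productivity erodes quadratically."""
    emissions = 100 * (1 - 0.6 * rate)          # index; lower is better
    productivity = 100 * (1 - 0.4 * rate ** 2)  # index; higher is better
    return emissions, productivity

def best_rate(steps=100):
    """Grid-search tax rates in [0, 1] for the best productivity-minus-emissions trade-off."""
    def objective(rate):
        emissions, productivity = simulate(rate)
        return productivity - emissions
    return max((i / steps for i in range(steps + 1)), key=objective)

print(best_rate())  # the rate balancing emission cuts against productivity loss
```

Because the counterfactual is baked into the model (setting the rate to zero recovers the no-tax world), policymakers can compare outcomes that could never be observed side by side in reality.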

How to Start

Since governments are already experimenting with and integrating AI into service delivery and operations, the time is ripe to use it to support good policymaking. Different levels of government have unique advantages in this domain. National governments are likely to have access to large data sets and can use AI to integrate topics and strengthen and coordinate national agendas. Closer to the ground, state and provincial governments have an opportunity to tailor policies to specific communities within their regions. Near the point of delivery, local governments will have the best opportunity to understand constituent perspectives, shape policy instruments, oversee implementation, and evaluate outcomes. In fact, AI allows regional and local governments, which often lack robust policy capabilities, to leapfrog forward in their policymaking.

In general, technology isn’t the primary barrier to successful AI implementation. This is true in both the private and public sectors. Our work in digital, technological, and AI transformations demonstrates that algorithms account for just 10% of a project’s success, while the broader technology and engineering environment is responsible for 20%. The remaining 70% depends on people and processes. In the public sector, this includes building trust among citizens and civil servants that AI is safe, responsible, and effective.

In other words, public officials can’t simply flip a switch to activate AI in policymaking. Rather, it requires a thoughtful approach to piloting projects, establishing priorities, building skills and capabilities, managing vendors, and ensuring public trust. These goals can be achieved by creating three main workstreams: building the business case, designing operating capabilities, and creating the data infrastructure.

Building the Business Case. Implementing AI for the sake of AI is foolhardy. It needs to be deployed in service of specific policy challenges, in sync with the organization’s capabilities, and with the acceptance of both employees and citizens. Adhering to the principles of responsible AI will help build that acceptance, especially among those who are technology skeptics.

A big-bang approach is unlikely to succeed. Most successful AI projects start small but with a plan to scale. It is easier, for example, to create AI-enabled scenarios for a proposed policy initially than to rely on AI to create a policy solution by detecting patterns in huge and complex data sets. Both are important capabilities, but the order of operations matters. As AI systems, governance, and capabilities all improve, organizations can take on bigger challenges. Governments will eventually want to create broad AI strategies that are fully integrated with the larger mission.

Designing Operating Capabilities. Returning to the 10%-20%-70% formulation, the success of introducing AI into policymaking depends largely on people and processes. Government agencies will need to adapt their operating models to succeed. Specifically, they will need a new mix of skills and talent inside the organization and a new set of technology partners outside of it. The US government, for example, has launched new offices to create a strategy for recruiting and developing digital and analytical talent.

Successful AI projects need more than the right people and skills. They also need the right context, or operating model. A culture of data gathering, synthesizing, and sharing may not be common at many government agencies. Adherence to the principles of responsible AI needs to be woven into the new culture.

Applying AI to policymaking will likely occur within the context of broader machine data and analytics initiatives. Accordingly, governments could also consider creating a data analytics office if they don’t already have one. Such an office could promote innovation and collaboration and ensure that the principles of responsible AI are embedded into projects. The UK has excelled on this front. The London Office of Technology and Innovation, for example, is a virtual hub that develops and supports data collaborations across public services in the city. Analysts are dispatched to projects on an as-needed basis, ensuring data science expertise is deployed efficiently in the field rather than cloistered in an ivory tower.

Deploying AI to assist in policymaking will likely require working with partners and vendors, especially when in-house capabilities are lacking. By seeking partners in the private sector and universities, government agencies can gain access to new data sources and insights, new ways of working, talent, and implementation expertise. An interesting example of partnership is the Five Eyes intelligence collaboration. This alliance among Australia, Canada, New Zealand, the UK, and the US was created after World War II to share intelligence but has broadened its mission to share knowledge about data analytics and AI.

Creating the Data Infrastructure. AI depends on a solid digital platform that has access to real-time data from many sources: open data, public-sector data, citizen data, third-party data, and so on. One of the first steps for government officials is to free data from silos and explore external data sets, such as social media channels, that could have unique value for policymaking. For example, the UN Global Pulse, a big data and AI initiative, is using information from mobile phone airtime purchases and anonymized call records to track poverty and influence health and food policy. Governments can also use such a platform to build trust and transparency with the public by creating open data policies. Dubai, for example, sees open data as a critical element of becoming a smart city.

AI and the Government of the Future

One of the longer-term benefits of introducing AI into policymaking is the potential to break down the topic silos, such as education, health, and labor, that define and constrain government policies and processes. These topics also act as walls that separate related data sets that could generate better, broader insights if brought together. Even within the same topic, overlapping programs can create bureaucratic mazes. In many countries, layers of social benefit programs generate waste and unintended consequences. For a government struggling with assisting families in need, AI can help put resources where they will deliver the best results, whether that’s employment, education, or housing support.

By increasing the scale and type of information available to decision makers, AI can help governments tackle problems comprehensively rather than narrowly. The smart city model is emblematic of this joined-up approach to policy.


Governments have a choice. They can responsibly embrace AI and other digital technologies to enhance human decision making, or they can fall behind. Just as government leaders have received early shots to boost confidence in vaccines, leaders can also pave the way on AI. They can make strong business cases and build capabilities and data infrastructure. They can embed AI responsibly in policymaking to improve the lives of their citizens and society as a whole.

The authors would like to thank Nadim Abillama, Akram Awad, Elias Baltassis, Adrian Brown, Miguel Carrasco, Vincent Chin, Daniel Jackiewicz, Sarah Mousa, Lucie Robieux, Thea Snow, and Vera Wijaya for their invaluable contributions to this article.
