By Jack Karsten
Recent advances in computing power, algorithms, and sensor technology have combined to rapidly expand the capabilities of artificial intelligence (AI). Applications exist not only in private sector industries like healthcare, finance, and retail, but in public sector settings like criminal justice and national security as well. Each of these areas presents questions about the responsible development and deployment of AI. To discuss these issues, Governance Studies hosted the ninth annual A. Alfred Taubman Forum on Public Policy at the Brookings Institution on June 12. The event featured three panels of AI experts drawn from Brookings fellows, academia, government, and industry. The panelists discussed the challenges and opportunities of AI in national security, economic, and public policy contexts.
The national security panel discussion revolved around balancing capability with vulnerability, and experimentation with limitation. AI can expand human capability by identifying patterns in large datasets far more quickly and accurately than people can, but it also introduces new vulnerabilities that malicious actors can target. AI can spot weaknesses in cybersecurity systems and either patch them or exploit them. Without certainty about how AI will eventually affect national security, experimentation can guide policymaking in the near term. As new uses of AI become widely adopted, the public will play an important role in setting ethical norms for its use. To make AI accountable, the panelists emphasized the need to give the public satisfactory explanations when AI systems fail with negative consequences. However, there is a concern that potential adversaries would observe fewer constraints in the development and use of AI.
When thinking about AI use, we must ask whether the technology best suits each purpose. In particular, AI is liable to reflect and magnify human biases; when predicting jail time, for example, it might draw on historical data containing harsher sentencing for minorities, and thus return biased results. For this reason, the technology behind AI must be transparent and well-understood before it is deployed. Society must be able to trust the technology in order to adopt it; people should know what data is used and how it is used to make decisions. In addition to this gap in understanding how the technology works, there is no clear understanding of what AI can and cannot do. It is important to recognize that today's AI has its limits and may be less advanced than some fear.
Despite worries that the technology will replace humans in many jobs, AI is likely to augment human capabilities rather than completely displace them. Certain populations are vulnerable to job loss, but current AI and machine learning can take over only small parts of most jobs. For example, AI can help oncologists detect tumors, but communicating with patients is a task best left to humans. AI can also ease the unequal distribution of doctors, assisting in places with few oncologists or helping physicians focus on the cases that show the most risk. Even though AI cannot automate all tasks, it will still affect the nature and types of jobs that exist in the future. The skills needed within jobs will certainly change, and education must adapt to prepare a future workforce. Schools should emphasize and value the "human" qualities that AI cannot replicate.
While government has traditionally been slow to react to technological change, it is vital to begin thinking now about how to govern AI, updating laws on education and privacy to reflect these changes. Although innovation should be encouraged, it is important to include protections for consumers and society as a whole. Because of the significant ramifications of AI, the discussion about policy and governance should start immediately. A shared dialogue should include civil society, engineers, academics, and policymakers, keeping the societal effects of AI in mind while developing the technology and discussing its policy implications. Sharing best practices and codes of conduct will also be helpful, not just between corporations but also globally. To this end, the U.S. should consider how its values apply to AI and work to create a national strategy based on those values.
Miku Fujita contributed to this blog post.