<?xml version="1.0" encoding="UTF-8"?>
<StrategicPlan xmlns="urn:ISO:std:iso:17469:tech:xsd:stratml_core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:ISO:std:iso:17469:tech:xsd:stratml_core http://xml.govwebs.net/stratml/references/StrategicPlanISOVersion20140401.xsd"><Name>PREPARING FOR THE FUTURE OF ARTIFICIAL INTELLIGENCE</Name><Description>As a contribution toward preparing the United States for a future in which Artificial Intelligence (AI)
plays a growing role, we survey the current state of AI, its existing and potential applications, and the
questions that are raised for society and public policy by progress in AI. We also make recommendations
for specific further actions by Federal agencies and other actors. A companion document called the
National Artificial Intelligence Research and Development Strategic Plan lays out a strategic plan for
Federally-funded research and development in AI.</Description><OtherInformation>Developing and studying machine intelligence can help us better understand and appreciate our human
intelligence. Used thoughtfully, AI can augment our intelligence, helping us chart a better and wiser path
forward.

[Editor's note: This StratML rendition documents the recommendations as goals.]</OtherInformation><StrategicPlanCore><Organization><Name>Executive Office of the President</Name><Acronym>EOP</Acronym><Identifier>_f30b891c-2050-11e9-85d5-0d16d9e8efbc</Identifier><Description/><Stakeholder StakeholderTypeType="Organization"><Name>National Science and Technology Council</Name><Description>Committee on Technology -- 

The National Science and Technology Council (NSTC) is the principal means by which the Executive
Branch coordinates science and technology policy across the diverse entities that make up the Federal
research and development (R&amp;D) enterprise. One of the NSTC’s primary objectives is establishing clear
national goals for Federal science and technology investments. The NSTC prepares R&amp;D packages aimed
at accomplishing multiple national goals. The NSTC’s work is organized under five committees:
Environment, Natural Resources, and Sustainability; Homeland and National Security; Science,
Technology, Engineering, and Mathematics (STEM) Education; Science; and Technology. Each of these
committees oversees subcommittees and working groups that are focused on different aspects of science
and technology. More information is available at www.whitehouse.gov/ostp/nstc.</Description></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Office of Science and Technology Policy</Name><Description>The Office of Science and Technology Policy (OSTP) was established by the National Science and
Technology Policy, Organization, and Priorities Act of 1976. OSTP’s responsibilities include advising the
President in policy formulation and budget development on questions in which science and technology are
important elements; articulating the President’s science and technology policy and programs; and fostering
strong partnerships among Federal, state, and local governments, and the scientific communities in industry
and academia. The Director of OSTP also serves as Assistant to the President for Science and Technology
and manages the NSTC. More information is available at www.whitehouse.gov/ostp.</Description></Stakeholder><Stakeholder StakeholderTypeType="Person"><Name>John P. Holdren</Name><Description>Assistant to the President for Science and Technology &amp;  
Director, Office of Science and Technology Policy</Description></Stakeholder><Stakeholder StakeholderTypeType="Person"><Name>Megan Smith</Name><Description>U.S. Chief Technology Officer</Description></Stakeholder><Stakeholder StakeholderTypeType="Person"><Name>Afua Bruce</Name><Description>Executive Director,
Office of Science and Technology Policy</Description></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>Subcommittee on Machine Learning and Artificial Intelligence</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Person"><Name>Ed Felten</Name><Description>Co-Chair -- Deputy U.S. Chief Technology Officer,
Office of Science and Technology Policy</Description></Stakeholder><Stakeholder StakeholderTypeType="Person"><Name>Michael Garris</Name><Description>Co-Chair -- 
Senior Scientist,
National Institute of Standards and Technology,
U.S. Department of Commerce</Description></Stakeholder><Stakeholder StakeholderTypeType="Person"><Name>Terah Lyons</Name><Description>Executive Secretary -- 
Policy Advisor to the U.S. Chief Technology Officer,
Office of Science and Technology Policy </Description></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>Federal Departments &amp; Agencies</Name><Description>The following Federal departments and agencies are represented on the Subcommittee on Machine Learning and Artificial Intelligence and through it, work together to monitor the state of the art in machine learning (ML) and AI (within the Federal Government, in the private sector, and internationally), to watch for the arrival of important technology milestones in the development of AI, to coordinate the use of and foster the sharing of knowledge and best practices about ML and AI by the Federal Government, and to consult in the development of Federal research and development priorities in AI:</Description></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Department of Commerce</Name><Description>Co-Chair</Description></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Department of Defense</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Department of Education</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Department of Energy</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Department of Health and Human Services</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Department of Homeland Security</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Department of Justice</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Department of Labor</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Department of State</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Department of Transportation</Name><Description/></Stakeholder><Stakeholder 
StakeholderTypeType="Organization"><Name>Department of Treasury</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Department of Veterans Affairs</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>United States Agency for International Development</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Central Intelligence Agency</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>General Services Administration</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>National Science Foundation</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>National Security Agency</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>National Aeronautics and Space Administration</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Office of the Director of National Intelligence</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Social Security Administration</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>EOP Offices</Name><Description>The following offices of the Executive Office of the President are also represented on the Subcommittee:</Description></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Council of Economic Advisers</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Domestic Policy Council</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Office of Management and Budget</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Office of Science and Technology Policy</Name><Description>Co-Chair</Description></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Office of the Vice 
President</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>National Economic Council</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>National Security Council </Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>AI Stakeholders</Name><Description>Preparing for the Future -- 
AI holds the potential to be a major driver of economic growth and social progress, if industry, civil
society, government, and the public work together to support development of the technology with
thoughtful attention to its potential and to managing its risks.
</Description></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>Industry</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>Civil Society</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>Government</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>The Public</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>U.S. Government</Name><Description>The U.S. Government has several roles to play. It can convene conversations about important issues and
help to set the agenda for public debate. It can monitor the safety and fairness of applications as they
develop, and adapt regulatory frameworks to encourage innovation while protecting the public. It can
provide public policy tools to ensure that disruption in the means and methods of work enabled by AI
increases productivity while avoiding negative economic consequences for certain sectors of the
workforce. It can support basic research and the application of AI to the public good. It can support
development of a skilled, diverse workforce. And government can use AI itself to serve the public faster, more effectively, and at lower cost. Many areas of public policy, from education and the economic safety
net, to defense, environmental preservation, and criminal justice, will see new opportunities and new
challenges driven by the continued progress of AI. The U.S. Government must continue to build its
capacity to understand and adapt to these changes.</Description></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>AI Practitioners</Name><Description>As the technology of AI continues to develop, practitioners must ensure that AI-enabled systems are
governable; that they are open, transparent, and understandable; that they can work effectively with
people; and that their operation will remain consistent with human values and aspirations.</Description></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>AI Researchers</Name><Description>Researchers
and practitioners have increased their attention to these challenges, and should continue to focus on them. </Description></Stakeholder></Organization><Vision><Description>AI augments our intelligence to chart a better and wiser path forward</Description><Identifier>_f30b8af2-2050-11e9-85d5-0d16d9e8efbc</Identifier></Vision><Mission><Description>To survey the state of AI and make recommendations for action</Description><Identifier>_f30b8eda-2050-11e9-85d5-0d16d9e8efbc</Identifier></Mission><Value><Name>Public Good</Name><Description>Applications of AI for Public Good -- 
One area of great optimism about AI and machine learning is their potential to improve people’s lives by
helping to solve some of the world’s greatest challenges and inefficiencies. Many have compared the
promise of AI to the transformative impacts of advancements in mobile computing. Public- and private-sector investments in basic and applied R&amp;D on AI have already begun reaping major benefits to the
public in fields as diverse as health care, transportation, the environment, criminal justice, and economic
inclusion. The effectiveness of government itself is being increased as agencies build their capacity to use
AI to carry out their missions more quickly, responsively, and efficiently.</Description></Value><Value><Name>Regulation</Name><Description>AI and Regulation -- 
AI has applications in many products, such as cars and aircraft, which are subject to regulation designed
to protect the public from harm and ensure fairness in economic competition. How will the incorporation
of AI into these products affect the relevant regulatory approaches? In general, the approach to regulation
of AI-enabled products to protect public safety should be informed by assessment of the aspects of risk
that the addition of AI may reduce alongside the aspects of risk that it may increase. If a risk falls within
the bounds of an existing regulatory regime, moreover, the policy discussion should start by considering
whether the existing regulations already adequately address the risk, or whether they need to be adapted
to the addition of AI. Also, where regulatory responses to the addition of AI threaten to increase the cost
of compliance, or slow the development or adoption of beneficial innovations, policymakers should
consider how those responses could be adjusted to lower costs and barriers to innovation without
adversely impacting safety or market fairness.
Currently relevant examples of the regulatory challenges that AI-enabled products present are found in
the cases of automated vehicles (AVs, such as self-driving cars) and AI-equipped unmanned aircraft
systems (UAS, or “drones”). In the long run, AVs will likely save many lives by reducing driver error and
increasing personal mobility, and UAS will offer many economic benefits. Yet public safety must be
protected as these technologies are tested and begin to mature. The Department of Transportation (DOT)
is using an approach to evolving the relevant regulations that is based on building expertise in the
Department, creating safe spaces and test beds for experimentation, and working with industry and civil
society to evolve performance-based regulations that will enable more uses as evidence of safe operation
accumulates.</Description></Value><Value><Name>Workforce</Name><Description>Research and Workforce -- 
Government also has an important role to play in the advancement of AI through research and
development and the growth of a skilled, diverse workforce. A separate strategic plan for Federally-funded AI research and development is being released in conjunction with this report. The plan discusses
the role of Federal R&amp;D, identifies areas of opportunity, and recommends ways to coordinate R&amp;D to
maximize benefit and build a highly-trained workforce.
Given the strategic importance of AI, moreover, it is appropriate for the Federal Government to monitor
developments in the field worldwide in order to get early warning of important changes arising elsewhere
in case these require changes in U.S. policy.
The rapid growth of AI has dramatically increased the need for people with relevant skills to support and
advance the field. An AI-enabled world demands a data-literate citizenry that is able to read, use,
interpret, and communicate about data, and participate in policy debates about matters affected by AI. AI
knowledge and education are increasingly emphasized in Federal Science, Technology, Engineering, and
Mathematics (STEM) education programs. AI education is also a component of Computer Science for
All, the President’s initiative to empower all American students from kindergarten through high school to
learn computer science and be equipped with the computational thinking skills they need in a technology-driven world.</Description></Value><Value><Name>Economy</Name><Description>Economic Impacts of AI -- 
AI’s central economic effect in the short term will be the automation of tasks that could not be automated
before. This will likely increase productivity and create wealth, but it may also affect particular types of
jobs in different ways, reducing demand for certain skills that can be automated while increasing demand
for other skills that are complementary to AI. Analysis by the White House Council of Economic
Advisers (CEA) suggests that the negative effect of automation will be greatest on lower-wage jobs, and
that there is a risk that AI-driven automation will increase the wage gap between less-educated and more-educated workers, potentially increasing economic inequality. Public policy can address these risks,
ensuring that workers are retrained and able to succeed in occupations that are complementary to, rather
than competing with, automation. Public policy can also ensure that the economic benefits created by AI
are shared broadly, and assure that AI responsibly ushers in a new age in the global economy.</Description></Value><Value><Name>Fairness</Name><Description>Fairness, Safety, and Governance -- 
As AI technologies move toward broader deployment, technical experts, policy analysts, and ethicists
have raised concerns about unintended consequences of widespread adoption. Use of AI to make
consequential decisions about people, often replacing decisions made by human-driven bureaucratic
processes, leads to concerns about how to ensure justice, fairness, and accountability—the same concerns
voiced previously in the Administration’s Big Data: Seizing Opportunities, Preserving Values report of
2014, as well as the Report to the President on Big Data and Privacy: A Technological Perspective
published by the President’s Council of Advisors on Science and Technology in 2014.</Description></Value><Value><Name>Governance</Name><Description/></Value><Value><Name>Transparency</Name><Description>Transparency
concerns focus not only on the data and algorithms involved, but also on the potential to have some form
of explanation for any AI-based determination. Yet AI experts have cautioned that there are inherent
challenges in trying to understand and predict the behavior of advanced AI systems.</Description></Value><Value><Name>Safety</Name><Description>Use of AI to control physical-world equipment leads to concerns about safety, especially as systems are
exposed to the full complexity of the human environment. A major challenge in AI safety is building
systems that can safely transition from the “closed world” of the laboratory into the outside “open world”
where unpredictable things can happen. Adapting gracefully to unforeseen situations is difficult yet
necessary for safe operation. Experience in building other types of safety-critical systems and
infrastructure, such as aircraft, power plants, bridges, and vehicles, has much to teach AI practitioners about verification and validation, how to build a safety case for a technology, how to manage risk, and
how to communicate with stakeholders about risk.
At a technical level, the challenges of fairness and safety are related. In both cases, practitioners strive to
avoid unintended behavior, and to generate the evidence needed to give stakeholders justified confidence
that unintended failures are unlikely.</Description></Value><Value><Name>Ethics</Name><Description>Ethical training for AI practitioners and students is a necessary part of the solution. Ideally, every student
learning AI, computer science, or data science would be exposed to curriculum and discussion on related
ethics and security topics.</Description></Value><Value><Name>Tools</Name><Description>However, ethics alone is not sufficient. Ethics can help practitioners understand
their responsibilities to all stakeholders, but ethical training should be augmented with technical tools and
methods for putting good intentions into practice by doing the technical work needed to prevent
unacceptable outcomes.</Description></Value><Value><Name>Methods</Name><Description/></Value><Value><Name>Outcomes</Name><Description/></Value><Value><Name>Security</Name><Description>Global Considerations and Security -- 
AI poses policy questions across a range of areas in international relations and security. AI has been a
topic of interest in recent international discussions as countries, multilateral institutions, and other
stakeholders have begun to assess the benefits and challenges of AI. Dialogue and cooperation between
these entities could help advance AI R&amp;D and harness AI for good, while also addressing shared
challenges.
Today’s AI has important applications in cybersecurity, and is expected to play an increasing role for both
defensive and offensive cyber measures. Currently, designing and operating secure systems requires
significant time and attention from experts. Automating this expert work partially or entirely may increase
security across a much broader range of systems and applications at dramatically lower cost, and could
increase the agility of the Nation’s cyber-defenses. Using AI may help maintain the rapid response
required to detect and react to the landscape of evolving threats.
Challenging issues are raised by the potential use of AI in weapon systems. The United States has
incorporated autonomy in certain weapon systems for decades, allowing for greater precision in the use of
weapons and safer, more humane military operations. Nonetheless, moving away from direct human
control of weapon systems involves some risks and can raise legal and ethical questions.</Description></Value><Value><Name>Humanity</Name><Description>The key to incorporating autonomous and semi-autonomous weapon systems into American defense
planning is to ensure that U.S. Government entities are always acting in accordance with international
humanitarian law, taking appropriate steps to control proliferation, and working with partners and Allies
to develop standards related to the development and use of such weapon systems. The United States has
actively participated in ongoing international discussion on Lethal Autonomous Weapon Systems, and
anticipates continued robust international discussion of these potential weapon systems.</Description></Value><Value><Name>Law</Name><Description>Agencies across
the U.S. Government are working to develop a single, government-wide policy, consistent with
international humanitarian law, on autonomous and semi-autonomous weapons.</Description></Value><Goal><Name>Society</Name><Description>Leverage AI and machine learning in ways that will benefit society</Description><Identifier>_f30b91fa-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>1</SequenceIndicator><Stakeholder StakeholderTypeType="Generic_Group"><Name>Private Institutions</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>Public Institutions</Name><Description/></Stakeholder><OtherInformation>Private and public institutions are encouraged to examine whether and how they
can responsibly leverage AI and machine learning in ways that will benefit society. Social justice and
public policy institutions that do not typically engage with advanced technologies and data science in
their work should consider partnerships with AI researchers and practitioners that can help apply AI
tactics to the broad social problems these institutions already address in other ways.</OtherInformation><Objective><Name/><Description/><Identifier>_f30b9380-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Open Data &amp; Standards</Name><Description>Prioritize open training data and open data standards in AI</Description><Identifier>_f30b94e8-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>2</SequenceIndicator><Stakeholder StakeholderTypeType="Generic_Group"><Name>Federal Agencies</Name><Description/></Stakeholder><OtherInformation>Federal agencies should prioritize open training data and open data standards in AI. The government should emphasize the release of datasets that enable the use of AI to address social challenges. Potential steps may include developing an “Open Data for AI” initiative with the objective of releasing a significant number of government data sets to accelerate AI research and galvanize the use of open data standards and best practices across government, academia, and the private sector.</OtherInformation><Objective><Name/><Description/><Identifier>_f30b9678-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Missions</Name><Description>Improve the capacity of agencies to apply AI to their missions</Description><Identifier>_f30b97ea-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>3</SequenceIndicator><Stakeholder StakeholderTypeType="Generic_Group"><Name/><Description/></Stakeholder><OtherInformation>The Federal Government should explore ways to improve the capacity of key
agencies to apply AI to their missions. For example, Federal agencies should explore the potential to create DARPA-like organizations to support high-risk, high-reward AI research and its application, much as the Department of Education has done through its proposal to create an “ARPA-ED,” to support R&amp;D to determine whether AI and other technologies could significantly improve student learning outcomes.</OtherInformation><Objective><Name/><Description/><Identifier>_f30b995c-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>AI CoP</Name><Description>Develop a community of practice for AI practitioners</Description><Identifier>_f30b9b28-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>4</SequenceIndicator><Stakeholder StakeholderTypeType="Generic_Group"><Name>AI practitioners</Name><Description/></Stakeholder><OtherInformation>The NSTC MLAI subcommittee should develop a community of practice for AI
practitioners across government. Agencies should work together to develop and share standards and best practices around the use of AI in government operations. Agencies should ensure that Federal employee training programs include relevant AI opportunities.</OtherInformation><Objective><Name/><Description/><Identifier>_f30b9ca4-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Regulatory Policy</Name><Description>Draw on appropriate technical expertise at the senior level when setting regulatory policy for AI-enabled products</Description><Identifier>_f30b9e20-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>5</SequenceIndicator><Stakeholder StakeholderTypeType="Generic_Group"><Name/><Description/></Stakeholder><OtherInformation>Agencies should draw on appropriate technical expertise at the senior level when setting regulatory policy for AI-enabled products. Effective regulation of AI-enabled products requires collaboration between agency leadership, staff knowledgeable about the existing regulatory framework and regulatory practices generally, and technical experts with knowledge of AI. Agency leadership should
take steps to recruit the necessary technical talent, or identify it in existing agency staff, and should ensure that there are sufficient technical “seats at the table” in regulatory policy discussions.</OtherInformation><Objective><Name/><Description/><Identifier>_f30b9fce-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Technology Perspectives</Name><Description>Foster a Federal workforce with more diverse perspectives on the current state of technology</Description><Identifier>_f30ba33e-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>6</SequenceIndicator><Stakeholder StakeholderTypeType="Generic_Group"><Name/><Description/></Stakeholder><OtherInformation>Agencies should use the full range of personnel assignment and exchange models (e.g. hiring authorities) to foster a Federal workforce with more diverse perspectives on the current state of technology.</OtherInformation><Objective><Name/><Description/><Identifier>_f30ba4c4-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Data Sharing</Name><Description>Increase sharing of data for safety, research, and other purposes</Description><Identifier>_f30ba672-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>7</SequenceIndicator><Stakeholder StakeholderTypeType="Organization"><Name>Department of Transportation</Name><Description>The Department of Transportation should work with industry and researchers on ways to increase sharing of data for safety, research, and other purposes. The future roles of AI in surface and air transportation are undeniable. 
</Description></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>Federal Actors</Name><Description>Accordingly, Federal actors should focus in the near-term on developing increasingly rich sets of data, consistent with consumer privacy, that can better inform policy-making as these technologies mature.</Description></Stakeholder><OtherInformation/><Objective><Name/><Description/><Identifier>_f30ba802-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Air Traffic Management</Name><Description>Develop and implement an advanced and automated air traffic management system</Description><Identifier>_f30ba988-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>8</SequenceIndicator><Stakeholder StakeholderTypeType="Organization"><Name>U.S. Government</Name><Description>The U.S. Government should invest in developing and implementing an advanced and automated air traffic management system that is highly scalable, and can fully accommodate autonomous and piloted aircraft alike.</Description></Stakeholder><OtherInformation/><Objective><Name/><Description/><Identifier>_f30bab40-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Vehicles</Name><Description>Develop a framework for regulation to enable the safe integration of fully automated vehicles</Description><Identifier>_f30bacc6-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>9</SequenceIndicator><Stakeholder StakeholderTypeType="Organization"><Name>Department of Transportation</Name><Description>The Department of Transportation should continue to develop an evolving
framework for regulation to enable the safe integration of fully automated vehicles and UAS, including novel vehicle designs, into the transportation system.</Description></Stakeholder><OtherInformation/><Objective><Name/><Description/><Identifier>_f30bae56-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Monitoring &amp; Reporting</Name><Description>Monitor developments in AI and report regularly to senior Administration leadership about AI milestones</Description><Identifier>_f30bb00e-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>10</SequenceIndicator><Stakeholder StakeholderTypeType="Organization"><Name>NSTC Subcommittee on Machine Learning and Artificial Intelligence</Name><Description>The NSTC Subcommittee on Machine Learning and Artificial Intelligence
should monitor developments in AI, and report regularly to senior Administration leadership about the status of AI, especially with regard to milestones. </Description></Stakeholder><OtherInformation/><Objective><Name>Milestones</Name><Description>Update the list of milestones as knowledge advances</Description><Identifier>_f30bb1a8-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>10.1</SequenceIndicator><Stakeholder><Name/><Description/></Stakeholder><OtherInformation>The Subcommittee should update the list of milestones as knowledge advances and the consensus of experts changes over time.</OtherInformation></Objective><Objective><Name>Reports</Name><Description>Report to the public on AI developments when appropriate</Description><Identifier>_f30bb3a6-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>10.2</SequenceIndicator><Stakeholder StakeholderTypeType="Generic_Group"><Name>The Public</Name><Description/></Stakeholder><OtherInformation>The Subcommittee should consider reporting to the public on AI developments, when appropriate.</OtherInformation></Objective></Goal><Goal><Name>Other Countries</Name><Description>Monitor the state of AI in other countries</Description><Identifier>_f30bb572-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>11</SequenceIndicator><Stakeholder StakeholderTypeType="Organization"><Name>U.S. Government</Name><Description>The Government should monitor the state of AI in other countries, especially
with respect to milestones.</Description></Stakeholder><OtherInformation/><Objective><Name/><Description/><Identifier>_f30bb70c-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Updates</Name><Description>Keep government updated on the general progress of AI in industry</Description><Identifier>_f30bb8a6-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>12</SequenceIndicator><Stakeholder StakeholderTypeType="Generic_Group"><Name>Industry</Name><Description>Industry should work with government to keep government updated on the general progress of AI in industry, including the likelihood of milestones being reached soon.</Description></Stakeholder><OtherInformation/><Objective><Name/><Description/><Identifier>_f30bba7c-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Research</Name><Description>Prioritize basic and long-term AI research</Description><Identifier>_f30bbc2a-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>13</SequenceIndicator><Stakeholder StakeholderTypeType="Organization"><Name>Federal Government</Name><Description>The Federal government should prioritize basic and long-term AI research. The Nation as a whole would benefit from a steady increase in Federal and private-sector AI R&amp;D, with a particular emphasis on basic research and long-term, high-risk research initiatives. 
Because basic and long-term research especially are areas where the private sector is not likely to invest, Federal investments will be important for R&amp;D in these areas.</Description></Stakeholder><OtherInformation/><Objective><Name/><Description/><Identifier>_f30bbdc4-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>AI Workforce</Name><Description>Ensure an appropriate increase in the size, quality, and diversity of the AI workforce</Description><Identifier>_f30bbfae-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>14</SequenceIndicator><Stakeholder StakeholderTypeType="Organization"><Name>NSTC Subcommittee on MLAI</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>NSTC Subcommittee on NITRD</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>NSTC Committee on Science, Technology, Engineering, and Education (CoSTEM)</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>AI Researchers</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>AI Specialists</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>AI Users</Name><Description/></Stakeholder><OtherInformation>The NSTC Subcommittees on MLAI and NITRD, in conjunction with the NSTC Committee on Science, Technology, Engineering, and Education (CoSTEM), should initiate a study on the AI workforce pipeline in order to develop actions that ensure an appropriate increase in the size, quality, and diversity of the workforce, including AI researchers, specialists, and users.</OtherInformation><Objective><Name/><Description/><Identifier>_f30bc15c-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>U.S. Job Market</Name><Description>Further investigate the effects of AI and automation on the U.S. job market and outline recommended policy responses</Description><Identifier>_f30bc30a-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>15</SequenceIndicator><Stakeholder StakeholderTypeType="Generic_Group"><Name>U.S. Workers</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Executive Office of the President</Name><Description>The Executive Office of the President should publish a follow-on report by the end of this year, to further investigate the effects of AI and automation on the U.S. job market, and outline recommended policy responses.</Description></Stakeholder><OtherInformation/><Objective><Name/><Description/><Identifier>_f30bc4f4-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Decision Support</Name><Description>Ensure the efficacy and fairness of AI systems used to make or provide decision support</Description><Identifier>_f30bc6a2-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>16</SequenceIndicator><Stakeholder StakeholderTypeType="Generic_Group"><Name>Federal Agencies</Name><Description>Federal agencies that use AI-based systems to make or provide decision support for consequential decisions about individuals should take extra care to ensure the efficacy and fairness of those systems, based on evidence-based verification and validation.</Description></Stakeholder><OtherInformation/><Objective><Name/><Description/><Identifier>_f30bc9fe-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Grants</Name><Description>Ensure that AI-based products or services purchased with Federal grant funds produce results in a sufficiently transparent fashion and are supported by evidence of efficacy and 
fairness</Description><Identifier>_f30bcc38-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>17</SequenceIndicator><Stakeholder StakeholderTypeType="Generic_Group"><Name>Federal Agencies</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>State Governments</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>Local Governments</Name><Description/></Stakeholder><OtherInformation>Federal agencies that make grants to state and local governments in support of the use of AI-based systems to make consequential decisions about individuals should review the terms of grants to ensure that AI-based products or services purchased with Federal grant funds produce results in a sufficiently transparent fashion and are supported by evidence of efficacy and fairness.</OtherInformation><Objective><Name/><Description/><Identifier>_f30bcdfa-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Curricula</Name><Description>Include ethics, and related topics in security, privacy, and safety, as an integral part of curricula on AI, machine learning, computer science, and data science</Description><Identifier>_f30bcfd0-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>18</SequenceIndicator><Stakeholder StakeholderTypeType="Generic_Group"><Name>Schools</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>Universities</Name><Description/></Stakeholder><OtherInformation>Schools and universities should include ethics, and related topics in security,
privacy, and safety, as an integral part of curricula on AI, machine learning, computer science, and data science.</OtherInformation><Objective><Name/><Description/><Identifier>_f30bd192-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Maturation</Name><Description>Work together to continue progress toward a mature field of AI safety engineering</Description><Identifier>_f30bd318-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>19</SequenceIndicator><Stakeholder StakeholderTypeType="Generic_Group"><Name>AI Professionals</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>Safety Professionals</Name><Description/></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>Professional Societies</Name><Description/></Stakeholder><OtherInformation>AI professionals, safety professionals, and their professional societies should work together to continue progress toward a mature field of AI safety engineering.</OtherInformation><Objective><Name/><Description/><Identifier>_f30bd4a8-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>International Engagement Strategy</Name><Description>Develop a government-wide strategy on international engagement related to AI</Description><Identifier>_f30bd66a-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>20</SequenceIndicator><Stakeholder StakeholderTypeType="Generic_Group"><Name/><Description/></Stakeholder><OtherInformation>The U.S. 
Government should develop a government-wide strategy on international engagement related to AI, and develop a list of AI topical areas that need international engagement and monitoring.</OtherInformation><Objective><Name>Topics</Name><Description>Develop a list of AI topical areas that need international engagement and monitoring</Description><Identifier>_f30bd7fa-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>20.1</SequenceIndicator><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>International Stakeholders</Name><Description>Deepen engagement with key international stakeholders</Description><Identifier>_f30bd99e-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>21</SequenceIndicator><Stakeholder StakeholderTypeType="Organization"><Name>U.S. Government</Name><Description>The U.S. Government should deepen its engagement with key international stakeholders, including foreign governments, international organizations, industry, academia, and others, to exchange information and facilitate collaboration on AI R&amp;D.</Description></Stakeholder><OtherInformation/><Objective><Name/><Description/><Identifier>_f30bdbd8-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Cybersecurity</Name><Description>Account for the influence of AI on cybersecurity and of cybersecurity on AI</Description><Identifier>_f30bdd72-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>22</SequenceIndicator><Stakeholder StakeholderTypeType="Generic_Group"><Name/><Description/></Stakeholder><OtherInformation>Agencies’ plans and strategies should account for the influence of AI on cybersecurity, and of cybersecurity on AI. Agencies involved in AI issues should engage their U.S. 
Government and private-sector cybersecurity colleagues for input on how to ensure that AI systems and ecosystems are secure and resilient to intelligent adversaries. Agencies involved in cybersecurity issues should engage their U.S. Government and private sector AI colleagues for innovative ways to apply AI for effective and efficient cybersecurity.</OtherInformation><Objective><Name/><Description/><Identifier>_f30bdf0c-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Weapons</Name><Description>Develop a governmentwide policy on autonomous and semi-autonomous
weapons</Description><Identifier>_f30be0e2-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator>23</SequenceIndicator><Stakeholder><Name/><Description/></Stakeholder><OtherInformation>The U.S. Government should complete the development of a single, governmentwide policy, consistent with international humanitarian law, on autonomous and semi-autonomous
weapons.</OtherInformation><Objective><Name/><Description/><Identifier>_f30be286-2050-11e9-85d5-0d16d9e8efbc</Identifier><SequenceIndicator/><Stakeholder><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal></StrategicPlanCore><AdministrativeInformation><StartDate>2016-10-31</StartDate><PublicationDate>2019-01-24</PublicationDate><Source>https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf</Source><Submitter><GivenName>Owen</GivenName><Surname>Ambur</Surname><PhoneNumber/><EmailAddress>Owen.Ambur@verizon.net</EmailAddress></Submitter></AdministrativeInformation></StrategicPlan>