<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="stratmliso.xsl"?>
<StrategicPlan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:stratml="urn:ISO:std:iso:17469:tech:xsd:stratml_core"><Name>Incisive Analysis</Name><Description/><OtherInformation/><StrategicPlanCore><Organization><Name>Office of Incisive Analysis</Name><Acronym>IARPAIA</Acronym><Identifier>_7da5fb52-1493-11e5-80c3-278016f00357</Identifier><Description>The Office of Incisive Analysis (IA) focuses on maximizing insights from the massive, disparate, unreliable and dynamic data that are -- or could be -- available to analysts, in a timely manner. We are pursuing new sources of information from existing and novel data, and we are investigating innovative techniques that can be utilized in the processes of analysis. </Description><Stakeholder StakeholderTypeType="Organization"><Name>IARPA</Name><Description/></Stakeholder></Organization><Vision><Description/><Identifier>_7da5fd96-1493-11e5-80c3-278016f00357</Identifier></Vision><Mission><Description>To maximize insights from the massive, disparate, unreliable and dynamic data</Description><Identifier>_7da5feae-1493-11e5-80c3-278016f00357</Identifier></Mission><Value><Name>Diversity</Name><Description>Our programs are in diverse technical disciplines but have common features:</Description></Value><Value><Name>Commonalities</Name><Description/></Value><Value><Name>Partnership</Name><Description>Involve potential transition partners at all stages, beginning with the definition of success</Description></Value><Value><Name>Success</Name><Description/></Value><Value><Name>Trust</Name><Description>Create technologies that can earn the trust of the analyst user by providing the reasoning for results</Description></Value><Value><Name>Reasoning</Name><Description/></Value><Value><Name>Certainty</Name><Description>Address uncertainty and data provenance explicitly.</Description></Value><Value><Name>Provenance</Name><Description/></Value><Goal><Name>Aladdin</Name><Description>Combine the state-of-the-art in video 
extraction, audio extraction, knowledge representation, and search technologies to create a fast, accurate, robust, and extensible technology that supports the multimedia analytic needs</Description><Identifier>_7da5ff76-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>1</SequenceIndicator><Stakeholder StakeholderTypeType="Person"><Name>Jill D. Crisman</Name><Description>Program Manager</Description></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>IARPAIA Finder</Name><Description>Related Program</Description></Stakeholder><OtherInformation>Massive numbers of video clips are generated daily on many types of consumer electronics and uploaded to the internet. In contrast to videos that are produced for broadcast or from planned surveillance, the "unconstrained" video clips produced by anyone who has a digital camera present a significant challenge for manual as well as automated analysis.

The Aladdin Video Program seeks to combine the state-of-the-art in video extraction, audio extraction, knowledge representation, and search technologies in a revolutionary way to create a fast, accurate, robust, and extensible technology that supports the multimedia analytic needs of the future.</OtherInformation><Objective><Name/><Description/><Identifier>_7da60048-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator/><Stakeholder StakeholderTypeType=""><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Babel</Name><Description>Develop speech recognition technology that can be applied to any human language in order to provide effective search capability</Description><Identifier>_7da60110-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>2</SequenceIndicator><Stakeholder StakeholderTypeType="Person"><Name>Mary P. Harper</Name><Description>Program Manager</Description></Stakeholder><OtherInformation>The Babel Program is developing agile and robust speech recognition technology that can be rapidly applied to any human language in order to provide effective search capability for analysts to efficiently process massive amounts of real-world recorded speech. Today's transcription systems are built on technology that was originally developed for English, with markedly lower performance on non-English languages. These systems have often taken years to develop and cover only a small subset of the languages of the world. Babel intends to demonstrate the ability to generate a speech transcription system for any new language within one week to support keyword search performance for effective triage of massive amounts of speech recorded in challenging real-world situations.

The goal of the Babel Program is to develop methods to build speech recognition technology for a much larger set of languages than has hitherto been addressed. The Program requires innovations in how to rapidly model a novel language with significantly less training data that are also much noisier and more heterogeneous than what has been used in the current state-of-the-art. Babel's technical measures of success are focused on how well the generated model works to support effective word-based search of noisy channel speech in the languages to be investigated. The new methods are being systematized so that they can be applied rapidly to a novel underserved language.
Related Challenge: ASpIRE Challenge</OtherInformation><Objective><Name/><Description/><Identifier>_7da601d8-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator/><Stakeholder StakeholderTypeType=""><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Finder</Name><Description>Develop technology that augments the analyst's abilities to address geolocation.</Description><Identifier>_7da602dc-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>3</SequenceIndicator><Stakeholder StakeholderTypeType="Person"><Name>Jill D. Crisman</Name><Description>Program Manager</Description></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Aladdin Video Program</Name><Description>Related Program</Description></Stakeholder><OtherInformation>It is common today for even consumer-grade cameras to tag the images and videos that they capture with the location of the image on the earth’s surface (geolocation). However, some imagery does not have a geolocation tag, and it can be important to know the location of the camera, image, or objects in the scene. For this imagery, analysts work hard to deduce as much as they can using reference data from many sources, including overhead and ground-based images, digital elevation data, existing well-understood image collections, surface geology, geography, and cultural information. Such image/video geolocation is an extremely time-consuming and labor-intensive activity that often meets with limited success.

Several research and consumer-oriented systems have developed useful and relevant capabilities using techniques that include large-scale ground-level image acquisition, crowd-sourcing, and sophisticated image matching. These largely automated systems tend to work best in geographic areas with significant population densities or that are well traveled by tourists, and where the query image or video contains notable features such as mountains or buildings.

The Finder Program aims to build on existing research systems to develop technology that augments the analyst’s abilities to address the geolocation task. Required technical innovations include the 1) integration of analysts’ abilities and automated geolocation technologies to solve geolocation problems, 2) fusion of diverse, publicly-available, but often imperfect data sources, and 3) expansion of automated geolocation technologies to work efficiently and accurately over all terrain and large search areas. If successful, Finder will deliver rigorously tested solutions for the image/video geolocation task in any outdoor terrestrial location.</OtherInformation><Objective><Name/><Description/><Identifier>_7da6044e-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator/><Stakeholder StakeholderTypeType=""><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>ICArUS</Name><Description>Understand and model how humans engage in the sensemaking process.</Description><Identifier>_7da60534-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>4</SequenceIndicator><Stakeholder StakeholderTypeType="Person"><Name>Rita M. Bush</Name><Description>Program Manager</Description></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>IARPA KRNS Program</Name><Description>Related Program</Description></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>IARPA MICrONS Program</Name><Description>Related Program</Description></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>IARPA SHARP Program</Name><Description>Related Program</Description></Stakeholder><OtherInformation>Integrated Cognitive-Neuroscience Architectures for Understanding Sensemaking (ICArUS) -- 
Sensemaking refers to the remarkable human ability to detect patterns in data, and to infer the underlying causes of those patterns - even when the data are sparse, noisy, and uncertain. The focus of the ICArUS Program is to understand and model how humans engage in the sensemaking process, both during optimal and suboptimal (biased) performance. Of particular interest are cognitive biases related to attention, memory, and decision making.

A unique aspect of ICArUS is the focus on developing neuroscience-based cognitive models of sensemaking - that is, models whose functional architecture conforms closely to that of the human brain. A key assumption of the program is that adherence to the underlying biological principles of cognition will lead to the development of models that more accurately predict human sensemaking performance in both the cognitive and behavioral domains. Although the current context (task environment) of ICArUS focuses on geospatial sensemaking, the goal of modeling the fundamental mechanisms underlying sensemaking will nonetheless illuminate the process by which analysts make sense of a variety of intelligence data.</OtherInformation><Objective><Name>Cognitive Models</Name><Description>Deliver cognitive models that interface with a configurable, simulated geospatial task environment.</Description><Identifier>_7da6069c-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>4.1</SequenceIndicator><Stakeholder StakeholderTypeType=""><Name/><Description/></Stakeholder><OtherInformation>The primary deliverables of ICArUS will be cognitive models (instantiated as executable software) that interface with a configurable, simulated geospatial task environment. ICArUS cognitive models, in conjunction with the task environment, may be used by the analytic community (including methodologists, educators, and analysts) to examine how different analytic approaches and different task parameters affect analytic outcomes. These insights, in turn, may lead to the development of new structured analytic methods that improve analysis quality by minimizing the negative impact of human cognitive bias.
In addition, by illuminating the underlying neural mechanisms that give rise to human sensemaking, ICArUS research will lay the groundwork for the development of a new generation of automated analysis tools that replicate the unique strengths of human sensemaking.</OtherInformation></Objective></Goal><Goal><Name>Janus</Name><Description>Improve the performance of face recognition tools.</Description><Identifier>_7da60b2e-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>5</SequenceIndicator><Stakeholder StakeholderTypeType="Person"><Name>Mark Burge</Name><Description>Program Manager</Description></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>IARPA BEST Program</Name><Description>Related Program</Description></Stakeholder><Stakeholder StakeholderTypeType="Person"><Name>Intelligence Analysts</Name><Description/></Stakeholder><OtherInformation>Intelligence analysts often rely on facial images to assist in establishing the identity of an individual, but too often, just examining the sheer volume of possibly relevant images and videos can be daunting. While biometric tools like automated face recognition could assist analysts in this task, current tools perform best on the well-posed, frontal facial photos taken for identification purposes. IARPA's Janus program aims to dramatically improve the current performance of face recognition tools by fusing the rich spatial, temporal, and contextual information available from the multiple views captured by today’s "media in the wild". The program will move beyond largely two-dimensional image matching methods used currently into more model-based matching that fuses all views from whatever video and stills are available. Data volume now becomes an integral part of the solution instead of an oppressive burden.

The program is seeking to fund rigorous, high-quality research that uses innovative and promising approaches drawn from a variety of fields to develop novel representational models capable of encoding the shape, texture, and dynamics of a face. Instead of relying on a "single best frame approach," these representations must address the challenges of Aging, Pose, Illumination, and Expression (A-PIE) by exploiting all available imagery. Technologies must support analysts working with partial information by addressing the uncertainties that arise when working with possibly incomplete, erroneous, and ambiguous data. The goal of the program is to test and validate techniques that have the potential to significantly improve the performance of biometric recognition in unconstrained imagery; to that end, the program will involve empirical testing of recognition performance across unconstrained videos, camera stills, and scanned photos exhibiting a broad range of real-world imaging conditions.

It is anticipated that successful teams will transcend conventional approaches to biometric recognition by drawing on the multidisciplinary expertise of researchers from the fields of pattern recognition and machine learning; computer vision and image processing; computer graphics and animation; mathematical statistics and modeling; and data visualization and analytics.</OtherInformation><Objective><Name/><Description/><Identifier>_7da60c32-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator/><Stakeholder StakeholderTypeType=""><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Knowledge Discovery &amp; Dissemination</Name><Description>Enable analysts to quickly produce actionable intelligence from multiple, disparate data sources.</Description><Identifier>_7da60d36-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>6</SequenceIndicator><Stakeholder StakeholderTypeType="Person"><Name>Catherine Cotell</Name><Description>Program Manager</Description></Stakeholder><Stakeholder StakeholderTypeType="Generic_Group"><Name>Intelligence Analysts</Name><Description/></Stakeholder><OtherInformation>The objective of KDD is to enable analysts to quickly produce actionable intelligence from multiple, disparate data sources, including new unanticipated data sets that become available to analysts. To meet this objective, KDD has developed multiple solutions to the challenges of large-scale data alignment efforts and advanced analytics capabilities.

The KDD program has three thrusts:

* Research in data alignment to quickly align the terminology and organization of new data sources to the analytic data model.
* Research in analytics to develop flexible algorithms that work across heterogeneous data sets.
* Engineer a prototype that contains the research products so that they can be assessed in a realistic IC environment.
The KDD research development approach is designed to ensure that efforts are relevant to the Intelligence Community (IC) and perform well on real data. Each year KDD teams deliver their research as software algorithms integrated into prototype systems. The prototypes are tested by IC analysts using new real data sets and realistic analytic challenge problems not previously seen by the teams. This ensures that the technology is quickly adaptable to new data and new problems. To meet flexibility objectives, research components are loosely coupled in the prototype systems to provide a modular approach for evaluating individual technologies.

The KDD program began in October 2010 with research teams composed of academic and commercial organizations. A base year evaluation in 2011 was used to exercise the prototype systems, rehearse the evaluation process and give researchers insight into IC analysis. Formal evaluations were conducted in 2012 and 2013 with results showing significant progress. The evaluation is designed to mature development of analytic tools and make them suitable for transition to operational environments.

In 2014, the program is focused on transition of technologies to IC partners and evaluation against partner data. KDD will work with transition partners to identify KDD candidate research components to fill gaps in partner analytic capabilities.</OtherInformation><Objective><Name>Alignment</Name><Description>Quickly align the terminology and organization of new data sources to the analytic data model</Description><Identifier>_7da60fac-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>6.1</SequenceIndicator><Stakeholder StakeholderTypeType=""><Name/><Description/></Stakeholder><OtherInformation/></Objective><Objective><Name>Analytics</Name><Description>Develop flexible algorithms that work across heterogeneous data sets</Description><Identifier>_7da610b0-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>6.2</SequenceIndicator><Stakeholder StakeholderTypeType=""><Name/><Description/></Stakeholder><OtherInformation/></Objective><Objective><Name>Prototype</Name><Description>Engineer a prototype that contains the research products so that they can be assessed in a realistic IC environment</Description><Identifier>_7da611c8-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>6.3</SequenceIndicator><Stakeholder StakeholderTypeType=""><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Knowledge Representation</Name><Description>Develop and evaluate theories that explain how the human brain represents conceptual knowledge</Description><Identifier>_7da612c2-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>7</SequenceIndicator><Stakeholder StakeholderTypeType="Person"><Name>R. Jacob Vogelstein</Name><Description>Program Manager</Description></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>IARPA ICArUS Program</Name><Description>Related Program
</Description></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>IARPA MICrONS Program</Name><Description>Related Program</Description></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>IARPA SHARP Program</Name><Description>Related Program</Description></Stakeholder><OtherInformation>Knowledge Representation in Neural Systems (KRNS) -- 
When making sense of intelligence data, analysts rely on rich repertoires of conceptual knowledge to resolve ambiguities, make inferences, and draw conclusions. Conceptual knowledge refers to knowledge about the general properties of an entity (e.g., an apple is edible) as well as its relationships to other entities (e.g., an apple is associated with orchards, grocery stores, etc.). Understanding how the human brain represents conceptual knowledge is a step toward building new analysis tools that acquire, organize and wield knowledge with unprecedented proficiency. Moreover, such understanding may lead to the development of novel techniques for training intelligence analysts and linguists.

The goal of the KRNS Program is to develop and rigorously evaluate theories that explain how the human brain represents conceptual knowledge. In part the evaluation will rest on how well concepts can be interpreted from neural activity patterns using algorithms derived from the theories. In addition to new theories and algorithms, KRNS seeks the development of innovative protocols for evoking and measuring concept-related neural activity using neural imaging methods such as (but not limited to) functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG).

Whereas previous research has examined the neural representations of single concepts in isolation, the KRNS Program seeks to greatly expand our understanding of how the brain represents combinations of concepts (e.g., how the neural representation of "the student was bored with the book" differs from the neural representations of the individual concepts "student," "bored," and "book"). A wide range of concept types is of interest in KRNS, from the concrete to the abstract, including: animate and inanimate objects; actions; physical and temporal settings; events; social roles; social interactions; emotions; properties; conditions.</OtherInformation><Objective><Name>Conceptual Combinations</Name><Description>Expand understanding of how the brain represents combinations of concepts, including: animate and inanimate objects; actions; physical and temporal settings; events; social roles; social interactions; emotions; properties; conditions</Description><Identifier>_7da613b2-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>7.1</SequenceIndicator><Stakeholder StakeholderTypeType=""><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Metaphor</Name><Description>Exploit the use of metaphors by different cultures to gain insight into their cultural norms</Description><Identifier>_7da614de-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>8</SequenceIndicator><Stakeholder StakeholderTypeType="Person"><Name>Catherine Cotell</Name><Description>Program Manager</Description></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>IARPA SCIL Program</Name><Description>Related Program</Description></Stakeholder><OtherInformation>For decision makers to be effective in a world of mass communication and global interaction, they must understand the shared concepts and worldviews of members of other cultures of interest. Recognizing cultural norms is a significant challenge, however, because they tend to be hidden. 
Even cultural natives have difficulty defining them because they form the tacit backdrop against which members of a culture interact and behave. We tend to notice them only when they are in conflict with the norms of other cultures. Such differences may cause discomfort or frustration and may lead to flawed interpretations about the intent or motivation of others. If we are to interact successfully on the world stage, we must have resources that will help us recognize norms across cultures. The Metaphor Program will exploit the use of metaphors by different cultures to gain insight into their cultural norms.

Metaphors have been known since Aristotle (Poetics) as poetic or rhetorical devices that are unique, creative instances of language artistry (e.g., The world is a stage). Over the last 30 years, metaphors have been shown to be pervasive in everyday language and reveal how people in a culture define and understand the world around them.

* Metaphors shape how people think about complex topics and can influence beliefs.
* Metaphors can reduce the complexity of meaning associated with a topic by capturing or expressing patterns.
* Metaphors are associated with affect; affect influences behavior.
* Research on metaphors has uncovered inferred meanings and worldviews of particular groups or individuals: characterization of disparities in social issues and contrasting political goals; exposure of inclusion and exclusion of social and political groups; understanding of psychological problems and conflicts.</OtherInformation><Objective><Name/><Description/><Identifier>_7da615e2-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator/><Stakeholder StakeholderTypeType=""><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Reynard</Name><Description>Identify behavioral indicators in virtual worlds and massively multiplayer online games (MMOGs) that are related to the real-world (RW) characteristics of the users</Description><Identifier>_7da6177c-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>9</SequenceIndicator><Stakeholder StakeholderTypeType="Person"><Name>Rita M. Bush</Name><Description>Program Manager</Description></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>IARPA Sirius Program</Name><Description>Related Program</Description></Stakeholder><OtherInformation>Starting from the premise that Real World (RW) characteristics are reflected in Virtual World (VW) behavior, the IARPA Reynard program sought to identify behavioral indicators in VWs and MMOGs that are related to the RW characteristics of the users. Performers in the Reynard program were expected to produce one or more VW behavioral indicators that serve to identify RW attributes of individuals or groups. 
Attributes of interest included the following: gender, approximate age, economic status, educational level, occupation, ideology or "world view," degree of influence, "digital native" vs "digital immigrant," approximate physical geographic location, native language, and culture.</OtherInformation><Objective><Name/><Description/><Identifier>_7da618b2-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator/><Stakeholder StakeholderTypeType=""><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>SCIL</Name><Description>Automatically identify social actions and characteristics of groups by examining the language used by the members of the groups</Description><Identifier>_7da61a4c-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>10</SequenceIndicator><Stakeholder StakeholderTypeType="Person"><Name>Catherine Cotell</Name><Description>Program Manager</Description></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>Metaphor</Name><Description>Related Program</Description></Stakeholder><OtherInformation>The Socio-cultural Content in Language (SCIL) Program intends to explore and develop novel designs, algorithms, methods, techniques and technologies to extend the discovery of the social goals of members of a group by correlating these goals with the language they use.

Language is used to do more than share information; people use it to reflect and establish social and cultural norms. The SCIL Program is attempting to exploit this fact and automatically identify social actions and characteristics of groups by examining the language used by the members of the groups. SCIL researchers are working in multiple languages, and machine translation is not permitted.</OtherInformation><Objective><Name/><Description/><Identifier>_7da61b64-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator/><Stakeholder StakeholderTypeType=""><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Holographic Observation</Name><Description>Enable full-parallax, full-color, high-resolution display of dynamic 3D data</Description><Identifier>_7da61c9a-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>11</SequenceIndicator><Stakeholder StakeholderTypeType="Person"><Name>Karl F. Roenigk</Name><Description>Program Manager</Description></Stakeholder><OtherInformation>Synthetic Holographic Observation (SHO) -- 
SHO is creating technology to enable full-parallax, full-color, high-resolution display of dynamic 3D data without headgear, possessing visually continuous perspectives without artifacts over wide viewing angles. SHO will deliver workstation prototypes with interactive applications to enhance analyst effectiveness on real and fused 3D data of interest. Human factors will be addressed throughout the program. First and foremost, SHO will create technologies that present data in a natural and safe fashion, with fully matched depth cues, a novel feature that is critical to sustained and comfortable viewing without associated fatigue.</OtherInformation><Objective><Name/><Description/><Identifier>_7da61db2-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator/><Stakeholder StakeholderTypeType=""><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal><Goal><Name>Sirius</Name><Description>Create Serious Games to train participants and measure their proficiency in recognizing and mitigating cognitive biases</Description><Identifier>_7da61f4c-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>12</SequenceIndicator><Stakeholder StakeholderTypeType="Person"><Name>Rita M. Bush</Name><Description>Program Manager</Description></Stakeholder><Stakeholder StakeholderTypeType="Organization"><Name>IARPA Reynard Program</Name><Description>Related Program</Description></Stakeholder><OtherInformation>The goal of the Sirius Program is to create Serious Games to train participants and measure their proficiency in recognizing and mitigating the cognitive biases that commonly affect all types of intelligence analysis. The research objective is to experimentally manipulate variables in Virtual Learning Environments (VLE) to determine whether and how such variables might enable player-participant recognition and persistent mitigation of cognitive biases. 
The Program will provide a basis for experimental repeatability and independent validation of effects, and identify critical elements of design for effective analytic training in VLEs. The cognitive biases of interest that will be examined include: (1) Confirmation Bias, (2) Fundamental Attribution Error, (3) Bias Blind Spot, (4) Anchoring Bias, (5) Representativeness Bias, and (6) Projection Bias.</OtherInformation><Objective><Name>Confirmation Bias</Name><Description>Examine Confirmation Bias</Description><Identifier>_7da62096-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>12.1</SequenceIndicator><Stakeholder StakeholderTypeType=""><Name/><Description/></Stakeholder><OtherInformation/></Objective><Objective><Name>Fundamental Attribution Error</Name><Description>Examine the Fundamental Attribution Error</Description><Identifier>_7da621ae-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>12.2</SequenceIndicator><Stakeholder StakeholderTypeType=""><Name/><Description/></Stakeholder><OtherInformation/></Objective><Objective><Name>Bias Blind Spot</Name><Description>Examine the Bias Blind Spot</Description><Identifier>_7da622ee-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>12.3</SequenceIndicator><Stakeholder StakeholderTypeType=""><Name/><Description/></Stakeholder><OtherInformation/></Objective><Objective><Name>Anchoring Bias</Name><Description>Examine Anchoring Bias</Description><Identifier>_7da62438-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>12.4</SequenceIndicator><Stakeholder StakeholderTypeType=""><Name/><Description/></Stakeholder><OtherInformation/></Objective><Objective><Name>Representativeness Bias</Name><Description>Examine Representativeness Bias</Description><Identifier>_7da6255a-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>12.5</SequenceIndicator><Stakeholder StakeholderTypeType=""><Name/><Description/></Stakeholder><OtherInformation/></Objective><Objective><Name>Projection 
Bias</Name><Description>Examine Projection Bias</Description><Identifier>_7da62672-1493-11e5-80c3-278016f00357</Identifier><SequenceIndicator>12.6</SequenceIndicator><Stakeholder StakeholderTypeType=""><Name/><Description/></Stakeholder><OtherInformation/></Objective></Goal></StrategicPlanCore><AdministrativeInformation><PublicationDate>2015-06-16</PublicationDate><Source>http://www.iarpa.gov/index.php/about-iarpa/incisive-analysis</Source><Submitter><GivenName>Owen</GivenName><Surname>Ambur</Surname><PhoneNumber/><EmailAddress>Owen.Ambur@verizon.net</EmailAddress></Submitter></AdministrativeInformation></StrategicPlan>