The Defense Advanced Research Projects Agency (DARPA) Defense Sciences Office (DSO) is soliciting innovative research proposals for research and technology development that supports the building, evaluating, and fielding of algorithmic decision-makers that can assume human-off-the-loop decision-making responsibilities in difficult domains, such as combat medical triage. Difficult domains are those where trusted decision-makers disagree; no right answer exists; and uncertainty, time pressure, resource limitations, and conflicting values create significant decision-making challenges. Other examples of difficult domains include first response and disaster relief. Two specific domains have been identified for this effort: small unit triage in austere environments and mass casualty triage.
The Department of Defense (DoD) continues to expand its use of Artificial Intelligence (AI) and computational decision-making systems. DoD missions require making many decisions rapidly in challenging circumstances, and algorithmic decision-making systems could lighten this load on operators. To employ such systems, the DoD needs rigorous, quantifiable, and scalable approaches for building and evaluating them. Current AI evaluation approaches often rely on datasets such as ImageNet for visual object recognition or the General Language Understanding Evaluation (GLUE) benchmark for Natural Language Processing (NLP), which have well-defined ground truth because human consensus exists for the right answer. In addition, most conventional AI development approaches implicitly require human agreement to create such ground-truth data for development, training, and evaluation. However, establishing conventional ground truth in difficult domains is not possible because humans will often disagree significantly about the right answer. Rigorous assessment techniques remain critical for difficult domains; without them, the development and fielding of algorithmic systems in such domains is untenable. In the Moment (ITM) seeks to develop techniques that enable building, evaluating, and fielding trusted algorithmic decision-makers for mission-critical DoD operations where there is no right answer and, consequently, ground truth does not exist.
Specifically, DARPA seeks capabilities that will (1) quantify the alignment of algorithmic decision-makers with key decision-making attributes of trusted humans; (2) incorporate key human decision-maker attributes into more human-aligned, trusted algorithms; (3) enable the evaluation of human-aligned algorithms in difficult domains where humans disagree and there is no right outcome; and (4) develop policy and practice approaches that support the use of human-aligned algorithms in difficult domains. Proposed research should embody innovative approaches that enable revolutionary advances in the current state of the art. Specifically excluded is research that primarily results in evolutionary improvements to the existing state of the art.
Additional information regarding this Broad Agency Announcement can be found at the following link:
All questions regarding this BAA or administrative issues should be emailed to ITM@darpa.mil.
Thank you for your interest in the Defense Sciences Office.