Certain commercial entities, equipment, or materials may be identified in this document in order to describe an experimental procedure or concept adequately. Such identification is not intended to imply recommendation or endorsement by NIST, nor is it intended to imply that the entities, materials, or equipment are necessarily the best available for the purpose. There may be references in this publication to other publications currently under development by NIST in accordance with its assigned statutory responsibilities. The information in this publication, including concepts and methodologies, may be used by federal agencies even before the completion of such companion publications. Thus, until each publication is completed, current requirements, guidelines, and procedures, where they exist, remain operative. For planning and transition purposes, federal agencies may wish to closely follow the development of these new publications by NIST. Organizations are encouraged to review all draft publications during public comment periods and provide feedback to NIST. Many NIST cybersecurity publications, other than the ones noted above, are available at http://csrc.nist.gov/publications.
Bias is neither new nor unique to AI, and it is not possible to achieve zero risk of bias in an AI system. NIST intends to develop methods for increasing assurance, governance, and practice improvements for identifying, understanding, measuring, managing, and reducing bias. To reach this goal, techniques are needed that are flexible, can be applied across contexts regardless of industry, and are easily communicated to different stakeholder groups. To contribute to the growth of this burgeoning topic area, NIST will continue its work in measuring and evaluating computational biases, and seeks to create a hub for evaluating socio-technical factors. This will include development of formal guidance and standards, supporting standards development activities such as workshops and public comment periods for draft documents, and ongoing discussion of these topics with the stakeholder community.
We introduce four principles for explainable artificial intelligence (AI) that comprise fundamental properties for explainable AI systems. We propose that explainable AI systems deliver accompanying evidence or reasons for outcomes and processes; provide explanations that are understandable to individual users; provide explanations that correctly reflect the system's process for generating the output; and that a system only operates under conditions for which it was designed and when it reaches sufficient confidence in its output. We term these four principles explanation, meaningful, explanation accuracy, and knowledge limits, respectively. Through significant stakeholder engagement, these four principles were developed to encompass the multidisciplinary nature of explainable AI, including the fields of computer science, engineering, and psychology. Because one-size-fits-all explanations do not exist, different users will require different types of explanations. We present five categories of explanation and summarize theories of explainable AI. We give an overview of the algorithms in the field that cover the major classes of explainable algorithms. As a baseline comparison, we assess how well explanations provided by people follow our four principles. This assessment provides insight into the challenges of designing explainable AI systems.
The public safety community is in the process of transitioning from the use of land mobile radios (LMR) to a technology ecosystem including a variety of broadband data sharing platforms. Successful deployment and adoption of new communication technology relies on efficient and effective user interfaces based on a clear understanding of first responder needs, requirements, and contexts of use. This project employs a two-phased data collection approach for an in-depth look at the population of first responders, along with their work environment, their tasks, and their communication needs. This report documents the data collection of Phase 1 and the resulting data analysis. Phase 1, the qualitative component, focuses on interviews with approximately 200 first responders (law enforcement, fire fighters, emergency medical services, communications/dispatch) across the country. The results include user needs and requirements expressed by first responders. These needs and requirements have been organized into five categories of technology opportunities. Further analysis identified six user-centered design guidelines for technology development. Finally, the importance of the role that trust plays in first responders' adoption and use of communication technology is presented.
In the 2006 U.S. election, it was estimated that over 66 million people would be voting on direct recording electronic (DRE) systems in 34% of the nation's counties [8]. Although these computer-based voting systems have been widely adopted, they have not been empirically proven to be more usable than their predecessors. The series of studies reported here compares usability data from a DRE with those from more traditional voting technologies (paper ballots, punch cards, and lever machines). Results indicate that there was little difference between the DRE and these older methods in efficiency or effectiveness. However, in terms of user satisfaction, the DRE was significantly better than the older methods. Paper ballots also performed well, but participants were much more satisfied with their experiences voting on the DRE. The disconnect between subjective and objective usability has potential policy ramifications.
The public safety community has a unique opportunity to improve communication technology for incident response with the creation of the national public safety broadband network (NPSBN). Understanding the problems currently being experienced by first responders with communication technology, as well as first responders' communication technology requests, provides the basis for addressing and developing solutions to improve public safety communication. The National Institute of Standards and Technology Public Safety Communications Research usability team has conducted in-depth interviews with approximately 200 first responders representing 13 states in eight Federal Emergency Management Agency (FEMA) regions. The population sample includes urban, suburban, and rural locations, and various levels in the chain of command within the fire, law enforcement, emergency medical services, and communications center disciplines. This Volume 2 report is the second in a series of reports documenting the findings. A qualitative analysis of the transcribed interview data revealed thousands of problems currently being experienced by first responders and new functionality requests. Further analysis of the current problems identified 25 distinct categories, with 1 729 quotes categorized across the four domains. The new functionality request data analysis resulted in 1 143 categorized quotes belonging to 18 categories. From the problems and requested functionality data, three major themes across the public safety landscape were identified, in addition to discipline-specific topics that need to be addressed as future communication technology for first responders develops.
Abstract: Extensive research has been performed to examine the effectiveness of phishing defenses, but much of this research was performed in laboratory settings. In contrast, this work presents 4.5 years of workplace-situated, embedded phishing email training exercise data, focusing on the last three phishing exercises with participant feedback. The sample was an operating unit consisting of approximately 70 staff members within a U.S. government research institution. A multiple methods assessment approach revealed that the individual's work context is the lens through which email cues are interpreted. Not only do clickers and non-clickers attend to different cues, they interpret the same cues differently depending on the alignment of the user's work context and the premise of the phishing email. Clickers were concerned over consequences arising from not clicking, such as failing to be responsive. In contrast, non-clickers were concerned with consequences from clicking, such as downloading malware. This finding firmly identifies the alignment of user context and the phishing attack premise as a significant explanatory factor in phishing susceptibility. We present additional findings that have actionable operational security implications. The long-term, embedded, and ecologically valid conditions surrounding these phishing exercises provided the crucial elements necessary for these findings to surface and be confirmed.
Abstract. Given the numerous constraints of onscreen keyboards, such as smaller keys and lack of tactile feedback, remembering and typing long, complex passwords, an already burdensome task on desktop computing systems, becomes nearly unbearable on small mobile touchscreens. Complex passwords require numerous screen depth changes and are problematic both motorically and cognitively. Here we present baseline data on device- and age-dependent differences in human performance with complex passwords, providing a valuable starting dataset to warn that simply porting password requirements from one platform to another (i.e., desktop to mobile) without considering device constraints may be unwise.