
Artificial intelligence

NIST aims to cultivate trust in the design, development, use and governance of Artificial Intelligence (AI) technologies and systems in ways that enhance safety and security and improve quality of life. NIST focuses on improving measurement science, technology, standards and related tools — including evaluation and data.

With AI and Machine Learning (ML) changing how society addresses challenges and opportunities, the trustworthiness of AI technologies is critical. Trustworthy AI systems are those demonstrated to be valid and reliable; safe, secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair with harmful bias managed. The agency’s AI goals and activities are driven by its statutory mandates, Presidential Executive Orders and policies, and the needs expressed by U.S. industry, the global research community, other federal agencies, and civil society.

On October 30, 2023, President Biden signed an Executive Order (EO) to build U.S. capacity to evaluate and mitigate the risks of AI systems to ensure safety, security and trust, while promoting an innovative, competitive AI ecosystem that supports workers and protects consumers. Learn more about NIST's responsibilities in the EO and the creation of the U.S. Artificial Intelligence Safety Institute, including the new consortium that is being established.

NIST’s AI goals include:

  1. Conduct fundamental research to advance trustworthy AI technologies.
  2. Apply AI research and innovation across the NIST Laboratory Programs.
  3. Establish benchmarks, data and metrics to evaluate AI technologies.
  4. Lead and participate in development of technical AI standards.
  5. Contribute technical expertise to discussions and development of AI policies.

NIST’s AI efforts fall in several categories:

NIST’s AI portfolio includes fundamental research to advance the development of AI technologies — including software, hardware, architectures and the ways humans interact with AI technology and AI-generated information.

AI approaches are increasingly an essential component in new research. NIST scientists and engineers use various machine learning and AI tools to gain a deeper understanding of and insight into their research. At the same time, NIST laboratory experiences with AI are leading to a better understanding of AI’s capabilities and limitations.

With a long history of working with the community to advance tools, standards and test beds, NIST increasingly is focusing on the sociotechnical evaluation of AI.  

NIST leads and participates in the development of technical standards, including international standards, that promote innovation and public trust in systems that use AI. A broad spectrum of standards for AI data, performance and governance are a priority for the use and creation of trustworthy and responsible AI.

A fact sheet describes NIST's AI programs.

The Research

Projects & Programs

Deep Learning for MRI Reconstruction and Analysis

Ongoing
The project is proceeding in three directions. The first is creating a new MRI reference artifact designed to assess geometric distortion using NIST’s MRI scanner. The artifact will be small enough to fit within the scanner, with sufficient clearance to allow for variation in positioning.
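At its core, geometric-distortion assessment with a reference artifact compares the known positions of markers in the artifact against their apparent positions in the reconstructed image. The sketch below illustrates that comparison on synthetic data; the marker grid, the simulated 1% scale error standing in for gradient nonlinearity, and the summary metrics are all illustrative assumptions, not NIST's actual protocol.

```python
import numpy as np

rng = np.random.default_rng(2)

# Known 3-D marker positions in the reference artifact (mm): a 3x3x3 grid.
ref = np.array([[x, y, z] for x in (0, 20, 40)
                          for y in (0, 20, 40)
                          for z in (0, 20, 40)], dtype=float)

# Simulated measured positions: a 1% scale error plus localization noise
# stands in for scanner-induced geometric distortion (illustrative only).
measured = ref * 1.01 + rng.normal(scale=0.05, size=ref.shape)

# Per-marker displacement gives a simple distortion map; summarize it.
disp = np.linalg.norm(measured - ref, axis=1)
print(f"mean distortion: {disp.mean():.3f} mm, max: {disp.max():.3f} mm")
```

In a real workflow the `measured` coordinates would come from segmenting the markers out of the reconstructed MRI volume rather than from simulation.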

Emerging Hardware for Artificial Intelligence

Ongoing
Here is a brief description of our work, with links to recent papers from our investigations, broadly classified as experimental and modeling. A brief overview of Josephson junction-based bio-inspired computing can be found in our review article. On the experimental side, we have facilities to develop our devices.

Embodied AI and Data Generation for Manufacturing Robotics

Ongoing
Objective: To facilitate the adoption of AI-based robotic approaches in practical manufacturing scenarios by creating test methods that target AI-enabled robotic systems, evaluating the performance of those systems, and creating manufacturing-relevant, AI-centric datasets.

Deep Generative Modeling for Communication Systems Testing and Data Sharing

Completed
After initial investigations with simulated datasets, we plan to develop generative models using real datasets. Potential applications of this work include generation of waveforms for interference testing, characterization of closed-box communication systems, and signal obfuscation for data sharing.
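The data-sharing application described above can be illustrated at toy scale: fit a generative model to real signal samples, then release only samples drawn from the model rather than the raw captures. The sketch below fits a simple multivariate Gaussian to synthetic I/Q-like data; it is a minimal stand-in for the deep generative models the project concerns, and all of the data and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for captured I/Q samples: correlated 2-D points
# (in-phase, quadrature). Real captures would be measured waveforms.
real = rng.multivariate_normal(mean=[0.5, -0.2],
                               cov=[[1.0, 0.6], [0.6, 1.5]],
                               size=1000)

# "Train" the simplest possible generative model: estimate the sample
# mean and covariance of the real data.
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Draw synthetic samples to share in place of the (possibly sensitive) data.
synthetic = rng.multivariate_normal(mean=mu, cov=cov, size=1000)

print("max deviation of synthetic mean:",
      np.abs(synthetic.mean(axis=0) - mu).max())
```

A deep generative model (e.g., a GAN or VAE) replaces the Gaussian when the signal distribution is too complex for closed-form statistics, but the share-samples-not-data pattern is the same.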

JARVIS-ML

Ongoing
JARVIS-ML introduced Classical Force-field Inspired Descriptors (CFID) as a universal framework to represent a material’s chemistry-, structure- and charge-related data. With the help of CFID and JARVIS-DFT data, several high-accuracy classification and regression ML models were developed.

Additional Resources

News

Minimizing Harms and Maximizing the Potential of Generative AI

As generative AI tools like ChatGPT become more commonly used, we must think carefully about the impact on people and society.

U.S. AI Safety Institute Signs Agreements Regarding AI Safety Research, Testing and Evaluation With Anthropic and OpenAI

Department of Commerce Announces New Guidance, Tools 270 Days Following President Biden’s Executive Order on AI

NIST Announces Funding Opportunity for AI-Focused Manufacturing USA Institute

Bias in AI
NIST contributes to the research, standards, and data required to realize the full promise of artificial intelligence (AI) as an enabler of American innovation across industry and economic sectors. Working with the AI community, NIST seeks to identify the technical requirements needed to cultivate trust that AI systems are accurate and reliable, safe and secure, explainable, and free from bias. A key but still insufficiently defined building block of trustworthiness is bias in AI-based products and systems. That bias can be purposeful or inadvertent. By hosting discussions and conducting research, NIST is helping to move us closer to agreement on understanding and measuring bias in AI systems.
Psychology of Interpretable and Explainable AI
The purpose of this pre-recorded webinar is to promote and more broadly share the release of NISTIR 8367, "Psychological Foundations of Explainability and Interpretability in Artificial Intelligence." It is a pre-recorded interview between the paper's author, Dr. David Broniatowski, and Natasha Bansgopaul, a member of the NIST ITL team, who asks key questions to highlight important insights from the paper, which was published in April 2021.

Events

AI Metrology Colloquia Series

Thu, Sep 26 2024, 12:00 - 1:00pm EDT
As a follow-on to the National Academies of Science, Engineering, and Medicine workshop on Assessing and Improving AI