Just a Standard Blog
When social media platforms were first created, some companies had lofty goals of bringing people together.
To some extent, they succeeded. Social media has allowed people to connect. But it has also led to hate speech, violence, bullying, self-esteem issues in teenagers and other harms.
Decades into the social media era, it’s clear that new technologies come with both upsides and downsides.
Now, with the rapid growth of tools such as ChatGPT, Bing Chat, Bard and others, we have a chance to be more intentional at the outset. These tools are called generative artificial intelligence (AI) because they respond to user input or requests, such as questions, by predicting and generating content through the use of deep learning algorithms.
If you’ve used one of these tools to help you plan a trip or choose a dinner recipe, you may have found that moment just as exciting as discovering a new social media platform.
But maybe you also became concerned about the societal implications of this technology, such as job insecurity or misinformation. We have to be mindful and realistic about both the tremendous potential and frightening possibilities of this technology.
That’s why at NIST, we’re working with the technology community to put safeguards in place around all types of AI — not just the programs that generate text and images — so we can be more thoughtful about the impact AI will have on our society.
It may seem like generative AI came out of nowhere. But at NIST, we’ve been studying and thinking about AI for years. I’ve been studying machine learning for more than 20 years, with a specific focus on trustworthy AI — techniques to make sure that AI does not contribute to negative impacts on people and society.
More than a year ago, NIST started working with the AI community on a voluntary AI Risk Management Framework to help technology companies think through the ramifications of the products they are creating or launching. Our goal is to help society benefit from AI technologies, while protecting people from its harms.
We worked with tech companies, consumers, advocacy groups, legal scholars, sociologists and a host of other experts to think about the potential negative consequences of AI and how we can address them now — while the technology is in its relative infancy.
Thinking about how to test an AI system for not only whether it works, but also the effect it might have on individuals, communities and society — known as a socio-technical approach — is new for NIST and the research community.
We published the framework in January, and now we are working on benchmarks and testing approaches to measure the trustworthiness of AI technologies.
Since then, we’ve convened working groups to produce additional guidelines on a variety of generative AI-related areas. The guidelines we’re developing cover topics such as testing for language models, how companies should report a cyber incident, and how the public can know if a photo or video online is authentic.
For example, we know from the emerging trend of deepfake photos and videos that we won’t win a “cat and mouse” game with that content. What if, instead, we had some sort of authenticity marking, so you knew a photo or video you found was legitimate? It might operate in a similar way to how some social media companies “verify” a well-known person’s account. If we could somehow mark authenticity and get people into the habit of looking for that authenticity, we could potentially help minimize the impact of deepfakes.
Additionally, testing for the large language models these tools use to answer your questions is a significant challenge. If a tech company says, “Trust us, we tested this language model,” how do we know they really tested it to the best possible standards? We want to work toward an industry norm of accepted best practices that every platform can use to test its language models. If everyone follows the same testing protocols, the public will have more confidence in that language model’s output, and the tech community can collectively figure out where the gaps and challenges are in their language models.
These are the types of issues we’re trying to get ahead of.
We know this technology is going to continue to evolve faster than policy can, so we’re working to come up with a comprehensive set of guidelines that can be flexible enough to evolve as the technology changes.
The technology companies creating AI products are a partner in this process, and many of them have already agreed to abide by our guidelines voluntarily. These companies have an interest in their products being trustworthy (and being seen as such by the public), so they’ve been willing partners in these efforts.
Lord Kelvin said that if you cannot measure it, you cannot improve it. That’s how we are approaching the next phase of AI here at NIST.
Earlier in my NIST career, I developed an approach for measuring fingerprint image quality, which was selected as an international standard. While a technically challenging problem, I approached it as just that — a technological problem with a technical solution.
I have learned that we can’t just look at AI systems through a technical or computational lens. Generative AI, in particular, is a complex mix of data and computational algorithms, along with the humans and the environment that they all interact within. That’s why we have to look at the positive or negative consequences and risks of these systems.
We should not just build something because we can — something technologists sometimes do. We have to think about the impacts we may be creating. That’s how we’ve studied AI so far and how we will continue to look at AI and its potential positive or negative effects. We’re asking not just, does this AI tool work accurately? We also want to know how it might impact people. How can everyone benefit? How can we be sure everyone is part of the solution? These questions make our work a lot more human-centered.
What excites me about AI is its enormous beneficial impact on society — it can make our lives better when it works for everyone. When it comes to generative AI, we have a chance to do better than we did with social media. We can really think through the ramifications and put people at the center of these advancements.
Right now, generative AI is mostly a supplement to human judgment or a fun way to write jokes. In the future, it may help your doctor with diagnoses or have similar usefulness to people in their daily lives.
There’s no need to fear the AI present (and future). But it’s OK to approach it with a healthy dose of caution. My colleagues at NIST and I are working to make sure the technology works for people, not the other way around.
Google translation of this comment:
I really like drawing and making poetry with AI, Bing amuses me a lot as well as helping me do the things my mind imagines. I'm a Pisces and I think that's why we are compatible in that regard.
Yes, this is needed. Infact, we need a system o tool which can differentiate between ai and hi(human intelligence). Otherwise, there would be a big chaos.
Me gusta mucho dibujar y hacer poesía con IA , Bing me divierte mucho además de ayudarme a realizar las cosas que mi mente imagina. Soy piscis y creo que por eso somos compatibles en ese aspecto.